Mark your calendars, Canon fans: the camera giant has posted a teaser on social media that shows not one but two new V series models, set to be unveiled in full on Wednesday, March 26 at 6pm CET (10am PT / 1pm ET / 5pm GMT, which is 4am AEST on March 27 in Australia).
The teaser comes with the caption "exciting things are coming" and promises that the new cameras are going to be "the perfect new additions to your kit bag". As part of the V series, they'll be aimed at vloggers and creators.
It's not too difficult to identify the camera on the left of the image, because we've seen it before: it's the Canon PowerShot V1, which has already launched in Japan. It features a 16-50mm F2.8-4.5 lens with 3.1x optical zoom, a new stabilized 22.3MP sensor, and support for 4K / 30p video or 4K / 60p with a 1.4x crop. It's like a bigger sibling to the trending PowerShot G7X Mark III.
What we don't know yet is the price outside of Japan. It's on sale for 148,500 Japanese Yen there, which works out as about $1,000 / £800 / AU$1,600 – but Canon is unlikely to use a straight currency conversion. We should get accurate global pricing next Wednesday.
One launch, two cameras
A post shared by Canon UK and Ireland (@canonuk)
The second camera, on the right, is more of a mystery – though not a complete surprise. There have been rumors that the PowerShot V1 wouldn't be the only V series model to launch this year, though to date we've not heard much about a second device.
According to Canon Rumors, this second model is set to be the Canon EOS R50 V – an entry-level mirrorless camera for vloggers that could retail for something in the region of $750 / £700 / AU$1,300.
According to Canon Rumors' unconfirmed information, the Canon EOS R50 V will come with a new lens of its own, and (as you can guess from the name) it'll be a video-centric take on the APS-C Canon EOS R50. We may see some refined ergonomics, though we don't have many other details.
All will be revealed next Wednesday, and we will of course bring you the news of the announcements as they're made. Given that the social media teaser mentions Canon's YouTube channel, it seems the launch will be livestreamed online too.
Lionsgate has released a new trailer for its forthcoming Ballerina movie, and it indicates that Keanu Reeves' John Wick will play a far bigger role in the story than we realized.
Ballerina is the second spin-off project set in the John Wick franchise and will gracefully spin its way into theaters in early June. And, with one of 2025's most anticipated new movies' release drawing ever closer, Lionsgate has whet our appetite for its arrival with another official trailer – one that surprisingly puts Reeves' hitman front and center.
Ballerina's first trailer had everyone asking the same question about Reeves' appearance. Now, its latest teaser confirms Wick will be sent after Ana de Armas' Eve Macarro, the film's protagonist who's on a one-woman crusade to track down and kill the individuals who murdered her family. Her quest for vengeance, though, threatens to up-end the established order of the hitman-criminal underworld, which is why Wick is enlisted to use his special talents to stop Macarro.
Reeves is a talented and beloved actor, while Wick is one of his most famous roles among cinephiles. It's no great surprise, then, that Lionsgate is leaning heavily on Wick's appearance in Ballerina to help get bums on seats.
As much as Ballerina's official trailer makes it seem like he'll have a big role to play in its story, though, there's nothing to say that he actually will.
The footage in said teaser could be part of a 10- to 15-minute segment that he shows up in. So, while Wick doesn't appear as if he'll just have a cameo role where he simply crosses paths with Macarro, I'd be amazed if Reeves is part of proceedings for anything longer than an extended sequence in Ballerina's middle act. After all, this is de Armas' movie, so Lionsgate won't want to overshadow Macarro in her own movie... right?
Is Ballerina a prequel movie? Where does it sit on the John Wick timeline?
Ballerina takes place alongside the third John Wick flick (Image credit: Lionsgate)
Ballerina's story actually runs parallel to events that take place in John Wick: Chapter 3 – Parabellum.
It's not a prequel in the traditional sense, then – i.e., one that precedes the action movie series' first film. However, it plays out before John Wick: Chapter 4, so it's technically a prequel to Reeves' most recent outing as the famous hitman.
Who is part of the Ballerina movie cast?
The Director is one of four people that John Wick fans will recognize in Ballerina (Image credit: Lionsgate)
De Armas and Reeves notwithstanding, there'll be a few familiar faces, plus some new ones, to look out for in Ballerina.
Ian McShane and the late Lance Reddick reprise their roles as Winston Scott and Charon from the first four John Wick films. Ballerina will mark the final-ever role of Reddick's career, too, following his shock death in March 2023.
Joining the aforementioned quartet on the cast roster are Anjelica Huston, Gabriel Byrne, Catalina Sandino Moreno, and Norman Reedus. Huston returns as The Director, aka the leader of the New York branch of the Ruska Roma crime enterprise, which trained Macarro to become an assassin when she was just a child. The Director is also Wick's adoptive mother, which is why she enlists his help to stop Macarro.
The Walking Dead alumnus Reedus, meanwhile, will play someone called Pine. It's unclear who Byrne and Moreno will play.
When will Ballerina be released in theaters?
Norman Reedus will play a mysterious character named Pine in Ballerina (Image credit: Lionsgate)
Ballerina will dance into cinemas on June 6 in the US and Australia. UK viewers can catch it in their local multiplex or independent theater a day later on June 7.
As our Oppo Reno 12 Pro review details, a telephoto lens is a rare and valuable addition for a midrange handset. The Oppo Reno 13 Pro comes equipped with a 3.5x telephoto camera, nearly doubling the optical zoom reach of its predecessor, the Oppo Reno 12 Pro.
As far as we’re aware, the Reno 13 Pro’s 3.5x snapper gives Oppo’s latest leading mid-ranger more optical zoom reach than any other phone of its price.
Telephoto cameras are becoming more common on midrange handsets: we recently saw the launch of the Nothing Phone 3a and Phone 3a Pro, which respectively carry 2x and 3x telephoto cameras.
The 50MP 3.5x telephoto camera is joined by a 50MP main camera and 8MP ultra-wide camera, in an arrangement that follows Oppo’s track record of taking – let’s call it inspiration – from a certain Cupertino-based phone maker.
Design lineage aside, the Oppo Reno 13 Pro is also rated to IP69, and Oppo says the phone is resistant to being submerged in fresh water as well as heated jets of water – it’s as waterproof as a phone can get right now.
With these two features – an upgraded camera system and the best possible water resistance – Oppo has done the natural thing and devised a way to make the most of both at the same time. The Reno 13 Pro comes with a new underwater camera mode that utilizes the volume rocker to operate the camera when submerged.
This new mode allows users to take photos with the volume-up button, and start and stop video with the volume-down button, which I must admit is a reasonably intuitive control scheme – even if the idea of hopping in the pool, phone in hand, does still freak me out a bit.
We’ve seen phones with IP69 water resistance before, but it’s very rare that a phone maker actively encourages its users to take a dip with their device – let alone for the sake of a photo shoot.
The Oppo Reno 13 Pro, and its two siblings the Oppo Reno 13 and Reno 13F, have not yet received a release date or pricing – we’ll update this article when the details are confirmed. For now, be sure to check out our guide to the best Oppo phones and let us know what you think about the Reno 13 Pro in the comments.
Though we’re still in the era of the iPhone 16 and haven’t yet had confirmation of the iPhone 17 series, we’re already hearing plenty of rumors about the iPhone 18 series.
Indeed, it seems that Apple fans and analysts just can’t help looking ahead – and given that component orders and manufacturing decisions are made well ahead of time, some of these rumors could hold weight when we do finally get the next even-numbered iPhone generation.
Below, we’ve rounded up the five biggest rumors so far about the iPhone 18 series – keep in mind though that we expect some of these to only apply to the iPhone 18 Pro and Pro Max.
A variable aperture camera
The iPhone 16 Pro (Image credit: Future / Lance Ulanoff)
The iPhone 18 series could bring a variable aperture camera to Apple’s mobile platform for the first time, allowing users to physically control how much light the sensor is exposed to.
Current iPhones offer an approximation of aperture control by controlling shutter speed and processing the image with software, but a true variable aperture allows much more flexibility in light intake and depth of field.
As Pocket-Lint reports, notable Apple tipster Ming-Chi Kuo suggested in late 2024 that Apple is set to order in a large supply of variable aperture camera components, in time for production in 2026.
Variable aperture cameras are currently limited to select Android phones, like the Huawei Mate XT Ultimate and the discontinued Xiaomi 14 Ultra. Including one with the iPhone 18 would track with the renewed focus on hardware photography Apple demonstrated with the iPhone 16’s new Camera Control button.
A new Pro camera sensor
The iPhone 16 Pro (Image credit: Future / Lance Ulanoff)
Another iPhone 18 camera rumor concerns the adoption of a new sensor for the main camera of the two Pro models.
According to noted leaker Jukanlosreve, Apple will use a new triple-layered Samsung sensor for the iPhone 18 Pro’s main camera, utilizing a technology called "PD-TR Logic".
As MacRumors reports, this new sensor should bring myriad benefits, improving the camera’s response time, dynamic range, and noise performance.
Under-display Face ID
(Image credit: Shutterstock)
All the way back in May 2024, industry analyst Ross Young suggested that Apple may start implementing under-display Face ID in 2026.
Young had previously predicted that the tech necessary for Face ID would be placed under the display of the iPhone 17 series, though this was later revised.
As mentioned, we’d expect the iPhone 18 series to arrive in 2026, so it tracks that the new Face ID tech would arrive with it. As with other new hardware features, it’s possible Apple could reserve under-display Face ID for the iPhone 18 Pro and iPhone 18 Pro Max.
It’s not yet clear whether Apple will also place the selfie camera under the display, or just the infrared sensors that make Face ID work. As it stands now, very few phones have under-display cameras, though there are some high-profile examples, like the Samsung Galaxy Z Fold 6.
3nm chipsets
(Image credit: Apple)
Time to get a bit more technical. The latest rumors, as reported by WCCFTech, suggest that the A20 and A20 Pro chipsets that will likely power the iPhone 18 series will be based on the 3nm process, rather than the nascent 2nm process currently being pioneered by chipset manufacturing powerhouse TSMC.
What does that all mean? Well, processors are measured by the minimum size of individual transistors, which are now so small as to be measured in nanometers. As such, as the measurements get smaller, more processing power can fit into the same space. The likes of '3nm' and '2nm' are mostly marketing terms rather than references to a specific size, but the move from 3nm to 2nm does imply a jump in performance.
The iPhone 15 Pro and iPhone 15 Pro Max were the first commercially available phones to sport a 3nm chipset – but it seems like Apple may be in less of a rush to keep up with the 2nm process.
LTPO+ displays – like LTPO, but more so
The iPhone 16 (Image credit: Future)
Back to Jukanlosreve for this one – the noted leaker and tipster suggested in late 2024 that the iPhone 18 Pro series would come fitted with LTPO+ display panels.
In a post on X (formerly Twitter), Jukanlosreve highlighted that the new display tech could bring increased “speed” (we’re not sure if that means refresh rate or response time) and power efficiency.
Frustratingly, there’s no accompanying hint that the base iPhone 18 will get a high refresh rate LTPO screen – though this may have already been addressed by 2026, if the latest iPhone 17 rumors are to be believed.
Do you have any hopes for the iPhone 18 series yet? Or are you focused on the rumored iPhone 17 series? Let us know in the comments below.
Even the next-door neighbor's dog knows not to click a link in an unsolicited email, but how many of us really understand how to use AI safely?
In short, shadow AI is the use of unapproved AI in an organization, similar to shadow IT, which focuses on IT devices and services. Where shadow IT might mean an employee using a personal email address or laptop, shadow AI refers to the use of AI technology which hasn’t been approved for a business use case, particularly where it may constitute a risk to the business.
These risks include the leakage of sensitive or proprietary data, which is a common issue when employees upload documents to an AI service such as ChatGPT, for example, and its contents become available to users outside of the company. But it could also lead to serious data quality problems where incorrect information is retrieved from an unapproved AI source which may then lead to bad business decisions.
Generative AI is well known for its potential to hallucinate, giving plausible but ultimately incorrect information. Take Google’s AI summaries as an example of search results getting things wrong. A reader with the contextual knowledge may recognize that a summary is wrong, but to the uninitiated, this won’t be the case.
Analysts at Datactics have seen, on several occasions, a leading AI tool produce a fictitious LEI (legal entity identifier, required for regulatory reporting) and fictitious annual revenue figures for a corporate entity. The potential consequences of this kind of hallucination should be obvious, but because of the plausibility of the ‘bad data’, it is very easy for it to slip into the system and lead to further unexpected downstream problems, highlighting the need for robust data quality controls and data provenance.
There are technical, economic and cultural reasons for the rise of shadow AI, from cultural normalization and accessibility to pressure to perform, information overload and the aggressive presence of AI everywhere. There is very little resistance to these drivers, and most organizations don’t have comprehensive AI governance in place, or AI awareness training.
What is AI governance, and doesn’t this solve the problem?
Part of the remit of AI governance is to address the problem of shadow AI. There’s a plethora of governance policy frameworks and tech platforms that can help with this, and perhaps it is this governance or risk mitigation that is partly to blame for slowing down the adoption of AI as businesses cautiously adopt third-party solutions.
But in the race between AI capability and AI governance, AI capability is accelerating, showing no signs of fatigue, and its benefits are obvious to end users. Meanwhile, by comparison, AI governance is still putting on its running shoes and users aren’t always clear on what does and does not constitute risk.
AI governance covers a broad spectrum, from the ad-hoc mandate of “please do not upload corporate or client information to ” to governance tools and strict policies prohibiting AI usage without prior approval. Many vendors now offer AI governance tools and frameworks to enable this, and the trick is to implement something that provides a high degree of protection without stifling innovation or productivity, depending on the size and type of the business.
How to address the problem of shadow AI?
Using the dimensions of people, processes and technology, we can easily see a holistic way to address the issue of shadow AI that minimizes risks to organizations.
Many companies are now addressing the information leakage issue by implementing a technical architecture called RAG (Retrieval Augmented Generation), where a language model, either large or small, is augmented with proprietary data in a way that keeps that data securely within the organization, with the added benefit of reducing AI hallucinations.
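As a rough illustration of the RAG pattern described above – the documents, query, and word-overlap scoring below are invented for the example, not any vendor's implementation – a pipeline retrieves the most relevant internal documents and folds them into the prompt before it ever reaches the model:

```python
# Minimal RAG sketch: retrieve relevant in-house documents, then prepend
# them to the prompt so the model answers from proprietary data that
# never leaves the organization. Real systems use vector embeddings
# rather than this toy word-overlap score.

def score(query: str, doc: str) -> int:
    """Naive relevance score: count of shared lowercase words."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Return the k documents most relevant to the query."""
    return sorted(corpus, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query: str, corpus: list[str]) -> str:
    """Augment the user query with retrieved context for the LLM."""
    context = "\n".join(retrieve(query, corpus))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

# Hypothetical internal documents, for illustration only
corpus = [
    "Q3 revenue for the EMEA region was 4.2m GBP.",
    "The office cafeteria closes at 3pm on Fridays.",
    "Headcount in EMEA grew 12 percent in Q3.",
]
prompt = build_prompt("What was Q3 revenue in EMEA?", corpus)
```

Because the retrieval step runs against data the organization controls, the sensitive figures are injected at query time rather than uploaded to a third-party service.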
Specific to shadow AI, businesses can implement controls and detection – usually existing cybersecurity controls which can be easily extended, for example firewall or proxy server controls, or single sign-on for third-party AI services. Furthermore, if these controls can be integrated with governance, then a much clearer picture of risk exposure can be achieved.
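To sketch what extending an existing proxy log review might look like – the log format, domain list, and allowlist here are assumptions for illustration, not a real proxy schema or a definitive list of AI services:

```python
# Hedged sketch: flag traffic to public AI services that haven't been
# approved. A real deployment would parse the organization's own proxy
# log schema and maintain the allowlist via its governance process.

AI_DOMAINS = {"chat.openai.com", "gemini.google.com", "claude.ai"}
APPROVED = {"claude.ai"}  # hypothetical: sanctioned via single sign-on

def flag_shadow_ai(log_lines: list[str]) -> list[str]:
    """Return users who reached an AI domain that is not approved."""
    flagged = []
    for line in log_lines:
        user, domain = line.split()  # assumed format: "<user> <domain>"
        if domain in AI_DOMAINS and domain not in APPROVED:
            flagged.append(user)
    return flagged

logs = [
    "alice chat.openai.com",
    "bob intranet.example.com",
    "carol claude.ai",
]
print(flag_shadow_ai(logs))  # → ['alice']
```

Feeding results like these into a governance dashboard is one way to build the clearer picture of risk exposure mentioned above.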
Perhaps most importantly, there needs to be greater cultural awareness of the risks of AI. In the same way that we have cybersecurity training for all staff, we need to strive for a reality where even the next-door neighbor's dog understands that AI is prevalent across a wide range of software, and the potential risks that come with it. Staff should understand the risks of divulging sensitive data to AI services, the possibility of hallucination and censorship in AI responses, and the importance of treating AI responses as data that informs an answer rather than as an infallible answer.
Data quality awareness is crucial. The information that goes into an AI model and the information that comes out must be validated and this understanding is something we need to adopt sooner rather than later.
We've featured the best IT infrastructure management service.
This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro
Here we are, folks. The final episode of Severance season 2 is almost upon us and, after waiting almost three years for the Apple TV Original to return, it's surreal to think its sophomore outing is almost over.
With the Apple TV+ series growing in popularity since season 2 debuted on January 17, I suspect you're already preparing to be seated as soon as its 10th and final chapter drops, too. Spoilers will circulate online soon after Severance's latest entry airs, so you'll want to watch it as soon as possible to avoid the possibility of someone ruining everything for you.
To that end, you'll want to know when season 2 episode 10 will be released. Below, I'll explain what date and time you can stream it in the US, UK, and Australia. That way, you'll know when to tune into Apple TV+ and catch it right away.
When will Severance season 2 episode 10 be available to stream in the US?
What'll happen to Helly R in Severance's season 2 finale? (Image credit: Apple TV+)
The Severance season 2 finale will debut on Apple TV+, aka one of the world's best streaming services, at 9pm PT on Thursday, March 20. For those of you who live in the US Eastern time zone, that's 12am ET on Friday, March 21.
What is the release time for Severance season 2's final episode in the UK?
What will Harmony Cobel do in season 2 episode 10? (Image credit: Apple TV+)
The final installment of season 2 will make its Apple TV+ debut in the UK at 4am GMT on Friday, March 21. That's because the clocks went forward in the US a few weeks ago, so the UK is one hour closer to the US Eastern and Pacific time zones.
Nevertheless, that means it'll be an early morning start for British fans who want to catch episode 10, titled 'Cold Harbor', before work. That might make you feel even more tired at the end of a long working week, but at least you won't have any big surprises ruined for you before you watch it on Friday evening!
When can I watch the Severance season 2 finale in Australia?
Will Gemma/Ms. Casey escape and/or be rescued in the season 2 finale? (Image credit: Apple TV+)
Season 2 episode 10 of one of the best Apple TV+ shows is set to arrive in Australia at 3pm AEDT on Friday, March 21. Reckon you can leave work early to stream at home asap? If you do, don't tell anyone that I told you to do that...
What is the runtime for Severance season 2 episode 10?
Will we see Dylan and Irving in this season's 10th and final chapter? (Image credit: Apple TV+)
You'll want to set aside some time to watch 'Cold Harbor'. At 76 minutes, it'll be the longest episode in Severance's history – so, if you were expecting another entry around the 40 to 60-minute mark, you'll be in for something of a surprise!
When will the next episode of Severance launch on Apple TV+?
Is Seth Milchick going to rebel against Lumon as well? (Image credit: Apple TV+)
The short answer is: not for a long time. As I've stated throughout this article, episode 10 is season 2's final installment. I imagine your 'innie' and 'outie' are saying "boo" and "hiss" to that.
But, fret not, because Severance season 3 is on the way. Apple hasn't officially renewed one of its flagship TV shows for a third season, but director/executive producer Ben Stiller has confirmed that work is underway on season 3's scripts. Dichen Lachman, who plays Gemma/Ms. Casey, exclusively told me that she doesn't know when filming on season 3 will begin, though. So, as I said at the start of this section, it could be a while before the incredibly successful sci-fi mystery thriller is back on our screens.
Oracle has become the latest tech giant to launch a platform for users to build and customize their own AI agents.
The company says its new AI Agent Studio will offer an easy way for organizations to create, manage and deploy AI agents across their business, tailored exactly how they need them to be.
Users will be able to build new AI agents completely from scratch, or extend pre-packaged agents which can be evolved and customized.
Oracle AI Agent Studio
Part of the Oracle Fusion Cloud app suite, AI Agent Studio will be available at no extra cost to users, who will benefit from exactly the same tools Oracle uses to build its own in-house agents.
This includes Agent template libraries, which allow users to create agents with pre-built templates paired with natural language prompts, as well as Agent team orchestration, which lets users set up multiple agents to work alongside human workers on complex tasks through pre-configured templates.
Any agents designed in AI Agent Studio will also integrate with Oracle Fusion Applications, the company says, meaning they can collaborate with third-party agents to complete even complex and multi-step processes.
There's also a choice of LLMs available, meaning users have access to a variety of options to address specific business needs – including LLMs specifically optimized for Oracle Fusion Applications, such as Llama and Cohere, or other external industry-specific LLMs for specialized use cases.
“AI agents are the next phase of evolution in enterprise applications and just like with existing applications, business leaders need the flexibility to create specific functionality to address their unique and evolving business needs,” said Steve Miranda, executive vice president of applications, Oracle.
“Our AI Agent Studio builds on the 50+ AI agents we have already introduced and gives our customers and partners the flexibility to easily create and manage their own AI agents. With the agents already embedded in Fusion Applications and our new AI Agent Studio, customers will be able to further extend automation and ultimately, achieve more while spending less.”
I know London-based turntable specialist Vertere Acoustics from my time at TechRadar's sister publication, What Hi-Fi? (the DG-1 S/Magneto is a rare and special deck indeed) and today, March 20, the company is launching something new. It's called the DG X, and it is the latest in the firm's Dynamic Groove concept range.
Turntable tinkering – and indeed whole system compatibility – is part and parcel of the tangible vinyl experience, but Vertere can help if you'd like. The DG X can be purchased with or without a specially optimized new Groove Runner X (GRX) tonearm and Sabre Lite cartridge.
I'll get straight to pricing. Deep breath, everyone: Vertere's DG X Sabre Lite package, including the DG X, Groove Runner X tonearm, and Sabre Lite cartridge, is priced at £4,150 – so although US and Australian pricing is not yet official, that's around $5,390 or AU$8,499, before any additional shipping and duties.
Get into Vertere's groove – there's so much to love
In this iteration, you're getting Vertere's upgraded spindle thrust motor drive, precision machined bearings, and a sophisticated triple-layer "cast illuminated plinth structure," which aims to further eliminate unwanted resonance.
Its modular design also means it can be upgraded with newer cartridges and cables (and even tonearms) in time, so it can evolve just as your love of vinyl and ever-changing musical explorations evolve. Also, the detachable tonearm is assembled with secure transit screws, meaning you'll be able to transport it with (relative) confidence.
Vertere tells me the DG X is "the pinnacle of Vertere’s innovative expertise" thanks to its improved, easier-to-understand user interface – in fact, with the setup promising to take less than 15 minutes, the company is aiming squarely at the beginner vinyl enthusiast as well as the seasoned analog audiophile here.
How good is it under rigorous test conditions? We're working on it, so feel free to check back for a fully star-rated TechRadar review very soon.
Financial debt, if left unchecked, can spiral out of control quickly. Simply making the minimum payments on a credit card or avoiding debt collectors doesn’t solve the root problem. Instead, interest continues to build, compounding the issue over time.
Similarly, in the world of IT management, a concept called “security debt” operates much the same way. Security debt refers to software flaws that remain unresolved for longer than a year. Much like financial debt, the longer these vulnerabilities go unaddressed, the more they accumulate, leaving businesses exposed to significant risk.
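The one-year aging rule in that definition is simple to express in code. As a sketch – the flaw records and field names below are hypothetical, and real scanners have their own schemas:

```python
# A flaw counts as security debt once it has been open for more than a
# year; "critical" security debt is the high-severity subset.
from datetime import date

def security_debt(flaws: list[dict], today: date) -> list[dict]:
    """Return flaws unresolved for longer than 365 days."""
    return [f for f in flaws if (today - f["opened"]).days > 365]

# Illustrative records, not from any particular scanner
flaws = [
    {"id": "FLAW-A", "severity": "high", "opened": date(2023, 6, 1)},
    {"id": "FLAW-B", "severity": "low", "opened": date(2025, 1, 10)},
]
debt = security_debt(flaws, today=date(2025, 3, 1))
critical = [f for f in debt if f["severity"] == "high"]
```

Tracking these two buckets separately mirrors the distinction the research below draws between overall security debt and its high-severity, 'critical' portion.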
Research reveals 74% of organizations have some level of security debt, with half grappling with high-severity vulnerabilities – commonly referred to as ‘critical’ security debt. Despite these concerning statistics, organizations can take actionable steps to reduce their security debt.
Understanding the roots of security debt
To effectively reduce security debt, it’s important to first understand how it builds up. One major factor is a lack of prioritization, where organizations fail to focus on remediating the most critical vulnerabilities.
The age and size of applications also significantly contribute to security debt. Studies show a strong correlation between the age of an application and the likelihood that flaws will go unresolved. Nearly two fifths of all critical security debt is found in older applications (over 3.4 years old), meaning the older the application, the higher the chances of flaws accumulating.
Application size compounds the issue. As codebases grow, so does the volume of unresolved flaws. Large applications often carry the highest proportion of security debt, with 40% having unresolved flaws and 47% dealing with critical debt. While smaller or newer applications aren’t immune to security debt, older and larger monolithic systems typically present the greatest challenges.
Another contributing factor is the use of third-party, open source code. Vulnerabilities in third-party code are discovered on an ongoing basis, so unless these libraries are updated regularly, applications face an increasing risk. Additionally, the rise of generative AI in coding exacerbates the issue. Gartner predicts that by 2028, 75% of enterprise developers will use AI code assistants.
While AI-generated code isn’t inherently less secure than human-written code, it often carries risks. Many Large Language Models (LLMs) used to generate code are trained on insecure open-source projects, resulting in vulnerabilities if not properly vetted. An over-reliance on AI without proper oversight can accelerate the accumulation of security debt.
It’s also worth noting that security debt isn’t necessarily the result of poor decision-making or mismanagement. Time and resource constraints often force developers to make difficult choices about which flaws to address and which to defer.
Harnessing AI to combat security debt
Fortunately, advancements in AI provide development teams with powerful tools to reduce security debt. AI-driven solutions, particularly those trained on curated security datasets, excel at identifying and remediating vulnerabilities with high accuracy. These tools enable developers to address security risks more efficiently while ensuring data integrity and system security.
AI allows developers to “shift security left” in the software development lifecycle, identifying and resolving issues as they write code. This proactive approach minimizes the likelihood of costly vulnerabilities arising later in the development process, saving valuable time and resources. Additionally, by incorporating AI, organizations can better manage the growing volume of flaws, tackling both critical and less severe security debt.
Frequent code scanning remains essential, but without actionable remediation, it is not enough. AI bridges this gap by enabling continuous fixing alongside continuous scanning. By automating parts of the remediation process, AI helps teams overcome resource constraints and ensures that vulnerabilities are addressed before they become significant liabilities. Despite initial concerns about AI’s role in security, it is clear that using it responsibly is key to mitigating security debt effectively.
A future with AI
As AI continues to reshape the technological landscape, its impact on security is set to grow. With seven out of ten organizations already facing significant backlogs of security debt and vulnerabilities on the rise, development teams will need all the help they can get to stay ahead.
The future of software security will place greater emphasis on prevention. Rather than solely focusing on identifying and fixing flaws, the priority will be to prevent vulnerabilities from entering the codebase in the first place. AI has the potential to accelerate this shift by enabling scalable, secure fixes and supporting developers in tackling not only critical security debt but also the broader spectrum of unresolved flaws.
By working with AI responsibly and strategically, organizations can build a safer, more secure digital future while giving developers the tools they need to address security debt effectively.
We've featured the best Large Language Models (LLMs) for coding.
Allowing your kids access to your phone’s wallet can be a surefire way to quickly lose your savings, but there are also times when you might want to let them in, such as when they need to use a digital library card. So, what can you do to get it right?
Well, Google thinks it has a solution of sorts. It’s just announced a new feature for Google Wallet that grants kids access to limited funds while ensuring that parents are still in control. The feature is being rolled out to users in the US, UK, Australia, Spain, and Poland “over the next few weeks,” Google says.
According to Google’s press release, parents and guardians can “allow their children to access digital payments on their Android device with appropriate supervision.” In practice, that means “kids can use Google Wallet to securely tap to pay in stores and keep supported passes like event tickets, library cards and gift cards in one convenient place.”
The update comes with built-in parental controls. “A child’s payment cards can only be added with parental consent,” Google says, “and parents will receive an email whenever their child makes a transaction. Parents can also easily track recent purchases, remove payment cards and turn off access to passes right in Family Link.”
Financial independence

This isn't the first time Google has implemented a kid-friendly payment system with parental controls included. With the company's Fitbit Ace LTE smartwatch, for example, children can tap to pay for items, while parents can monitor purchases and reward their youngsters when chores are completed.
Apple also has a similar feature built into Apple Cash. Parents or guardians can view a child’s recent card transactions, choose who they can send money to, receive notifications when a payment is made, lock the child’s Apple Cash account, and more.
Financial literacy is a great life skill for children to have, so it makes sense to allow them some degree of independence here, as Google and others are doing. The built-in controls should go some way to reassuring parents, although each family will need to work out an arrangement that works best for them.
Puget Systems has announced a partnership with Comino to provide advanced liquid-cooled multi-GPU servers optimized for artificial intelligence, machine learning, and high-performance computing workloads.
The collaboration will expand access to high-density GPU computing through the Comino Grando Server, which promises extreme performance, efficiency, and reliability at a more affordable price.
Featuring dual CPUs and up to eight GPUs, it competes with the most powerful computers for intensive computing tasks.
Optimized for AI, research, and rendering workloads

The Puget Systems Comino Grando Server is engineered for AI research, deep learning, and scientific simulations, with high-reliability memory options such as eight 32GB Micron DDR5-5600 modules for high-speed data processing.
This makes it one of the best computers for running video editing software, complex visualizations, handling large datasets, and running real-time simulations.
The server is also designed for high reliability, with a redundant power supply system featuring up to four 2000W hot-swap CRPS modules that support multiple redundancy modes, allowing continuous operation even in demanding scenarios.
One of the key advantages of the Puget Systems Comino Grando Server is its ability to operate efficiently in both air-cooled and water-cooled racks, handling ambient temperatures of up to 40°C and ensuring compatibility with both legacy infrastructure and modern energy-efficient data centers.
The system offers scalable fan options, ranging from a high-performance, high-noise configuration with up to 5.5kW of cooling capacity down to a quiet, lower-performance setup with up to 2.5kW.
Additionally, its liquid cooling system supports up to 5.5kW of thermal dissipation, ensuring consistent performance across demanding computational tasks.
By combining Puget Systems’ expertise in custom computing solutions with Comino’s liquid cooling technology, the partnership delivers a high-performance server solution at a lower cost. Additionally, businesses can install up to eight hot-swap SSDs (SATA or NVMe) for expanded storage flexibility, making it one of the best workstations available.
The Puget Systems Comino Grando Server is set to debut at GTC 2025 and will be available for configuration across a wide range of applications.
Stability AI's video models have infused text and images with movement and life for a few years, and now the company is literally adding a new dimension by turning two-dimensional images into three-dimensional videos.
The company's new Stable Virtual Camera tool is designed to turn even a single image into a moving, multi-perspective video, meaning you could rotate around the scene and view it from any angle.
It's not entirely a new concept, as virtual cameras have long been a staple of filmmaking and animation, letting creators navigate and manipulate digital scenes. But Stability AI has taken that concept and thrown in a heavy dose of generative AI. The result means that instead of requiring detailed 3D scene reconstructions or painstakingly calibrated camera settings, Stable Virtual Camera lets users generate smooth, depth-accurate 3D motion from even a single image, all with minimal effort.
What makes this different from other AI-generated video tools is that it doesn’t just guess its way through animation and rely on huge datasets or frame-by-frame reconstructions. Stable Virtual Camera uses a multi-view diffusion process to generate new angles based on the provided image so that the result looks like a model that could actually exist in the real world.
The tool lets users control camera trajectories with cinematic precision, choosing from movements like zoom, rotating orbit, or even a spiral. The resulting video can be output in vertical format for mobile devices or in widescreen for larger displays. The virtual camera works from as little as a single image but can handle up to 32.
Stability AI has made the model available under a Non-Commercial License for research purposes. That means you can play with it, if you have some technical ability, by grabbing the code from GitHub. Releasing it as open source, as Stability AI usually does, also means the AI developer community can refine and expand the virtual camera's capabilities at no cost to the company.
3D AI

Of course, no AI model is perfect, and Stability AI is upfront about the kinks still being worked out. If you were hoping to generate realistic people, animals, or particularly chaotic textures (like water), you might end up with something that belongs in a low-budget horror film.
Don't be surprised if videos made with it feature camera paths that awkwardly travel through objects, or perspective shifts that lead to flickering, ghostly artifacts. Whether this will be a widely adopted tool or just another AI gimmick ignored by dedicated filmmakers remains to be seen.
Not to mention how much competition it faces from AI video tools like OpenAI's Sora, Pika, Runway, Pollo, and Luma Labs' Dream Machine. Stable Virtual Camera will have to show it performs well in the real world of filmmaking to go beyond just another fun demo video.