After months battling it out to become the world's most valuable company, and briefly holding that title on several occasions, Nvidia has become the first company ever to hit a staggering $4 trillion market cap.
Although the company's value has since dipped slightly, to just $3.972 trillion at the time of writing, it remains the world's most valuable company ahead of Microsoft and Apple – the only two other companies to have passed the coveted $3 trillion mark.
Although fluctuations continue to see the three companies change positions, Nvidia continues to see strong growth due to continued demand for AI chips.
Nvidia becomes the world's first $4 trillion companyMuch of Nvidia's success can be attributed to the launch of ChatGPT in November 2022, as its stock has risen more than 15x in the past five years as a result of the demand for AI.
More recently, Nvidia shares are up 15% month-over-month and 22% year-to-date.
The news comes in the year after Nvidia hit $2 trillion (February 2024) and $3 trillion (June 2024) valuations.
Internally, Nvidia's earnings also continue to grow, with first-quarter revenue up 12% quarter-over-quarter and a health 69% year-over-year. The company also revealed that data centre revenue also up 73% compared with last year, accounting for over 88% of the company's total revenue.
"Global demand for Nvidia's AI infrastructure is incredibly strong. AI inference token generation has surged tenfold in just one year, and as AI agents become mainstream, the demand for AI computing will accelerate," CEO Jensen Huang explained.
This marks a departure from Nvidia's past, when it was known for its gaming GPUs. With Nvidia stock worth just 1% of what it's worth today eight years ago, it's unclear how much more growth the company could be set to experience.
You might also likeNaughty Dog has released a surprise update for The Last of Us Part 2 that lets you play the game's story in chronological order.
The new Chronological mode is now available for free and allows players to experience the game's non-linear story beats in the order that they take place.
Spoiler alert! If you've played The Last of Us Part 2, this means that those Ellie and Joel flashbacks that are scattered throughout the story will now be featured chronologically near the beginning of the game.
This also means that Joel's gut-punch death won't kick-start the emotional journey and will come much later, and we'll learn about Abby's motivations a lot sooner.
"Through the new Chronological mode, we believe players will gain even deeper insight into Part 2’s narrative," Naughty Dog explained in a PlayStation Blog post.
"Those who have already played will know its story is told non-linearly, as Ellie and Abby’s motivations, realizations, and emotional stakes unfold across myriad flashbacks and present-day storylines. While this structure is very intentional and core to how our studio wanted Part II’s themes and narrative beats to impact players, we always wondered what it would be like to experience this story chronologically. And now finally, we can answer that question."
One of the game's strong suits is its narrative structure, and although the Chronogical mode is a neat addition that I'll gladly experiment with, I don't think it will offer the same emotional payoff the original format does.
"It was no small feat to bring The Last of Us Part II’s story chronologically together, given that Part II’s story is so meticulously put together," the studio said. "We’re grateful to the developers both at Naughty Dog and our partners at Nixxes to make the Chronological mode as smooth as possible. And while we of course recommend players still new to the game to play through Part 2's story as was originally developed, the team’s hard work has paid off with a fascinating new way to enjoy this chapter."
In addition to this new mode, new trophies tied to the experience have been added, along with two new Uncharted 4: A Thief's End costumes for Joel and Tommy, inspired by Nathan Drake and Sam Drake.
These skins can be unlocked by completing the narrative in Chronological mode and will be usable in No Return, The Last of Us Part 2 Remastered's survival mode.
You might also like...Samsung just unveiled its line of foldable phones for 2025, headlined by the Samsung Galaxy Z Fold 7, and next up it’s Google, with the Google Pixel 10 Pro Fold expected to land in August – but if you're looking for a foldable and power is your priority, a new benchmark suggests you’re better off buying the Z Fold 7 than waiting for Google's new foldable.
A Geekbench listing for the Pixel 10 Pro Fold (spotted by NotebookCheck) includes a single-core score of 2,276 and a multi-core result of 6,173. In isolation those numbers might not mean much, but they’re well below the results we’ve seen for the Samsung Galaxy Z Fold 7.
While Geekbench hasn’t yet listed an average score for the Z Fold 7, recent results include the likes of 2,826 and 2,552 for single-core, and 9,053 and 8,639 for multi-core. It’s likely that when an average is reached it will be similar to the Samsung Galaxy S25 Ultra’s averages, as that phone has the same chipset and amount RAM, and that’s sitting at 2,878 for single-core and 9,510 for multi-core.
(Image credit: Geekbench)Expectedly underpoweredAll of those numbers are much higher than the Google Pixel 10 Pro Fold’s results. Now, we’d take the Pixel’s results with a pinch of salt, since these would have been recorded on a pre-release handset, so further optimization could take place. And this is just one result, so it could end up being an outlier.
But there’s nothing overly surprising here, as the Google Tensor G5 chipset the phone will probably use is unlikely to match the Galaxy Z Fold 7’s Snapdragon 8 Elite chipset. That’s the case every year, with Google’s chipsets proving less powerful than rival options.
Still, this is an improvement on the Google Pixel 9 Pro Fold, which tends to record single-core scores of just under 2,000, and multi-core scores of around 4,500.
So going by this one Pixel 10 Pro Fold result, it should be a significant upgrade on the Pixel 9 Pro Fold at least, and while it probably won’t be the speediest phone around, it’s unlikely to feel lacking in power in everyday use.
We should find out for sure in August, with leaks pointing to August 20 as the announcement date for the whole Google Pixel 10 line.
You might also likeIt’s finally official: The Big Bang Theory spinoff Stuart Fails to Save the Universe has been green lit by HBO Max, following earlier reports from 2023 that it had been in development. We don’t know when it will begin to film or air, but fans can start to get excited.
Or can we? This isn’t the first spinoff we’ve seen for the franchise, with CBS and Paramount+ partnering up to bring us seven seasons of Young Sheldon and new kid on the block Georgie & Mandy’s First Marriage.
Even though there was overlap between the hit comedy series and its initial prequel, The Big Bang Theory (TBBT) hasn’t been on our screens since 2019. But making a comeback with a spinoff like this has me worried, and not in a quirky Sheldon Cooper way.
Stuart Fails to Save the Universe isn’t the part of The Big Bang Theory universe we want to seeStuart (Kevin Sussman), Raj (Kunal Nayyar) and Sheldon (Jim Parsons) in The Big Bang Theory. (Image credit: CBS)If you could choose to follow-up with any element of The Big Bang Theory, it definitely wouldn’t be with comic book store owner Stuart (Kevin Sussman). When we last saw him, he’d finally moved in with the love of his life Denise (Lauren Lapkus), who he’d originally hired to help in the store after it became popular.
Here’s what HBO Max tells us to expect from Stuart Fails to Save the Universe: “Tasked with restoring reality after he breaks a device built by Sheldon and Leonard, accidentally bringing about a multiverse Armageddon. Stuart is aided in this quest by his girlfriend Denise, geologist friend Bert ( Brian Posehn), and quantum physicist/all-around pain in the ass Barry Kripke (John Ross Bowie). Along the way, they meet alternate-universe versions of characters we’ve come to know and love from The Big Bang Theory. As the title implies, things don’t go well.”
It’s a clever get-out-of-jail-free card for HBO Max. As the synopsis suggests, fan favourites from TBBT might return, but it isn’t guaranteed. The most obvious follow-up to the main series would have been to catch up with Sheldon (Jim Parsons), Penny (Kayley Cuoco), Leonard (Johnny Galecki) or Howard (Simon Helberg). Choosing to follow a minor character fans might not remember or care about is the very definition of the meme “go girl, give us nothing”.
Can we then assume that a left-field approach like Stuart Fails to Save the Universe could actually damage the legacy of TBBT? We talk a lot about existing IPs being left alone in favour of original storytelling, and this could be a great example. It’s currently unclear what value the new spinoff will add, other than choosing to appease fans with surprise cameos as and when it wants to.
There's proof in the spinoff pudding, too. While Young Sheldon became a surprise smash hit that ruled the Nielsen ratings, Georgie & Mandy’s First Marriage has been much more mixed. Granted, it’s been renewed for a second season, but after the single-cam format of Young Sheldon, it returned to TBBT’s signature multi-cam approach. Combined with a laugh track, the audience reactions suggest the time for classic sitcom formats has been and gone.
If Stuart Fails to Save the Universe can lean into what came before in TBBT without making it a superficial cameo soup with its sights set on high ratings, then there’s potential for a job well done. If not, well… Stuart might fail to save his own show.
You might also likeWe saw plenty of new product announcements at the Samsung Galaxy Unpacked 2025 event yesterday, but we didn't quite get all of the news we were expecting – and Samsung has gone on the record to clarify that more launches are on the way.
First up, as reported by The Korea Times (via Android Authority), Samsung mobile chief TM Roh has said the long-awaited Galaxy tri-fold phone should be coming before the end of 2025, so adjust your savings plans accordingly.
"We are working hard on a tri-fold smartphone with the goal of launching it at the end of this year," Roh told journalists. "We are now focusing on perfecting the product and its usability, but we have not decided its name."
One name in the running is apparently the Samsung Galaxy G Fold, but watch this space. The device will seemingly come with two hinges and three sections to its main display, and is also rumored to cost quite a bit (as you would expect).
Tablets are coming tooThe Samsung Galaxy Tab S10 Ultra (Image credit: Philip Berne / Future)Next up, there's the Samsung Galaxy Tab S11 tablet series, following on from the models we saw last year, including the Samsung Galaxy Tab S10 Plus. There's an update here as well, again courtesy of Android Authority.
According to a Samsung executive, the Galaxy Tab S11 devices will be "coming through shortly" in line with the "traditional cadence" – which most likely means a September launch this year, to match the 2024 schedule.
Leaks around the Samsung Galaxy Tab S11 tablets have been pretty thin on the ground so far, but we are expecting both a standard model and an Ultra edition to match the Samsung Galaxy Tab S10 Ultra – as Samsung takes on the best iPads once more.
In the meantime, we've got hands-on impressions of all the devices Samsung did announce yesterday: the Galaxy Z Fold 7, Galaxy Z Flip 7, Galaxy Z Flip 7 FE, Galaxy Watch 8, Galaxy Watch 8 Classic.
You might also likeThe browser wars could be about to heat up in a big way, with ChatGPT-maker OpenAI apparently set to launch its own web browser in the coming weeks. According to Reuters, that could put significant pressure on Google Chrome and potentially “fundamentally change how consumers browse the web.”
Citing “three people familiar with the matter,” Reuters reports that the browser will have an artificial intelligence (AI) chat interface that would keep many user interactions within the chat window rather than linking out to external websites.
As well as that, the browser might integrate OpenAI’s AI agent – dubbed Operator – that would allow the app to “carry out tasks on behalf of the user.” This could include “booking reservations or filling out forms” on the websites you use.
OpenAI’s browser is apparently built using Google’s open-source Chromium tech, which powers Chrome, Edge, and many of the other best web browsers. OpenAI’s product is due to launch “in the coming weeks,” Reuters’ sources believe.
Analysis: The battle for your data goldmine(Image credit: OpenAI)OpenAI faces stiff competition in the browser world. Google Chrome currently enjoys a stranglehold position, with roughly two-thirds of the available market share.
AI firm Perplexity and web firms Brave and The Browser Company have also launched their own AI browsers. Like OpenAI’s rumored effort, Perplexity’s browser can perform tasks on your behalf.
Reuters suggests a clear motive for OpenAI: user data. Running its own browser would allow the company to harvest as much information as possible from users, which could then go back into training its AI models and providing other monetization opportunities.
After all, Chrome “provides user information to help Alphabet target ads more effectively and profitably, and also gives Google a way to route search traffic to its own engine by default,” Reuters says. An OpenAI browser would give the AI firm a similarly powerful access route to lucrative data.
If you’re concerned about your privacy, then, OpenAI’s browser is likely to be ringing alarm bells in your head. OpenAI has faced criticism for its data collection practices, as has Google Chrome. As someone who's used Firefox for over two decades, that has me worried.
When scooping up users’ private data is an incentive for launching a browser – as Reuters implies it might be – serious caution is advised. We’ll have a clearer idea of all this when OpenAI’s web browser launches later this year, so keep your eyes peeled.
You might also likeIf you’ve got your sights set on the iPhone 17 Air, you can also now start thinking about what color you'd like your slimline iPhone to be, as a new leak has revealed the four shades that Apple will apparently sell the device in.
According to leaker Majin Bu – citing a previous leak from Fixed Focus Digital which they claim to have now “confirmed” – the iPhone 17 Air will be available in black, silver, light gold, and light blue shades.
The black shade is apparently a “dark and understated tone”, the silver model is said to be “bright and clear”, the light gold is described as being “warm and soft”, and the light blue as “very light” and “evoking a clear sky”.
(Image credit: Majin Bu)You can see how these colors might look in the image Majin Bu provided above, though it sounds like the shades shown are probably guesses based on how their sources described the colors.
In any case, we’d take this information with a pinch of salt – although the use of mostly quite pale, delicate shades seems fitting for phone that’s likely to be billed as slim and elegant.
Some surprising chipset newsIn other iPhone 17 Air news, leaker Fixed Focus Digital (via Phone Arena) has claimed that the handset will have an A19 Pro chipset. That’s at odds with previous leaks, which suggested only the iPhone 17 Pro and Pro Max would have this chipset, while the iPhone 17 and iPhone 17 Air would have an A19.
However, they added that the iPhone 17 Air will get a version of the A19 Pro with a five-core GPU while the iPhone 17 Pro models will get a six-core version, in which case the Pro models would still have the most powerful chipset.
They also mention RAM, saying that the standard iPhone 17 will have 8GB, while the other three models will have 12GB. So if this is all correct then the upcoming iPhones could have three different power tiers rather than the usual two.
We’re a bit skeptical of this since it would be a change of form for Apple, and since other leaks point to the iPhone 17 Air having the base A19 chipset, but if Apple does boost the power that could help to make up for the Air only having one rear camera lens, which is something that’s widely rumored to be the case.
We should find out exactly how powerful the iPhone 17 Air is – and what colors it’s available in – sometime in September, as that’s when it will likely launch, alongside the rest of the iPhone 17 series.
You might also likeNew research has claimed barely one in 10 (12%) SMEs have invested in AI-related training for their staff.
The report from The Institute of Coding revealed nearly one in three (29%) SMEs now see a lack of training as their biggest obstacle to AI tools adoption, with a further one in two (52%) citing a lack of internal skills and knowledge as the main battier.
Moreover, the research depicts a troubling picture for smaller companies when compared with their larger counterparts – 82% of medium businesses expressed confidence working with AI compared with 37% of smaller businesses.
Small businesses are struggling to adopt AIAround one in two (51%) of the SMEs surveyed agreed that AI could now be perceived as critical, but only around half of these (27%) believe they can safely and effectively implement AI tools.
As such, The Institute of Coding is warning of a growing AI readiness divide between different types of companies, launching its own free short AI courses aimed at all career levels.
Looking ahead, companies that are on the verge of being left out by the AI revolution are now calling for greater support from the government.
Three in five (59%) call for national AI skills strategies to support businesses of all sizes, with three-quarters asking for clearer guidance on the AI skills they're likely to need in the next three to five years.
"This isn’t just about individual business success – it’s about ensuring the entire UK economy can participate in the AI transformation," Professor Rachid Hourizi MBE, Director of the Institute of Coding, explained.
The report goes on to explain that, if micro businesses and sole traders are not explicitly included in national plans, AI will become concentrated, not democratized.
You might also likeFormer Microsoft lead Peter Moore believes the Xbox brand might not be around today if the company hadn't focused on repairing countless Xbox 360 consoles.
In an interview with The Game Business, Moore suggested that the company was in hot water trying to figure out how to fix the Xbox 360's "red ring of death", but determined that spending the $1.15 billion on repairs is what ultimately saved its future.
"It took us a while to figure out what was going on," Moore said (thanks, GamesRadar). "Were the fans in the right place? [...] Trying to figure out whether, you know, [...] wrapping the towel around it would create more heat, which would rejoin some of the issues of the cracks in some of the units. All of this was going on, and it was stressful. But the one thing I will always say is, you know, this was, for us, a defining moment.
"If we hadn't done what we did, I'm not sure the Xbox brand would be around today."
Moore continued, saying, "We felt that that was money well spent to hang on to a brand that we built, that we felt had huge viability going forward – and of course it does," he said. "And doing good by the gamers."
However, Moore believes that if Xbox had fallen, the game industry as a whole would have suffered.
"Rising tide lifts all ships. Microsoft's entry into the market created a massive rising tide," he said. "They put billions of dollars into marketing, advertising, R&D, and I don’t think the industry would be anywhere close to what it is today if Microsoft wasn't involved."
In other news, Microsoft recently made significant cuts to its gaming division, laying off an estimated 9,000 staff members, and canceling multiple projects such as the Perfect Dark reboot, Everwild, and an unannounced MMO from The Elder Scrolls Online developer.
You might also like...This week, Samsung Galaxy Unpacked unveiled its latest smartphones, the Galaxy Z Fold 7, Flip 7, and Flip 7 FE, all integrated with Google Gemini AI features. To sweeten the deal for potential customers, Google announced a special offer: six free months of Google AI Pro for those who purchase the new phones. This premium subscription service includes access to the Gemini 2.5 Pro model, the Google Veo 3 AI video generator, two terabytes of cloud storage, and early access to upcoming AI features.
Of course, once those six months are up, you'll have to pay the standard $20 a month to keep your subscription. But, Google likely believes more than a few people will be happy to pay after they get accustomed to its AI toolkit. The psychology behind this is as simple as free samples at the grocery store. Google isn’t trying to sell you a subscription right now because it thinks you won't want to give it up just because it isn't free anymore.
It's a pretty impressive set of features. Veo 3 is one of the most powerful consumer-facing video generators available. And Gemini 2.5 Pro is far more coherent in conversation than its predecessors.
Gemini try and buyIt's easy to imagine how Google hopes the six months will go. You might spend a month fiddling with Veo and creating movies about your dog going on adventures. Or start turning to Gemini to summarize very long emails, and eventually every email. Or you might get a great recipe from a random prompt to Gemini and soon use it to plan your every meal. By the end of six months, Google's AI might just be what you turn to a dozen times a day as a reflex. By the time Google asks for $20 a month, you might even consider it a bargain.
That's Google's dream scenario, but it comes with a risk. Google is betting that people will find these tools indispensable. But, if people take it for granted for six months, they might resent having to pay for it, no matter how much you enjoy playing with Veo 3 and talking to Gemini. Nobody likes the feeling of having something useful pulled away unless you pony up. That's probably even more likely when it comes bundled with a device that already costs over a thousand dollars. There’s a version of this where the user relationship becomes less “wow, this is useful,” and more “wait, I have to pay extra for that now?”
But, I wouldn't be surprised if a scenario somewhere between the extremes still makes Google happy. Tying its most advanced AI tools to Samsung’s brand-new hardware is smart. The Galaxy Z Fold 7 and Flip 7 are devices people buy because they want the bells and whistles. They're practically built for people who like to show off. In other words, the people most likely to find ways to use AI enough to justify $20.
But it cuts both ways. Because if the experience feels too essential, people will feel punished when it disappears. And if it doesn’t feel essential enough, they won’t bother subscribing. The six-month trial is walking a very fine line between generosity and locking someone in to an AI future.
You might also likeAt a time when many believe that oversight of the Artificial Intelligence industry is desperately needed, the US government appears to have different ideas. The "One Big Beautiful Bill Act" (OBBBA)—recently given the nod by the House of Representatives—includes a 10-year moratorium on state and local governments enacting or enforcing regulations on AI models, systems, or automated decision-making tools.
Supporters claim the goal is to streamline AI regulation by establishing federal oversight, thereby preventing a patchwork of state laws that could stifle innovation and create compliance chaos. Critics warn that the moratorium could leave consumers vulnerable to emerging AI-related issues, such as algorithmic bias, privacy violations, and the spread of deepfakes.
Basically, if the AI sector is the Wild West, no one will be allowed to clean up Dodge.
Why should we care?History may not literally repeat itself, but there are historical patterns and trends that we can view and hopefully be informed by, and our history books are packed with examples of technology reshaping the lives of the workforce.
And be it in the form of James Watt’s steam engine or Henry Ford’s moving assembly line, the cost of the progress brought by fresh technology is regularly paid by the large numbers of people sent home without a pay packet.
And AI will cost jobs too.
Experts such as those at McKinsey, the Lancet, or the World Economic Forum (WEF) may not agree on exact numbers or percentages of lost jobs, but the consistent message is that it will be bad:
Of course, as with all new technologies, new jobs will be created. But we can’t all be prompt engineers.
The Great Brain RobberyEssentially, those hit hardest by the bulk of new technologies from the Spinning Jenny onwards were the ones engaged to carry out physical work. But AI wants to muscle in on the intellectual and creative domains previously considered uniquely human. For example, nonpartisan American think tank the Pew Research Center reckons 30% of media jobs could be automated by 2035.
And those creative jobs are under threat because creatives are being ripped off.
Many AI models are trained on massive datasets scraped from the internet, and these often include articles, books, images, music and even code that are protected by copyright laws, but AI companies lean heavily towards take-first-ask-later. Obviously, artists, writers, and other content creators see this practice as unauthorized use of their intellectual property and they argue that ultimately, it’s not even in the best interests of the AI sector.
If AI takes work away from human creatives—devastating creative industries already operating on thin margins—there will be less and less innovative content to feed to AI systems which will result in AI feeding off homogenized AI content – a derivative digital snake eating its own tail.
A smarter way forward would be to find a framework where creatives are compensated for use of their work to ensure the sustainability of human produced product. The music industry already has a model where artists receive payments via performing rights organizations such as PRS, GEMA and BMI. The AI sector needs to find something similar.
To make this happen, regulators may need to be involved.
Competitive opportunity versus minimizing societal harmWithout regulation, we risk undermining the economic foundations of creative and knowledge-based industries. Journalism, photography, literature, music, and visual arts depend on compensation mechanisms that AI training currently bypasses.
The United Kingdom and the European Union are taking notably different paths when it comes to regulating AI. The EU is pursuing a strict, binding regulatory framework, an approach designed to protect fundamental rights, promote safety, and ensure ethical use of AI across member states. In contrast, the UK is currently opting for a more flexible approach, emphasizing innovation and light-touch oversight aiming to encourage rapid AI development and attracting investment.
But this light-touch strategy could be a massive misstep – one that in the long term could leave everyone wishing we’d thought things through.
While AI enthusiasts may initially be pleased with minimal interference from regulators, eventually AI businesses will come up against consumer trust, something they absolutely need.
While AI businesses operating in Europe will be looking at higher compliance costs, there is also a clearer regulatory landscape and therefore more likely to be greater consumer trust – a huge commercial advantage.
Meanwhile, AI businesses operating in light-touch markets (such as the UK) need to consider how their AI data practices align with their (and their competitors’) brand values and customer expectations. As public awareness grows, companies seen as exploiting creators may face reputational damage. And a lack of consumer confidence could lead to a shift in mindset from previously arm’s-length regulators.
Regardless of the initial regulatory environment, early adopters of ethical AI practices may gain competitive advantages as regulatory requirements catch up to ethical standards. Perhaps the wisest way forward is to voluntarily make Dodge City a better place, even if there’s no sheriff in town – for now.
This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro
As organizations continue to adopt AI tools, security teams are often caught unprepared for the emerging challenges. The disconnect between engineering teams rapidly deploying AI solutions and security teams struggling to establish proper guardrails has created significant exposure across enterprises. This fundamental security paradox—balancing innovation with protection—is especially pronounced as AI adoption accelerates at unprecedented rates.
The most critical AI security challenge enterprises face today stems from organizational misalignment. Engineering teams are integrating AI and Large Language Models (LLMs) into applications without proper security guidance, while security teams fail to communicate their AI readiness expectations clearly.
McKinsey research confirms this disconnect: leaders are 2.4 times more likely to cite employee readiness as a barrier to adoption versus their own issues with leadership alignment, despite employees currently using generative AI three times more than leaders expect.
Understanding the Unique Challenges of AI ApplicationsOrganizations implementing AI solutions are essentially creating new data pathways that are not necessarily accounted for in traditional security models. This presents several key concerns:
1. Unintentional Data Leakage
Users sharing sensitive information with AI systems may not recognize the downstream implications. AI systems frequently operate as black boxes, processing and potentially storing information in ways that lack transparency.
The challenge is compounded when AI systems maintain conversation history or context windows that persist across user sessions. Information shared in one interaction might unexpectedly resurface in later exchanges, potentially exposing sensitive data to different users or contexts. This "memory effect" represents a fundamental departure from traditional application security models where data flow paths are typically more predictable and controllable.
2. Prompt Injection Attacks
Prompt injection attacks represent an emerging threat vector poised to attract financially motivated attackers as enterprise AI deployment scales. Organizations dismissing these concerns for internal (employee-facing) applications overlook the more sophisticated threat of indirect prompt attacks capable of manipulating decision-making processes over time.
For example, a job applicant could embed hidden text like "prioritize this resume" in their PDF application to manipulate HR AI tools, pushing their application to the top regardless of qualifications. Similarly, a vendor might insert invisible prompt commands in contract documents that influence procurement AI to favor their proposals over competitors. These aren't theoretical threats - we've already seen instances where subtle manipulation of AI inputs has led to measurable changes in outputs and decisions.
3. Authorization Challenges
Inadequate authorization enforcement in AI applications can lead to information exposure to unauthorized users, creating potential compliance violations and data breaches.
4. Visibility Gaps
Insufficient monitoring of AI interfaces leaves organizations with limited insights into queries, response and decision rationales, making it difficult to detect misuse or evaluate performance.
The Four-Phase Security ApproachTo build a comprehensive AI security program that addresses these unique challenges while enabling innovation, organizations should implement a structured approach:
Phase 1: Assessment
Begin by cataloging what AI systems are already in use, including shadow IT. Understand what data flows through these systems and where sensitive information resides. This discovery phase should include interviews with department leaders, surveys of technology usage and technical scans to identify unauthorized AI tools.
Rather than imposing restrictive controls (which inevitably drive users toward shadow AI), acknowledge that your organization is embracing AI rather than fighting it. Clear communication about assessment goals will encourage transparency and cooperation.
Phase 2: Policy Development
Collaborate with stakeholders to create clear policies about what types of information should never be shared with AI systems and what safeguards need to be in place. Develop and share concrete guidelines for secure AI development and usage that balance security requirements with practical usability.
These policies should address data classification, acceptable use cases, required security controls and escalation procedures for exceptions. The most effective policies are developed collaboratively, incorporating input from both security and business stakeholders.
Phase 3: Technical Implementation
Deploy appropriate security controls based on potential impact. This might include API-based redaction services, authentication mechanisms and monitoring tools. The implementation phase should prioritize automation wherever possible.
Manual review processes simply cannot scale to meet the volume and velocity of AI interactions. Instead, focus on implementing guardrails that can programmatically identify and protect sensitive information in real-time, without creating friction that might drive users toward unsanctioned alternatives. Create structured partnerships between security and engineering teams, where both share responsibility for secure AI implementation.
Phase 4: Education and Awareness
Educate users about AI security. Help them understand what information is appropriate to share and how to use AI systems safely. Training should be role-specific, providing relevant examples that resonate with different user groups.
Regular updates on emerging threats and best practices will keep security awareness current as the AI landscape evolves. Recognize departments that successfully balance innovation with security to create positive incentives for compliance.
Looking AheadAs AI becomes increasingly embedded throughout enterprise processes, security approaches must evolve to address emerging challenges. Organizations viewing AI security as an enabler rather than an impediment will gain competitive advantages in their transformation journeys.
Through improved governance frameworks, effective controls and cross-functional collaboration, enterprises can leverage AI's transformative potential while mitigating its unique challenges.
We've listed the best online cybersecurity courses.
This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro
Munich-based startup Cerabyte is developing what it claims could become a disruptive alternative to magnetic tape in archival data storage.
Using femtosecond lasers to etch data onto ceramic layers within glass tablets, the company envisions racks holding more than 100 petabytes (100,000TB) of data by the end of the decade.
Yet despite these bold goals, practical constraints mean it may take decades before such capacity sees real-world usage.
The journey to 100PB racks starts with slower, first-generation systemsCMO and co-founder Martin Kunze outlined the vision at the recent A3 Tech Live event, noting the system draws on “femtosecond laser etching of a ceramic recording layer on a glass tablet substrate.”
These tablets are housed in cartridges and shuttled by robotic arms inside tape library-style cabinets, a familiar setup with an unconventional twist.
The pilot system, expected by 2026, aims to deliver 1 petabyte per rack with a 90-second time to the first byte and just 100MBps in sustained bandwidth.
Over several refresh cycles, Cerabyte claims that performance will increase, and by 2029 or 2030, it anticipates “a 100-plus PB archival storage rack with 2GBps bandwidth and sub-10-second time to first byte.”
The company’s long-term projections are even more ambitious, and it believes that femtosecond laser technology could evolve into “a particle beam matrix tech” capable of reducing bit size from 300nm to 3nm.
With helium ion beam writing by 2045, Cerabyte imagines a system holding up to 100,000PB in a single rack.
However, such claims are steeped in speculative physics and should, as the report says, be “marveled at but discounted as realizable technology for the time being.”
Cerabyte’s stated advantages over competitors such as Microsoft’s Project Silica, Holomem, and DNA storage include greater media longevity, faster access times, and lower cost per terabyte.
“Lasting more than 100 years compared to tape’s 7 to 15 years,” said Kunze, the solution is designed to handle long-term storage with lower environmental impact.
He also stated the technology could ship data “at 1–2GBps versus tape’s 1GBps,” and “cost $1 per TB against tape’s $2 per TB.”
So far, the company has secured around $10 million in seed capital and over $4 million in grants.
It is now seeking A-round VC funding, with backers including Western Digital, Pure Storage, and In-Q-Tel.
Whether Cerabyte becomes a viable alternative to traditional archival storage methods or ends up as another theoretical advance depends not just on density, but on long-term reliability and cost-effectiveness.
Even if it doesn't become a practical alternative to large HDDs by 2045, Cerabyte’s work may still influence the future of long-term data storage, just not on the timeline it projects.
Via Blocksandfiles
You might also like