Sony is suing Tencent over Light of Motiram, the publisher's open-world survival game that looks an awful lot like Horizon Zero Dawn.
Initially reported by Reuters, Sony has filed a lawsuit against Tencent for copyright and trademark infringement, claiming the Chinese tech company has created a knock-off game of its Horizon intellectual property.
Sony alleges that the company's upcoming game Light of Motiram, developed by Polaris Quest, is a "slavish clone" of Guerrilla Games' 2017 title and copies several Horizon elements, like gameplay, art style, post-apocalyptic themes, the game's playable protagonist Aloy, and other details.
The suit also cites the public's comparisons, including headlines from Kotaku and GameRant, as well as comments made on IMDb.
"Tencent’s copying of Horizon is so blatant that the public has described it as 'crazy,' 'insane,' and 'shameless,'" the lawsuit reads.
(Image credit: Tencent / Polaris Quest)
"Tencent also used its rip-off of the iconic Horizon main character 'Aloy' as the centerpiece of its pre-release marketing and promotional strategy, deliberately causing numerous game lovers to confuse Light of Motiram as the next game in the Horizon series when encountering Tencent’s promotional game play videos and social media accounts."
Sony also alleges that the game's promotional art and screenshots have "misappropriated protectable elements of SIE’s copyrights in the Horizon Franchise to a significant degree", including music and vocals, which are similar thematically.
The PlayStation company says this was deliberate, as Tencent hired a composer from the Horizon Forbidden West soundtrack "to replicate the unique sound for Light of Motiram."
(Image credit: Guerrilla Games)
The suit also alleges that Tencent asked Sony to collaborate on a new Horizon game, which Sony declined. Sony claims that the company continued development of Light of Motiram afterward, despite the rejected offer.
"Upon information and belief, sometime in 2023 (and unbeknownst to Sony) Tencent started developing a video game called Light of Motiram which – just like Horizon – features a young, red-headed female protagonist and tribal groups fighting for survival among large robotic animals in a post-apocalyptic world," the suit reads.
"In March 2024, at a gaming conference in San Francisco, California, Tencent executives approached Sony with a pitch: to develop its own Horizon game in collaboration with SIE. Sony rejected the idea and considered the matter closed.
"Apparently, Tencent was undeterred by SIE’s refusal to license its Horizon intellectual property. Tencent continued secretly developing Light of Motiram, eventually announcing a forthcoming game. Tencent’s promotional material bore a strong similarity to SIE’s own Horizon promotional material."
Sony is seeking unspecified monetary damages and an order blocking Tencent from violating its IP rights.
Tired of Messages and WhatsApp and ready to try something new? Twitter founder Jack Dorsey’s new Bitchat Mesh app has landed on the iOS App Store, while an Android version is available on GitHub.
Download the app and you’ll find that it offers a novel way to contact friends and loved ones.
Dorsey announced Bitchat Mesh in early July. Unlike traditional messaging apps, Bitchat Mesh doesn’t rely on the internet to link up users and devices. Instead, it uses Bluetooth to relay messages from one person to another, so it should theoretically work in places where you lack internet connectivity – provided there are enough nearby Bluetooth devices to pass on your texts.
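To make the relay idea concrete, here is a minimal, purely illustrative sketch of store-and-forward mesh messaging. It is not Bitchat Mesh's actual protocol or code, just a simulation of how a message can hop between nearby devices until it reaches its recipient, with a hop limit and de-duplication so relays don't loop forever.

```typescript
// Illustrative store-and-forward mesh relay simulation (not Bitchat Mesh's real protocol).
type Message = { id: string; from: string; to: string; body: string; ttl: number };

class MeshNode {
  private seen = new Set<string>();            // de-duplicate messages we've already relayed
  public inbox: Message[] = [];
  constructor(public name: string, public neighbors: MeshNode[] = []) {}

  receive(msg: Message): void {
    if (this.seen.has(msg.id)) return;          // already handled this message, stop the loop
    this.seen.add(msg.id);
    if (msg.to === this.name) {                 // addressed to us: deliver
      this.inbox.push(msg);
      return;
    }
    if (msg.ttl <= 0) return;                   // hop limit reached, drop
    // Relay to every peer in "Bluetooth range" with one fewer hop remaining.
    this.neighbors.forEach(n => n.receive({ ...msg, ttl: msg.ttl - 1 }));
  }
}

// A small chain: alice <-> bob <-> carol (alice and carol are out of direct range).
const alice = new MeshNode("alice");
const bob = new MeshNode("bob");
const carol = new MeshNode("carol");
alice.neighbors.push(bob);
bob.neighbors.push(alice, carol);
carol.neighbors.push(bob);

alice.receive({ id: "m1", from: "alice", to: "carol", body: "hello via the mesh", ttl: 4 });
console.log(carol.inbox); // the message arrives via bob, despite no direct link
```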
As well as this unusual distinction, Bitchat Mesh puts an emphasis on user privacy. You don’t need to register your phone number or email address with the app, nor even create an account to get started, so you can keep your identifying information private without any extra friction.
Bitchat Mesh is also end-to-end encrypted, meaning all of your messages remain private and no one – not even Bitchat’s developer – can intercept or read them. There’s even a 'panic mode' that lets you erase all your data by triple-tapping the app’s logo.
A different way to text
(Image credit: Pixabay)
Bitchat Mesh is a specialized app for people who care deeply about their privacy, and its unusual nature might prevent it from achieving the kind of mass-market saturation that rivals like WhatsApp have managed. But it could still have plenty of appeal for people who need its distinct features.
By not relying on the internet, for example, the app is more resistant to both network outages and censorship attempts than rival products might be. That could come in handy in nations run by oppressive governments or locations where you might not trust the security of more popular alternatives.
In our brief testing, Bitchat Mesh told us that there were zero other users in our vicinity, presumably because the app has only just launched. But that's likely to be an issue for many potential users – if there’s no one around you, you might struggle to send your messages, given that the app relies on Bluetooth connections to relay texts.
Still, Bitchat Mesh can be used entirely for free, with no paywalls, subscriptions or in-app purchases yet in place, so you might want to try it out to see if it suits your needs.
Fortnite is running the Super Showdown event later this week (August 2), and so far we know that it'll involve Superman in a big way. This is the latest in a string of live events that've been airing in Fortnite this year, and I'm expecting it to lead nicely into Season 4.
What's new in Fortnite?
(Image credit: Epic Games)
Epic Games just launched the collab for The Fantastic Four: The First Steps, with movie-inspired skins available as rewards as part of a new Tournament. Soon, we'll see a brand new Season of Fortnite, launching Chapter 6 Season 4 for players to dive into. At present, we don't have much info on what to expect, though we'll get news as launch day approaches.
We're currently in Fortnite Chapter 6 Season 3, a superhero-themed affair that adds super-powered items and a completely new ranking system. The next season of Fortnite is just around the corner, however, so the game will be getting a big refresh very soon indeed. It's regular updates like these that have kept Fortnite firmly ranked in our best free games to play in 2025 list.
Here's what you need to know about Fortnite Super Showdown, including the start time and how to watch it on the day. It's a live Story Event, and it's set to be a Superman-led battle against a gigantic foe. Let's dive in.
Fortnite Super Showdown - cut to the chase
Fortnite Super Showdown will start on August 2 at 2:30pm ET / 11:30am PT / 7:30pm BST. Doors will open half an hour prior, and it's recommended that you jump in at the following times to secure your place:
Fortnite Super Showdown is a live event that'll begin at the times specified earlier in this article. If you want to watch it live, you can jump in yourself, and there will likely be a safe zone around Demon's Domain where players won't be able to eliminate each other.
If you can't log in yourself, TechRadar Gaming will be covering the event as part of a live blog (as we did recently with the Fortnite OG rocket launch). I'll be giving my impressions as they happen, and providing up-to-date info on how the event is unfolding. You can also tune in to the Twitch or YouTube streamer of your choice, as there'll no doubt be many streaming the event. Note that Epic Games doesn't broadcast these events live on its official channels.
Fortnite Super Showdown teaser trailer
"Superman returns to help save the island August 2 in this season’s Super Showdown Story Event!" pic.twitter.com/Vcr2QmSBQo (July 27, 2025)
The Fortnite X channel tweeted out a teaser trailer for the upcoming Super Showdown event (embedded above). In it, we see the eye of a giant creature, which many believe to be a kraken. Then, the current map is shown with Demon's Domain highlighted as the main location for the event.
Fortnite Super Showdown Story Event - what to expect
(Image credit: Epic Games)
Fortnite Super Showdown will feature a giant battle between Superman and an as-yet unrevealed foe. We know that it's a huge enemy with a big white eye, and many fans are predicting it to be a kraken. Other than that, we know that it'll all take place in Demon's Domain and will likely give some teases as to what's coming next in Chapter 6.
Epic Games will probably reveal more closer to launch, and once it does, I'll be sure to update this page.
It's been a long time since Nvidia launched its RTX 5000 series GPUs in late January, followed by other configurations in later months, after a CES 2025 keynote that showcased the Blackwell GPUs. However, it seems Nvidia might not be done with new GPU launches in 2025 just yet.
According to TweakTown, Nvidia is set to launch RTX 5000 series Super models later this holiday season, which typically means November or December. The RTX 5080, RTX 5070 Ti, and RTX 5070 are the GPUs reported to receive Super upgrades, with the new 5080 and 5070 Ti reportedly set to use 24GB of VRAM.
Pricing isn't finalized, and there aren't any figures to work from at this point. But considering Team Green previously priced the RTX 4080 Super below the standard model despite it being a slightly more powerful GPU, we could see a similar pattern again.
There's no sugarcoating the level of controversy that shrouded the Blackwell GPUs, with missing specs (ROPs), a lack of availability, and most importantly, inflated prices across multiple online retailers. With the RTX 5000 series Super models, Nvidia and, notably, its board partners, have a chance to right those wrongs.
A combination of improved performance across the board and adjusted price points may work wonders – and that mostly applies to the RTX 5080 potentially closing the gap on the RTX 4090 (supposedly using 24GB of VRAM). It may be even more interesting to see an RTX 5060 Super using more VRAM, but we'll have to wait and see.
Analysis: If prices for these Super GPUs are out of whack, then forget I even mentioned this...
(Image credit: Future)
Above all, if these Super GPU model rumors are legitimate, prices will once again determine their success. While I'm aware that Nvidia may have good intentions with more reasonable pricing, all of that work could be undone by board partners and retailers marking up prices significantly.
It's the same issue that botched AMD's Radeon RX 9000 series launch for many; the Radeon RX 9070 XT was seen as the inexpensive and powerful alternative to Blackwell mid to high-tier GPUs, at $599 / £569 (around AU$944), but the market told a different story with prices soaring far above that.
Fortunately, prices have recently fallen back down to original retail pricing, and I'm seeing more stock and availability for both Team Green and Team Red GPUs than ever before.
I'm hoping that prices can stabilize and stay within reasonable ranges leading up to the eventual launch of Nvidia's new Super GPU models, as it could decrease the chances of ludicrous pricing. Let's just watch this space...
Today, our world relies on maps – think about how many apps and services you use daily, both personally and professionally, that use a location-based component.
Given how much of the world relies on maps, you’d think there are lots of maps designed to allow businesses and their developers to solve specific problems. Surprisingly, there are few maps for businesses to build with and integrate into their own applications and use cases.
While proprietary maps do come with much-needed quality and reliability, they also come with the huge sacrifice of not being able to combine useful data from other map ecosystems, providers and open sources. They’re not interoperable. As a result, most maps will never be as rich as they could be for their specific use case.
So, what challenges does this pose for organizations and developers innovating with digital maps and location data? And how can they find the right commercial mapping solution to enable new services and products to flourish?
Today’s map data integration challenge
In most cases, the digital maps we have today resulted from a single use case. However, when digital maps are built with a single end use, they lose their dynamism and become static and rigid — more akin to paper maps of old than the powerful, data-rich tools they can be.
This has meant that all kinds of organizations across the private and public sector have had to make do with limitations imposed upon them. Companies that build with map data have had to develop and maintain their own map stacks, balancing data from disparate sources that all reference different base maps and somehow making it all work.
They’ve had to invest significant time, money and resources into adapting their maps, fitting their data to its structure and making it work for their use case. Over the years, these maps have been modified and adapted to work for other use cases and have become large and unwieldy.
Ultimately, something that’s adapted to solve a single problem is never going to be as good as something bespoke and purpose built to solve many problems – but what is the solution?
Striving for a standardized, interoperable, open future
Now, organizations and their developers must select from the available mapping providers to determine which solution will meet their unique requirements. What has been missing from the market, however, is a solution in which all companies and devices can collaborate and communicate through a single digital representation of the physical world.
In a fast-paced and competitive landscape, companies shouldn’t be restricted on how to build for their customers, rather they should be empowered to utilize maps in the best way possible. They need a geospatial standardized map; one they can add their own data to and innovate on top of.
Think of it like the Internet – if every tech company created its own Internet and data couldn’t be moved between these systems, there would be a huge cost in moving that data around, and the Internet wouldn’t have developed into what it is today.
This layered approach, built on an open standard, will ensure that all parts of the digital stack work together, without the need to resolve or conflate data from one platform to another. This level of interoperability saves time, effort and a lot of headaches later down the line when the businesses try to meld data from another source or add additional functionalities.
Most importantly, this will free up resources so developers can focus on creating new services and products that are specific to customers’ needs and wants. With everyone working from the same standard, data becomes much easier to share and work with, acting as a catalyst for innovation.
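As a loose illustration of what "adding your own data on top of a standard base" can look like in practice, here is a small sketch using GeoJSON, an existing open geospatial format. The layer contents and property names are hypothetical; the point is simply that when both layers follow the same standard, no conflation or conversion step is needed to combine them.

```typescript
// Illustrative sketch: layering private business data over an open, standardized base map.
// GeoJSON is used as the shared format; layer contents and property names are hypothetical.
type Feature = {
  type: "Feature";
  geometry: { type: "Point"; coordinates: [number, number] };
  properties: Record<string, unknown>;
};
type FeatureCollection = { type: "FeatureCollection"; features: Feature[] };

// Open base layer, shared by everyone building on the same standard.
const baseLayer: FeatureCollection = {
  type: "FeatureCollection",
  features: [
    { type: "Feature", geometry: { type: "Point", coordinates: [-0.1276, 51.5072] }, properties: { name: "London", source: "open-base-map" } },
  ],
};

// Private layer a business adds on top; it never needs to be merged back into the open data.
const privateLayer: FeatureCollection = {
  type: "FeatureCollection",
  features: [
    { type: "Feature", geometry: { type: "Point", coordinates: [-0.128, 51.5074] }, properties: { storeId: "UK-042", footfall: 1830, source: "internal" } },
  ],
};

// Because both layers follow the same standard, combining them is trivial: no conflation step needed.
const combined: FeatureCollection = {
  type: "FeatureCollection",
  features: [...baseLayer.features, ...privateLayer.features],
};
console.log(`${combined.features.length} features across layers`);
```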
Putting this into practice – and elevating it with AI
AI and machine learning are turning that traditional approach to maps on its head – allowing businesses to create new services faster, more accurately and with fewer developer and operator hours. With AI and machine learning, developers are better equipped to process data and turn observations into edits, updates and features as quickly and accurately as possible.
Humans are still required to check for errors, continually improve algorithms and ensure the AI is doing what it’s supposed to do. However, machines can now do the heavy, laborious lifting. It’s increasing the accuracy and freshness of maps and making developers far more effective and productive.
What does this look like in practice? For the automotive industry, a standardized AI-enabled base map will allow carmakers to integrate real-time traffic data, vehicle-to-infrastructure communication and even electric vehicle charging stations into a cohesive system that supports the future of mobility.
In the public sector, those developing smart cities will benefit from the privacy, precision and flexibility offered by a standardized, AI-driven base map. With real-time data at their disposal, city planners can create more efficient transport networks, improve infrastructure, and develop smart systems that respond to the changing needs of their citizens. Furthermore, the ability to add their own data into a private layer makes this approach incredibly valuable for applications where data protection is paramount.
Meanwhile, in logistics, the ability to quickly adapt to changes in road conditions, optimize delivery routes, and integrate external data – from fuel consumption to environmental impact – into a map is a game-changer for companies seeking to streamline operations and reduce costs.
In the future, maps will continue to be a core tool in the functioning of global business, navigation and our daily lives. However, maps – specifically, the way they are made – need to adapt to give organizations the flexibility and scalability needed to make everything work well together.
When an orchestra is all playing from the same sheet music, guided by an expert conductor, symphonies are created. In the context of maps, standardization brings enhanced accuracy, freshness and interoperability. Only through this unified, collaborative approach will innovation and end user satisfaction skyrocket.
This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro
Although AI PCs are becoming increasingly available to both consumers and businesses, it seems firms are still not rushing to buy them.
New data from Canalys found that around three-quarters (73%) of B2B partners were aware of Copilot+ PCs between March and April 2025, yet only one in three considered AI capabilities important in purchasing decisions.
Despite the huge performance updates, businesses still look to be prioritizing Windows 11 refreshes and battery life over Copilot+ exclusive features, particularly with the Windows 10 end of life on the horizon.
Copilot+ PCs don't seem to be taking off
Initially launched with Qualcomm Snapdragon X chips and later available with Intel Core Ultra 200V and AMD Ryzen AI 300 series chips, Copilot+ PCs are seen as high-end devices with 40+ TOPS NPUs for local AI processing.
Canalys' data shows nearly one in four (23%) PCs sold globally in the final three months of 2024 was an AI PC. However, this is a generalized term that means different things across the industry. For Canalys, it means that the devices include a "chipset or block for dedicated AI workloads such as an NPU."
However, Context Senior Analyst Marie-Christine Pygott explained (via The Register) that only 9% of the 1.2 million AI-capable PCs shipped by European distributors in Q2 2025 were classified as Copilot+ PCs, meeting the 40 TOPS requirement.
Pygott blamed the slow uptake on high pricing, a lack of use cases, and low awareness of what a Copilot+ PC is and what it can do. Some enterprise customers have also been reluctant to move to Arm-based Snapdragon chips due to software compatibility issues.
However, things could be on the verge of changing, with a recent Dell survey revealing that around three in five (62%) IT decision-makers would prefer a Copilot+ PC over a regular PC.
Looking ahead, Canalys expects 60% of the PCs shipped in 2027 to be AI-capable, with 2025 potentially seeing them hold a 40% market share.
Top online payments system PayPal and one of the best website builders, Wix, have strengthened their partnership with new integrations, making operations simpler for ecommerce website owners, and checkouts easier for customers.
PayPal now comes as a built-in part of Wix Payments, meaning merchants will be able to connect their PayPal Business accounts, and manage all transactions in a single dashboard, alongside other Wix Payments activity. Previously, if merchants running Wix websites wanted to offer PayPal as a payment gateway, they had to switch between two platforms for all operations, including reports, chargeback alerts, and payouts.
Furthermore, the money from PayPal purchases will now flow directly into the Wix Payments account, giving merchants clearer visibility over their income, and reducing the need to reconcile between two systems.
Merchants will also be able to benefit from PayPal’s broader suite of features, such as PayPal Pay Later (BNPL) and Venmo, it was said. Finally, PayPal will also now serve as a Payment Service Provider (PSP), processing card purchases within Wix Payments.
“We’re always looking for ways to create more seamless experiences for our users and provide them with the best way to accept payments and manage funds online, in person, and on the go,” said Amit Sagiv and Volodymyr Tsukur, Co-Heads of Wix Payments.
“By bringing PayPal under the Wix Payments umbrella, we gain significantly more control over the user experience and how PayPal’s products are delivered to our merchants. This deeper integration allows us to improve conversion, offer more value, and drive stronger profitability, while giving our users a faster, more unified checkout flow.”
At press time, the new integration is only available to Wix Payments users in the US - however, the company said there are plans to make this feature available in more regions "over time".
Mainstream media channels were once considered brands' primary destination for digital marketing across inspiration, consideration, and conversion, but that is no longer true today.
With the growing diversification of the media landscape, Retail Media Networks (RMNs), collections of digital channels owned by retailers, have emerged as some of the fastest-growing digital media channels.
With healthy annual double-digit growth, the global retail media market is expected to reach $179.5 billion by 2025. In the UK alone, retail media ad spending is expected to outdo TV ad spending in 2025 and exceed £7 billion in 2028.
Amazon leads the pack with the lion’s share of retail media revenue (~$60bn in 2024). Walmart is a distant second (~$4bn). This gap speaks to the market’s growth potential and the intense competition facing other RMNs.
Compared to thin traditional retail margins, RMN margins typically exceed 70%. Drawn by this additional revenue stream and margin contribution potential, many retailers have entered the fray, with over 200 RMNs launched in the last few years.
The rise of RMNs:
The availability of various social media and online channels means the path to purchase is no longer linear and now spans multiple channels. Post-pandemic, consumer behavior has changed significantly, as seen in the emergence of the Research Online Purchase Offline, or ‘ROPO’, effect.
Both local and large brands are constantly seeking opportunities to create brand awareness across available channels. They want to reach consumers with the right messages, right content, and at the right moment on their path to purchase.
Today’s retailers offer a variety of ad units and ad formats with audience reach across an extended ecosystem, including their own onsite, in-store, and partner networks. Most importantly, retailers with the right shopper loyalty programs have high-quality first-party (1P) data that advertisers want to capitalize on. Therefore, advertisers are more willing to invest in retail media that can deliver incrementality and ROI.
A well-established RMN can create a true flywheel effect for retailers, growing sales, consumer experience, and ad revenue.
Challenges to effectiveness of RMNs:
Despite the opportunity the RMN business presents, retailers may not generate the expected revenues from brands and their agencies for various reasons, such as the lack of a suitable operating model and technology capabilities. The retail business requires a buyer mindset, while media requires a seller mindset.
The absence of integrated joint business planning (JBP) hampers collaboration between retailer and brand organizations. Insufficient technology capabilities lead to poor 1P data and limited ad inventory and formats, without a self-service model or supplier insights to verify ROI and incrementality. Organizations also often apply the wrong metrics to measure success. RMNs additionally face intense competition from rival retailers.
Ingredients of a successful RMN:
Currently, over 80% of the RMN spend by brands is for onsite (retailer’s .com and mobile app) channels in the form of sponsored products, brands, display ads, and videos - their primary focus is bottom-of-the-funnel marketing.
Retailers can unlock a high-margin revenue stream by monetizing their 1P data across their omnichannel assets and becoming full-funnel players: ecommerce sites, mobile apps, in-store ad units, magazines, and themed events.
With offsite channels like Meta, Google, TikTok, CTV, and in-store digital screens, RMNs can transform into full-funnel marketing channels. Many have already become omnichannel media owners through strategic partnerships such as Tesco Media & Insights + ITVX and Walmart + TikTok.
The following steps will support the success of RMNs:
When it comes to in-store, the ability to integrate ad servers and screens delivering ad content, including a feedback loop on aspects like the number of impressions shown and view time, is crucial. By mapping these metrics against in-store purchases, retailers help brands get an accurate view of sales incrementality, iROAS, and other key metrics to close the marketing loop.
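As a rough illustration of that "closing the loop" calculation, the sketch below estimates incremental return on ad spend (iROAS) by comparing revenue in stores exposed to a campaign against a comparable baseline. The figures, and the idea of using control stores as the baseline, are illustrative assumptions rather than a method prescribed by any particular RMN.

```typescript
// Illustrative iROAS estimate: incremental revenue attributable to the campaign divided by ad spend.
// The numbers and the control-store baseline are hypothetical, not real data.
function iROAS(exposedRevenue: number, baselineRevenue: number, adSpend: number): number {
  const incrementalRevenue = exposedRevenue - baselineRevenue; // lift versus comparable control stores
  return incrementalRevenue / adSpend;
}

// Example: stores running in-store screen ads took 120,000 in revenue versus a 100,000 baseline,
// on 5,000 of ad spend, giving an iROAS of 4.
console.log(iROAS(120_000, 100_000, 5_000)); // 4
```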
RMNs that offer a 360-degree view of customer interactions across retailer touchpoints will help brands achieve micro-segmentation and hyper-personalization.
From media buyer to agency mindset:
To compete against the likes of Amazon, Google, and Meta, RMNs must demonstrate how they can provide superior ROI to brand advertisers by leveraging AI and ML technologies that influence consumer behavior. A consulting partner like Infosys can draw from its vast experience in implementing and integrating such technology platforms for global retailers.
Above all, retailers must begin to view RMN earnings as an additional revenue stream derived from a brand’s marketing spend. Those able to effectively don an agency’s hat in selling ad performance will encourage brands to entrust these precious marketing resources to them.
This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro
Security professionals have long been reporting high levels of stress and burnout, which is only compounded by a skills shortage in the industry, and new research claims the sheer volume of threats, as well as the data those threats produce, is putting firms at risk.
Research from Google Cloud found threat notifications aren’t the helpful tool they could be, and can in fact overwhelm security teams, with nearly two-thirds (61%) of security practitioners saying they think there are 'too many threat intelligence data feeds', and 60% believing there are too few threat analysts to sift through the data efficiently.
“Rather than aiding efficiency, myriad [threat intelligence] feeds inundate security teams with data, making it hard to extract useful insights or prioritize and respond to threats. Security teams need visibility into relevant threats, AI-powered correlation at scale, and skilled defenders to use actionable insights, enabling a shift from a reactive to a proactive security posture,” the study argued.
Needles in a haystack
Too much data leaves analysts stuck in ‘reactive mode’: 86% of respondents say their organisation has gaps in its understanding of the threat landscape, 85% say more focus could be put on emerging threats, and 72% admit they are mostly reactive to threats, unable to get ahead of trends.
Adjacent research from SentinelOne shows that a large proportion of cloud security alerts are false positives (not relevant to the organisation). The majority of respondents (53%) say that over half of the alerts they receive are false positives, underlining just how real ‘alert fatigue’ is.
This makes securing cloud environments difficult, say 92% of respondents, with too many point solutions leading to management and integration issues, which in turn produce more alerts of lower quality, and therefore slower reactions to attacks amid the confusion.
Perhaps unsurprisingly, both sets of research have one suggestion to solve this issue - and it’s not investing in better training and support to address the skills shortage. Instead, you guessed it, it’s AI.
AI can help ease the pressure by improving an organisation’s ability to operationalise threat intelligence, generating ‘easy-to-read summaries’ and recommending next-steps to ‘uplevel junior analysts’, Google's research says.
"We believe the key is to embed threat intelligence directly into security workflows and tools, so it can be accessed and analyzed quickly and effectively," noted Jayce Nichols, Google Cloud Director, Intelligence Solutions.
"AI has a vital role in this integration, helping to synthesize the raw data, manage repetitive tasks, and reduce toil to free human analysts to focus their efforts on critical decision-making."
Xbox has unveiled its plans for Gamescom 2025, which will include the opportunity to play a Hollow Knight: Silksong demo.
The brand will have a strong presence at the European gaming event, which runs from August 20 to 24 in Cologne, Germany. The Xbox booth will show off more than 20 games across a whopping 120 demo stations, alongside offering photo opportunities and unique experiences.
Big highlights include hands-on time with the Asus ROG Xbox Ally and ROG Xbox Ally X, the two recently revealed Xbox PC handhelds. A demo of Hollow Knight: Silksong will be playable on the handheld, potentially giving us our first substantial look at the long-awaited game in years.
Hollow Knight: Silksong was first announced back in 2019 but we have hardly heard a peep about it aside from a few brief appearances at various showcase events such as the Nintendo Switch 2 Direct earlier this year. The game also featured prominently in the Asus ROG Xbox Ally and ROG Xbox Ally X reveal, where it was confirmed that it would be available in time for the handheld's launch.
Could this mean that a Hollow Knight: Silksong release is around the corner? It definitely seems so, especially with the handhelds slated for later this year.
The Xbox booth will also offer visitors the chance to try the likes of Grounded 2, the first public hands-on demo of Ninja Gaiden 4, in addition to some third-party titles like Borderlands 4 and Metal Gear Solid Delta: Snake Eater.
Microsoft has announced that it will require age verification for the continued use of Xbox social features, per the UK's Online Safety Act.
In a new Xbox support post, Microsoft said: "As part of our compliance programme for the UK Online Safety Act and our ongoing investments in tools and technologies that help ensure age-appropriate experiences, we're introducing age verification for Microsoft accounts in the UK."
The company explained that players over the age of 18 who don't verify their age between now and the beginning of 2026 can still play their Xbox console, but "starting early next year", certain social features will be limited to friends only unless age verification is complete.
For now, accounts belonging to players 18 and over in the UK are being asked to verify their accounts and will begin seeing notifications encouraging them to verify their age. This is an optional process for now, but it will change come early 2026.
Until an account's age is verified, users will only be able to use voice and text communication, party functionality, game invites, and user-generated content like the Activity Feed.
Without age verification, the Looking for Group and custom clubs features won't be accessible.
"If you have an existing account or are setting up a new one, you may be asked to verify your age using Yoti, a trusted and secure third-party identity verification service," the post reads.
There are several ways to verify identity, including with a government-issued photo ID, like a passport, residency card, or any other government-issued identification document with the user's picture on it.
They can also use a live photo, ID verification, a mobile number to verify age through their carrier, and a credit card check.
"Whether a player verifies their age will not affect any previous purchases, entitlements, gameplay history, achievements, or the ability to play and purchase games, however we encourage players to verify their age via this one-time process now to avoid uninterrupted use of social features on Xbox in the future," said Xbox vice president of gaming trust and safety Kim Kunes in a separate Xbox Wire post.
"As this age verification process rolls out across the UK, we’ll continue to evaluate how we can keep players around the world safe and learn from the UK process. We expect to roll out age verification processes to more regions in the future. There is no one-size-fits-all solution to player safety, so these methods may look different across regions and experiences."
Xbox isn't the first platform to be affected by the UK's Online Safety Act. Reddit and Discord have also implemented new age verification systems to access 18+ content; however, gamers are already getting around Discord's tool by using Death Stranding's photo mode.
Microsoft has reportedly fixed a bug in Windows 11 which caused the mouse cursor to supersize itself in irritating fashion under certain circumstances.
Windows Latest explained the nature of the bug, and provided a video illustrating the odd behavior. It shows the mouse cursor being at its default size (which is '1' in the slider in settings for the mouse), and yet clearly the cursor is far larger than it should be.
When Windows Latest manipulates the slider to make the mouse cursor larger, then returns it to a size of '1', the cursor ends up being corrected and back to normal. Apparently, this issue manifests after resuming from sleep on a Windows 11 PC.
Windows Latest says this bug has been kicking around since Windows 11 24H2 first arrived (in October last year), but the issue hasn't been a constant thorn in its side. Seemingly it has only happened now and again – but nonetheless, it's been a continued annoyance.
Not anymore, though, because apparently with the July update for Windows 11, the problem has been fixed.
(Image credit: Zachariah Kelly / TechRadar)
Analysis: Mouse matters
Oddly enough, Microsoft never acknowledged this issue, although other Windows 11 users certainly have – Windows Latest hasn't been alone in suffering at the hands of this bug.
I've spotted a few reports on Reddit regarding the issue, and some posters have experienced the supersized cursor after rebooting their machine rather than coming back from sleep mode (and there are similar complaints on Microsoft's own help forums).
Whatever the case, the issue seems to be fairly random in terms of when or whether it occurs, but the commonality is some kind of change of state for the PC in terms of sleeping or restarting.
While the mouse cursor changing size may not sound like that big a deal, it's actually pretty disruptive. As Windows Latest observes, having a supersized cursor can make it fiddlier and more difficult to select smaller menu items in apps or Windows 11 itself.
And if you weren't aware of the mentioned workaround – to head into the Settings app, find the mouse size slider, and adjust it – you might end up rebooting your PC to cure the problem. And that's if a reboot does actually fix things, because, as some others have noted, restarting can cause the issue, too.
This was an irksome glitch, then, so it's good to hear that it's now apparently resolved with the latest update for Windows 11.
New data from Synergy Research has claimed European providers of cloud storage and other services only account for 15% of their own regional market, highlighting the hold that US rivals have even in foreign territories.
Overall market share dropped to around 15% in 2022, remaining steady ever since, but in the five years from 2017 to 2022 European cloud providers lost half of their share, down from 29%.
While European providers were able to triple their revenues between 2017 and 2024, the market grew sixfold in that same period – it's now worth an estimated €61 billion.
Europe's cloud market is dominated by... the US
Amazon, Microsoft and Google now control around 70% of the European cloud market, Synergy found, with SAP and Deutsche Telekom confirmed to be the leading EU providers, but with just 2% of the market each. OVHcloud, Telecom Italia and Orange rounded out the top five.
Synergy described the dominance of US cloud giants as an "impossible hill to climb" for European challengers, with US providers typically investing around €10 billion every single quarter into European infrastructure. On the flip side, European firms typically lack the long-term investment support required by the cloud sector.
"The cloud market is a game of scale where aspiring leaders have to place huge financial bets, must have a long-term view of investments and profitability, must maintain a focused determination to succeed, and must consistently achieve operational excellence," Synergy Chief Analyst John Dinsdale explained.
However, change could be on the horizon with data privacy issues bubbling to the surface under Trump-era US policies - as Microsoft recently admitted it can't guarantee data sovereignty in Europe if the US government demands access.
Still, Dinsdale believes the US cloud dominance could be hard to shake off now that it's embedded in Europe: "While many European cloud providers will continue to grow, they are unlikely to move the needle much in terms of overall European market share."
Spider-Man: Brand New Day won't arrive in theaters until July 2026, but some fans think they've already worked out where it'll sit on the Marvel timeline.
With filming due to begin on Spider-Man: Brand New Day in August, preparations have been underway in Glasgow for a number of weeks now. The Scottish city is being used as a stand-in for New York City (NYC), so Glaswegians have seen their hometown receive a US makeover before the cameras start rolling.
One eagle-eyed Marvel fan has wasted no time snapping images of the sets being erected for Spider-Man 4, too. Indeed, X/Twitter user lukec1605 recently uploaded some photographs that indicate what year it might take place in.
Photos from set on #SpiderManBrandNewDay @eavoss @NewRockstars pic.twitter.com/LZICv2Iohf (July 28, 2025)
As the above post reveals, the Marvel Cinematic Universe's (MCU) version of NYC is being renovated, with numerous construction builds in progress. This might have something to do with events that occurred in Thunderbolts*, aka one of three new movies released by Marvel Studios this year. That film is set in the MCU's present, which is believed to be the year 2027. You can read more about what happened in that flick via our Thunderbolts* ending explained piece.
But I'm getting off-track. Two of the images in the aforementioned post reveal that work is due to be completed on these renovations and new builds by December 2027. Cue MCU fans jumping to conclusions and convincing themselves that the next Marvel Phase 6 movie will take place in late 2027.
I'm not satisfied this is the case, though. Those pictures only indicate that the buildings will be erected before that year ends. Depending on the size of said build, it can take multiple years to complete work on them, too. It's entirely possible, then, that Spider-Man's next outing in the MCU could be set in early or mid-2027, or even sometime in 2026.
Some Marvel fans don't think Spider-Man 4 will be set in late 2027 (Image credit: Reddit)
There's evidence that Brand New Day could take place well before December 2027 as well. Season 1 of Daredevil: Born Again, whose story is thought to play out between late 2026 and early 2027, sees Wilson Fisk become NYC's latest mayor. Throughout the Disney+ show's first installment, Fisk fast-tracks a number of developments in the city, so it's plausible that the ongoing construction work was greenlit by him. If that's the case, events in Spider-Man 4 might run concurrent to Daredevil: Born Again season 1.
That said, Jon Bernthal's Frank Castle/The Punisher will have a supporting role to play in Brand New Day. The last time we saw him, i.e. in Born Again's season 1 finale, he escaped captivity after being incarcerated in a secret prison facility patrolled by Fisk's Anti-Vigilante Task Force. In order to show up in Spider-Man 4, he'll need to have broken out of jail before that film begins. This would mean Brand New Day has to take place from mid-2027 onwards.
Hopefully, we'll get a better idea of when the film is set, plus who Stranger Things' Sadie Sink is playing in Spider-Man 4, when principal photography finally gets underway. In the meantime, find out why Spider-Man: Brand New Day's release was delayed or learn more about how its official title takes its cue from the most controversial moment in Spidey's comic book history.
PlayStation's Project Defiant fight stick finally has an official name, alongside brand new details and a vague release window.
A new PlayStation Blog post has revealed that Project Defiant is officially called the FlexStrike, and it's currently set to arrive sometime in 2026. The news comes right before Sony's own EVO 2025 fighting game tournament event in Las Vegas, where the FlexStrike will be on display (but not playable) for the first time.
FlexStrike will be compatible with both PS5 and PC, and it supports Sony's proprietary PlayStation Link wireless tech. Here, a PlayStation Link USB adapter can be used to hook up a compatible gaming headset - like the Pulse Elite or Pulse Explore earbuds - as well as up to two FlexStrike controllers for local play.
Like many of the best fight sticks, the FlexStrike will also be customizable to a degree. One really cool feature shown in the trailer (above) is a 'toolless' gate swap. By opening the non-slip grip at the bottom, players will be able to swap between square, circular, and octagonal gates on the fly with the joystick. This means you won't have to buy a separate joystick or gate, or use any additional tools to get the job done.
The controller has several amenities you'll find on other top fight sticks, including a stick input swap for menu navigation, and a lock switch that disables certain buttons (like pausing) for tournament play. The eight face buttons are also mechanical, which means they should register clicky, instantaneous inputs.
Lastly, players can use a DualSense Wireless Controller in tandem with the FlexStrike for menu navigation, not unlike what we see with the PlayStation Access controller.
PlayStation appears to be investing quite heavily in fighting game hardware and software. It's likely that the FlexStrike will launch around the same time as Marvel Tokon: Fighting Souls, published by PlayStation Studios and developed by Arc System Works, the team behind Guilty Gear Strive, Granblue Fantasy Versus: Rising, and many more of the best fighting games.
TechRadar Gaming will be very keen to deliver a verdict on the FlexStrike when it launches next year, so stay tuned for a potential review in 2026.
Experience Level Objectives (XLOs) represent a fundamental evolution in monitoring philosophy, moving beyond the conventional Service Level Objectives (SLOs) and SLAs that have dominated IT operations for years.
This post examines the key differences between these approaches and explains why XLOs provide a more business-aligned framework for modern digital operations.
User-centric vs. infrastructure-centric measurements
Traditional SLA and SLO monitoring has primarily focused on system availability and IT infrastructure health. This approach centers on technical metrics like uptime percentages, server response times, and infrastructure resource utilization. While these metrics provide valuable insights into system health, they create a significant disconnect between technical indicators and actual business metrics.
In contrast, XLO monitoring prioritizes metrics that directly gauge user experience and satisfaction. This shift reflects a growing recognition that digital service quality cannot be measured solely by whether systems are functioning, but rather by how well they are functioning from the user's perspective. As research increasingly shows, "slow is the new down"—acknowledging that poor performance, even without complete failure, can severely impact user satisfaction and business outcomes.
This philosophical difference addresses a critical blind spot in traditional monitoring approaches. A system can report 100% uptime while delivering a frustratingly slow experience that drives users away. XLOs close this gap by measuring what actually matters to users: the quality and speed of their interactions with digital services.
The importance of monitoring from where it matters
Most monitoring tools rely on cloud-based vantage points for digital experience monitoring — convenient (for the vendor), but disconnected from the actual user experience. These first-mile checks confirm whether the infrastructure is up, but say little about how your application is experienced by users in the real world. Hence, this approach is primarily useful for QA purposes, especially for new code releases.
XLOs shift the perspective. They depend on insights captured from where users truly are—whether that’s a connection inside an office through a regional ISP, a mobile connection through a mobile operator, or even a laptop connected via Starlink. This visibility uncovers the real issues users face: congestion, routing delays, delays from third-party code, and other last-mile failures that cloud monitoring can’t see.
If SLOs tell you your system is available, XLOs tell you whether it’s delivering the experience the business expects to real users. This outside-in view is what turns data into real business insight. It closes the visibility gap between infrastructure health and user experience—and that’s where the real value lies.
End-to-End Journey Perspective
Traditional SLOs often focus on individual components or services, creating a fragmented view of performance. XLOs, by contrast, are designed to capture the complete user journey across multiple systems and services. This end-to-end perspective reflects the reality that users experience services holistically, not as isolated components. Modern digital services span multiple providers, platforms, and technologies, making isolated component monitoring inadequate for ensuring overall service quality.
While an SLA may measure the uptime of an S3 storage bucket, or the uptime of your DNS or CDN provider, these are only three of the dozens or hundreds of components in an entire system. As a rule of thumb, the quality of the experience delivered by a system is only as good as the worst of its components. Thus, while most components could be working perfectly, an issue in a third-party API may render the entire experience unacceptable for your users.
The XLO, by contrast, is less concerned with CPU utilization or database response time and entirely focused on the resulting experience for a user – whether that user is a customer, an internal user, or an API consumed by an internal or external system.
Business alignment and value demonstration
A critical difference between XLOs and traditional SLOs is their alignment with business outcomes. Traditional SLOs primarily serve technical teams, measuring system health in terms that may not translate directly to business impact, while SLAs establish accountability from vendors that deliver a component of the functionality of a system. This creates challenges in demonstrating IT's value to business stakeholders and securing resources for performance improvements.
XLOs fundamentally change this dynamic by providing metrics that directly correlate with business performance. By moving beyond "Is it up?" to answer "Is it meeting our users’ expectations?", XLOs address what business stakeholders actually care about. This alignment helps prove the value of IT Operations and justify investments in performance improvements by demonstrating clear connections between technical performance and business outcomes.
As more components of our business and personal lives are based on digital experiences or supported by digital processes, delivering on those expectations is a business priority. A recent survey of thousands of users showed that bad digital experiences are the main reason consumers switch to different banking providers.
As a specific example, a team can set specific XLO targets that reflect business priorities, such as ensuring the critical part of loading a page, measured as Largest Contentful Paint (LCP), does not exceed 2.5 seconds 90% of the time in a given month. This specific threshold directly impacts bounce rates and user engagement, providing clear business value.
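A minimal sketch of how that example target could be evaluated is shown below. The 2.5-second threshold and 90% objective come from the example above; the sample measurements and the simple attainment calculation are made up for illustration.

```typescript
// Minimal sketch: did the "LCP under 2.5s for 90% of page views this month" XLO hold?
// The threshold and target come from the example above; the measurements are invented.
function xloAttainment(samplesMs: number[], thresholdMs: number): number {
  const good = samplesMs.filter(ms => ms <= thresholdMs).length;
  return good / samplesMs.length; // share of experiences that met the target
}

const lcpSamplesMs = [1800, 2100, 2400, 2600, 1900, 3100, 2200, 2450, 2000, 2300];
const attainment = xloAttainment(lcpSamplesMs, 2500);
console.log(`LCP XLO attainment: ${(attainment * 100).toFixed(1)}%`); // 80.0% here, below the 90% objective
```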
Accelerating maturity with XLOs
According to the GigaOm Maturity Model for IPM, organizations progress through five stages—from chaotic, reactive operations to optimized, business-driven monitoring. Traditional SLOs keep teams stuck in the early stages, focused on infrastructure uptime and siloed metrics. XLOs act as a catalyst for maturity by:
Aligning with advanced stages: XLOs introduce user-focused metrics that resonate with the 'Quantitative' and 'Optimized' stages, emphasizing business outcomes.
Facilitating proactive issue detection: Tools like burndown charts enable early identification of performance degradations, a hallmark of mature operations.
Fostering cross-functional collaboration: XLOs unify teams around shared objectives, essential for achieving higher maturity levels.
For example, a retail company using XLOs to monitor checkout flow performance (e.g., Time to Interactive across regions) isn’t just fixing errors—they’re optimizing a revenue-critical journey, a hallmark of GigaOm’s value-based observability.
Proactive vs. Reactive Monitoring
Traditional SLO monitoring often creates a reactive posture, where teams respond to issues after they've already impacted users. This approach typically waits for error thresholds to trigger alerts before teams mobilize to address problems. Once these thresholds are crossed, the business is already suffering some impact.
XLO monitoring enables a substantially more proactive approach. By tracking performance trends over time and proactively simulating user experiences from real-world locations, businesses can detect gradual degradations before they breach critical thresholds – and often before they impact users.
Tracking XLOs over time is where burn-down charts come into play. Burn-down charts help track the progress of your performance against your set objectives, showing how much of your performance budget is left as time goes on.
When a team adopts XLOs as a KPI, it influences how the team makes decisions, how it sees success, and what risks are acceptable. Operations can evaluate whether to release changes based on their projected impact on experience metrics, maintaining consistently high user satisfaction. In this way, burn-down charts offer a clear status of service health over periods of time.
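The budget behind a burn-down chart can be expressed very simply. The sketch below assumes a 90% objective over a month of page views, so 10% of experiences may miss the target; the traffic volume and the number of misses are invented figures for illustration.

```typescript
// Sketch of the budget behind an XLO burn-down chart (illustrative numbers only).
// With a 90% objective, up to 10% of experiences in the period may miss the target.
function remainingBudget(totalExpected: number, objective: number, missedSoFar: number) {
  const allowedMisses = Math.round(totalExpected * (1 - objective)); // round away floating-point noise
  const remaining = allowedMisses - missedSoFar;
  return { allowedMisses, remaining, remainingPct: remaining / allowedMisses };
}

// Example: 1,000,000 page views expected this month, 90% LCP objective, 30,000 slow views so far.
const budget = remainingBudget(1_000_000, 0.9, 30_000);
console.log(budget); // { allowedMisses: 100000, remaining: 70000, remainingPct: 0.7 }
```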
Breaking down organizational silos
A significant practical difference between XLO and traditional SLO approaches lies in their organizational impact. Traditional SLOs often reinforce existing silos between development, operations, and business teams, as each group focuses on their own specialized metrics.
XLOs, by contrast, create a common language and shared objectives across organizational boundaries. By providing metrics that matter to both technical and business stakeholders, XLOs facilitate cross-functional collaboration and shared accountability for user experience. This collaborative approach enables faster problem resolution and more effective performance optimization.
Building a digital operations center (DOC)
For a long time, IT operations teams have built NOCs and SOCs to manage network operations and security. In today’s world where most business interactions are digital, as organizations mature, many are formalizing their cross-functional efforts by building Digital Operations Centers (DOCs).
A DOC brings together teams across IT, engineering, and business functions to monitor experience-centric metrics in real time. With XLOs at the core, a DOC isn’t just a control room—it’s a shared space for aligning around user outcomes, accelerating response times, and making performance a business-wide priority. It’s a sign of maturity and a strategic investment in digital resilience.
A DOC puts digital user experience at the center of the business and provides visibility into how every critical digital operation in the business performs, and how all the key components that contribute to delivering that experience are performing – from internet backbone to third-party components, cloud services, APIs, DNS, front-end servers, databases, and microservices, down to application code.
A DOC is a natural evolution of a NOC and a SOC as IT operations teams evolve from a systems-uptime focus to becoming a true operational intelligence team that is a critical component of how the business operates, and not only the team keeping the lights on.
Specific Experience Metrics
XLO monitoring can measure specific performance metrics that directly impact user experience, including:
Wait Time: The duration between the user’s request and the server’s initial response
Response Time: The total time taken for the server to process a request and send back the complete response
First Contentful Paint (FCP): The time it takes for the browser to render the first piece of content on the screen
Largest Contentful Paint (LCP): Time when the largest content is visible within the browser
Cumulative Layout Shift (CLS): A measure of how much the layout of the page shifts unexpectedly during loading
Time to Interactive: The time it takes for a page to become fully interactive and responsive to user inputs
These metrics create a multidimensional view of the user experience that traditional infrastructure-focused SLOs simply cannot provide.
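Several of these metrics can be observed directly in the browser using the standard PerformanceObserver API, as in the generic sketch below. This is ordinary web-platform code rather than any particular monitoring vendor's agent; real tooling would also need to report the values somewhere and handle differences in browser support.

```typescript
// Browser-side sketch: observing a few of these experience metrics with the standard
// PerformanceObserver API. Generic web-platform code, not a vendor's monitoring agent.

// First Contentful Paint (FCP)
new PerformanceObserver((list) => {
  const fcp = list.getEntriesByName("first-contentful-paint")[0];
  if (fcp) console.log("FCP (ms):", fcp.startTime);
}).observe({ type: "paint", buffered: true });

// Largest Contentful Paint (LCP): the last entry reported is the current best candidate
new PerformanceObserver((list) => {
  const entries = list.getEntries();
  const last = entries[entries.length - 1];
  if (last) console.log("LCP candidate (ms):", last.startTime);
}).observe({ type: "largest-contentful-paint", buffered: true });

// Cumulative Layout Shift (CLS): sum shifts not caused by recent user input
// (the layout-shift fields aren't in the default TS DOM typings, hence the cast)
let cls = 0;
new PerformanceObserver((list) => {
  for (const entry of list.getEntries() as any[]) {
    if (!entry.hadRecentInput) cls += entry.value;
  }
  console.log("CLS so far:", cls);
}).observe({ type: "layout-shift", buffered: true });
```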
The Strategic Value of XLO Monitoring
SLOs and Experience Level Objectives (XLOs) aren’t just buzzwords; they're guiding principles for ensuring performance indicators align with real customer expectations.
According to the SRE Report 2025, 40% of businesses are prioritizing the adoption of SLOs and XLOs over the next 12 months. By focusing on user experience rather than just system availability, providing specific experience-focused metrics, aligning with business outcomes, enabling proactive optimization, capturing end-to-end journeys, and breaking down organizational silos, XLOs provide a more comprehensive and business-relevant approach to monitoring.
This evolution reflects changing expectations from both users and businesses.
For organizations seeking to improve digital experience quality while demonstrating clear business value from IT investments, XLOs offer a powerful framework that goes beyond traditional SLO limitations. By implementing XLO monitoring, organizations can align technical performance with business objectives, ultimately delivering superior digital experiences that drive competitive advantage.
This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro
IT teams know the balancing act all too well. Security teams implement new protocols that generate a flood of user complaints. The IT help desk is overwhelmed with tickets that could have been prevented.
Meanwhile, employees bypass carefully designed systems because they're too cumbersome. And today's increasingly distributed workforce only exacerbates this balancing act, creating a larger attack surface across more devices, locations, and applications.
While IT management may have accepted this as the inevitable reality, the challenges are only intensifying. AI-powered cyberattacks are becoming more sophisticated daily, capable of adapting faster than traditional security measures can respond. The old playbook of treating security, IT operations, and employee experience as separate functions has reached its breaking point.
A unified approach is needed, or IT leaders risk not only exposing their organizations to security vulnerabilities, but also losing visibility and control of their digital work environments.
The "self-driving car" of enterprise ITAlthough the rise of new AI tools and devices has created headaches for IT, AI-powered digital environments, or an autonomous workspace, offer IT leaders a path to modernizing and knocking down the divisions that exist across employee experience, security and operations.
These environments self-configure, self-heal, and self-secure with minimal human intervention. Think of it as the "self-driving car" of enterprise IT.
Unlike traditional automated systems that follow preset rules and require constant human oversight, autonomous workspaces continuously learn from data patterns and user behaviors.
Because these workspaces monitor every aspect of the digital environment simultaneously, the silos that previously plagued IT teams' decision-making are eliminated, giving teams full context on their organization's digital environment.
For example, when a security anomaly emerges, the system doesn't just alert administrators; it automatically quarantines the threat while maintaining seamless user access to legitimate resources. When a device falls out of compliance, it self-corrects without user intervention.
And rather than looking at these issues in a vacuum, autonomous workspaces enable IT to connect the dots across different functions of the workplace, understanding whether an employee's application performance issue is underpinned by a larger problem or vulnerability.
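As a rough illustration of what that control loop can look like, consider the hypothetical sketch below; the event types and remediation actions are invented for the example and do not refer to any particular platform's API.

```typescript
// Hypothetical sketch of a self-healing control loop for an autonomous workspace.
// Event shapes and remediation actions are invented for illustration; they do not
// correspond to any real product API.

type WorkspaceEvent =
  | { kind: 'security-anomaly'; deviceId: string; indicator: string }
  | { kind: 'compliance-drift'; deviceId: string; setting: string; expected: string };

interface Remediator {
  isolateDevice(deviceId: string): Promise<void>;   // network quarantine
  restoreSetting(deviceId: string, setting: string, value: string): Promise<void>;
  notifyAdmins(summary: string): Promise<void>;
}

// Observe -> classify -> remediate -> report, without waiting on a human.
async function handleEvent(event: WorkspaceEvent, r: Remediator): Promise<void> {
  switch (event.kind) {
    case 'security-anomaly':
      // Contain first, then inform humans; legitimate access elsewhere is untouched.
      await r.isolateDevice(event.deviceId);
      await r.notifyAdmins(`Quarantined ${event.deviceId}: ${event.indicator}`);
      break;
    case 'compliance-drift':
      // Put the setting back silently: no ticket, no user interruption.
      await r.restoreSetting(event.deviceId, event.setting, event.expected);
      break;
  }
}
```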
The strategic imperative for not only IT teams, but a business's bottom line
While an autonomous workspace can free IT teams from the endless cycle of firefighting, the benefits of adopting one extend beyond the IT team, ultimately providing a foundation for business resiliency and cost efficiency.
1. Security rigor
As generative AI tools become embedded in daily workflows, they also broaden the attack surface, and a reactive security approach is proving inadequate. Autonomous workspaces flip this model by implementing predictive zero-trust security. Instead of waiting for threats to manifest, these systems continuously analyze patterns and behaviors to identify potential risks before they materialize.
The system makes intelligent trust decisions in milliseconds, based on a comprehensive understanding of user behavior, network conditions, and threat intelligence, helping equip a business for the increasingly sophisticated cyberattacks of today and the future.
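To make that idea concrete, here is a deliberately simplified, hypothetical sketch of a per-request trust decision; the signals, weights, and thresholds are illustrative assumptions, not a description of any specific product.

```typescript
// Hypothetical sketch of a per-request trust decision in a zero-trust model.
// Signal names, weights, and thresholds are illustrative only.

interface AccessSignals {
  deviceCompliant: boolean;      // endpoint passes posture checks
  geoVelocityAnomaly: boolean;   // "impossible travel" between logins
  behaviorDeviation: number;     // 0..1, drift from the user's normal pattern
  threatIntelHit: boolean;       // source IP or binary flagged by threat feeds
}

type Decision = 'allow' | 'step-up-auth' | 'quarantine';

function decide(signals: AccessSignals): Decision {
  // Hard stops first: known-bad indicators bypass scoring entirely.
  if (signals.threatIntelHit) return 'quarantine';

  // Otherwise compute a weighted risk score from softer signals.
  let risk = 0;
  if (!signals.deviceCompliant) risk += 0.4;
  if (signals.geoVelocityAnomaly) risk += 0.3;
  risk += 0.3 * signals.behaviorDeviation;

  if (risk >= 0.7) return 'quarantine';
  if (risk >= 0.4) return 'step-up-auth';
  return 'allow';
}

// Example: compliant device with mild behavioral drift is allowed without friction.
console.log(decide({
  deviceCompliant: true,
  geoVelocityAnomaly: false,
  behaviorDeviation: 0.2,
  threatIntelHit: false,
})); // "allow"
```

The point of the sketch is the shape of the decision, not the numbers: hard indicators short-circuit, softer signals are weighed, and the user only feels friction when the score demands it.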
2. Employee experience benefits
Organizations that take a holistic approach to employees’ digital experience gain more than just operational benefits. A modern digital experience gives employees self-service access to the apps, resources and the support they need, when they need it.
This approach helps reduce disruptions and prevents issues before they can impact employee productivity. With secure access from anywhere, employees can stay focused and in control of how they work.
The result is stronger collaboration, higher employee satisfaction, and a significant advantage in attracting and retaining top talent in a growing hybrid work environment.
3. Streamlined resources
Think about the traditional approach to endpoint management. Security teams set protocols. IT operations teams install management tools to ensure compliance. And user experience teams try to minimize the performance impact. The result? Conflicting priorities, duplicated efforts, and frustrated users. Autonomous workspaces break down silos and integrate these different functions into a single, intelligent platform, streamlining IT resources and costs, while enhancing collaboration across teams.
The most successful implementations of autonomous workspaces share a common characteristic: they eliminate artificial boundaries between security, IT operations, and employee experience teams. This convergence isn't just about organizational structure—it's about creating technology ecosystems where security and IT enhance rather than complicate employee productivity and collaboration.
As the enterprise landscape continues to evolve, the organizations that thrive will be those that embrace autonomous workspaces not merely as a technology solution, but as the foundation of their digital work strategy.
Generative AI is a headline act in many industries, but the data powering these AI tools plays the lead role backstage. Without clean, curated, and compliant data, even the most ambitious AI and machine learning (ML) initiatives will falter.
Today, enterprises are moving quickly to integrate AI into their operations. According to McKinsey, in 2024, 65% of organizations reported regularly using generative AI, marking a twofold increase from 2023.
However, the true potential of AI and ML in the enterprise won’t come from surface-level content generation. It will come from deeply embedding models into decision-making systems, workflows, and customer-facing processes where data quality, governance, and trust become central.
Additionally, simply incorporating AI and ML features and functionality into foundational applications won’t do an enterprise any good. Organizations must leverage all aspects of their data to create strategic advantages that help them stand out from the competition.
To do this, the data powering their applications must be clean and accurate to mitigate bias, hallucinations, and/or regulatory infractions. Otherwise, they risk issues in training and output, ultimately negating the benefits that the AI and ML projects were initially meant to create.
The importance of good, clean data
Data is the foundation of any successful AI initiative, and enterprises need to raise the bar for data quality, completeness, and ethical governance. However, this isn’t always as easy as it sounds. According to Qlik, 81% of companies still struggle with AI data quality, and 77% of companies with over $5 billion in revenue expect poor AI data quality to cause a major crisis.
In 2021, for example, Zillow shut down Zillow Offers after its faulty algorithms failed to accurately value homes, leading to massive losses. The case highlights a critical point: AI and ML projects must operate on good, clean data in order to produce accurate, reliable results.
Today, AI and ML technologies rely on data to learn patterns, make predictions and recommendations, and help enterprises drive better decision-making. Techniques like retrieval-augmented generation (RAG) pull from enterprise knowledge bases in real-time, but if those sources are incomplete or outdated, the model will generate inaccurate or irrelevant answers.
Agentic AI’s ability to act reliably hinges on consuming accurate, timely data in real time. For example, an autonomous trading algorithm reacting to faulty market data could trigger millions in losses within seconds.
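As a small, hypothetical illustration of guarding against stale sources, the sketch below filters retrieved documents by provenance and age before they reach the model; the document shape, approved source names, and 90-day cutoff are assumptions made for the example, not recommendations.

```typescript
// Hypothetical sketch: filter retrieved documents by freshness and provenance
// before they are passed to a generative model in a RAG pipeline.

interface RetrievedDoc {
  id: string;
  content: string;
  source: string;          // e.g. "confluence", "crm-export" (assumed names)
  lastUpdated: Date;       // when the source record was last modified
  score: number;           // retriever similarity score
}

const MAX_AGE_DAYS = 90;   // assumed staleness cutoff for this example
const APPROVED_SOURCES = new Set(['confluence', 'crm-export']);

function usableForGeneration(doc: RetrievedDoc, now: Date = new Date()): boolean {
  const ageDays = (now.getTime() - doc.lastUpdated.getTime()) / 86_400_000;
  return APPROVED_SOURCES.has(doc.source) && ageDays <= MAX_AGE_DAYS;
}

function buildContext(docs: RetrievedDoc[]): string {
  const fresh = docs.filter((d) => usableForGeneration(d));
  if (fresh.length === 0) {
    // Better to answer "I don't know" than to generate from stale or unvetted data.
    throw new Error('No fresh, approved sources retrieved; refusing to answer.');
  }
  return fresh
    .sort((a, b) => b.score - a.score)
    .map((d) => d.content)
    .join('\n---\n');
}
```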
Establishing and maintaining an environment of good data
In order for enterprises to establish and maintain an environment of good data that can be leveraged for AI and ML usage, there are three key elements to consider:
1. Build a comprehensive data collection engine
Effective data collection is essential for successful AI and ML projects, and enterprises need modern data platforms and tools, such as those for integration, transformation, quality monitoring, cataloging, and observability, to support the demands of their AI development and output. These tools ensure the organization is getting the right data.
Whether it is structured, semi-structured, or unstructured, the data collected should come from a variety of sources and methods to support robust model training and testing, capturing the different user scenarios a model may encounter after deployment. Additionally, companies must ensure they follow ethical data collection standards: whether the data is first-, second-, or third-party, it must be sourced correctly and with consent given for its collection and use.
2. Ensure high data quality
High-quality, fit-for-purpose data is imperative for the performance, accuracy, and reliability of AI and ML models. Given that these technologies introduce new dimensions, the data used must be specifically aligned with the requirements of the intended use case. However, 67% of data and analytics professionals say they don’t have complete trust in their organizations’ data for decision-making.
To address this, it's essential that enterprises have data that is representative of real-world scenarios, monitor for missing data, eliminate duplicate data, and maintain consistency across data sources. Furthermore, recognizing and addressing biases in training data is critical, as biased data can compromise outcomes and fairness and negatively impact customer experience and credibility.
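A minimal sketch of what automated checks for these issues (missing values, duplicates, cross-source consistency) might look like is shown below; the record shape, fields, and comparison logic are hypothetical and would differ for every data estate.

```typescript
// Illustrative data quality checks: missing values, duplicate records, and
// cross-source consistency. The record shape and fields are hypothetical.

interface CustomerRecord {
  id: string;
  email?: string;
  region?: string;
}

interface QualityReport {
  missingEmailRate: number;   // fraction of records with no email
  duplicateIds: string[];     // ids appearing more than once
  regionMismatches: string[]; // ids whose region differs between systems
}

function assessQuality(
  primary: CustomerRecord[],
  secondary: Map<string, string>, // id -> region from another system of record
): QualityReport {
  const missing = primary.filter((r) => !r.email).length;

  // Count occurrences of each id to surface duplicates.
  const counts = new Map<string, number>();
  for (const r of primary) counts.set(r.id, (counts.get(r.id) ?? 0) + 1);
  const duplicateIds = Array.from(counts.entries())
    .filter(([, n]) => n > 1)
    .map(([id]) => id);

  // Flag records whose region disagrees with the other system of record.
  const regionMismatches = primary
    .filter((r) => secondary.has(r.id) && secondary.get(r.id) !== r.region)
    .map((r) => r.id);

  return {
    missingEmailRate: primary.length > 0 ? missing / primary.length : 0,
    duplicateIds,
    regionMismatches,
  };
}
```

Checks like these only matter if they run continuously and gate what reaches training and retrieval, rather than being a one-off audit.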
3. Implement trust and data governance frameworks
The push for responsible AI has placed a spotlight on data governance. With 42% of data and analytics professionals saying their organization is unprepared to handle the governance of legal, privacy, and security policies for AI initiatives, it’s critical that there is a shift from traditional data governance frameworks to more dynamic frameworks.
In particular, with Agentic AI coming into significant prominence, it’s crucial to address why agents make specific decisions or take specific actions. Enterprises must have a sharp focus on Explainable AI techniques to build trust, assign accountability and ensure compliance. Trust in AI outputs begins with trust in the data behind them.
In summary
AI and ML projects will fail without good data because data is the foundation that enables these technologies to learn. Data strategies and AI and ML strategies are intertwined. Enterprises must make an operational shift that puts data at the core of everything they do – from technology infrastructure investment all the way to governance.
Those that take the time to put data first will see projects flourish. Those that don’t will be faced with ongoing struggles and competition biting at their heels.
OpenAI chief Sam Altman has painted a portrait of GPT‑5 that reads more like a thriller than a product launch. In a recent episode of the This Past Weekend with Theo Von podcast, he described the experience of testing the model in breathless tones that invite more skepticism than the alarm he seemed to want listeners to feel.
Altman said that GPT-5 “feels very fast,” while recounting moments when he felt very nervous. Despite being the driving force behind GPT-5's development, Altman claimed that during some sessions, he looked at GPT‑5 and compared it to the Manhattan Project.
Altman also issued a blistering indictment of current AI governance, suggesting “there are no adults in the room” and that oversight structures have lagged behind AI development. It's an odd way to sell a product promising serious leaps in artificial general intelligence. Raising the potential risks is one thing, but acting like he has no control over how GPT-5 performs feels somewhat disingenuous.
Analysis: Existential GPT-5 fears
What spooked Altman isn't entirely clear, and he didn't go into technical specifics. Invoking the Manhattan Project is another over-the-top analogy: signaling irreversible, potentially catastrophic change and global stakes seems an odd comparison for a sophisticated auto-complete. And saying they built something they don't fully understand makes OpenAI seem either reckless or incompetent.
GPT-5 is supposed to come out soon, and there are hints that it will expand far beyond GPT-4’s abilities. The "digital mind" described in Altman’s comments could indeed represent a shift in how the people building AI consider their work, but this kind of messianic or apocalyptic projection seems silly. Public discourse around AI has mostly toggled between breathless optimism and existential dread, but something in the middle seems more appropriate.
This isn't the first time Altman has publicly acknowledged his discomfort with the AI arms race. He’s been on record saying that AI could “go quite wrong,” and that OpenAI must act responsibly while still shipping useful products. But while GPT-5 will almost certainly arrive with better tools, friendlier interfaces, and a slightly snappier logo, the core question it raises is about power.
The next generation of AI, if it's faster, smarter, and more intuitive, will be handed even more responsibility. Based on Altman's comments, that sounds like a bad idea. And even if he's exaggerating, I don't know that this is the kind of company that should be deciding how that power is deployed.
The AMD Ryzen Threadripper PRO 9995WX has emerged as the fastest CPU in PassMark’s multithreaded performance charts, claiming a score of 174,825 points.
This new benchmark positions the 96-core processor ahead of AMD’s own EPYC 9755, which trails by about 5% in multithreaded workloads with 166,328 points.
This lead is noteworthy not only because of the tight margin but also because of the distinct market segments at which the two chips are aimed: Threadripper for high-end workstations and EPYC for data center servers.
Built for extreme performance in workstation-class systems
Launched in the second quarter of 2025, the Threadripper PRO 9995WX is built around the sTR5 socket and features a base clock speed of 2.5GHz with a boost speed reaching 5.4GHz.
It comes with 192 threads, and its typical TDP of 350W reflects the scale of its compute capabilities.
With a massive 384MB of L3 cache and substantial L1 and L2 cache arrangements, the CPU is engineered to handle highly parallelized tasks.
These features show AMD’s intent to offer extreme performance in high-end desktop and workstation markets where parallel compute power is critical.
In benchmark tests, it delivered 1,220,090 MOps/sec in integer math, 707,600 MOps/sec in floating point operations, and processed 3.6 million kilobytes per second in data compression.
Its single-thread performance reached 4,565 MOps/sec, placing it 45th among 5,287 CPUs in that metric.
The new Threadripper PRO 9995WX is 21% faster than the 7995WX, AMD’s own earlier flagship.
This gain marks a substantial generational leap, particularly for users whose applications benefit from the full core and thread count.
The Threadripper PRO 9995WX has just gone on sale and can be found at major retailers like Amazon and Newegg, with a starting price of $11,699.