TechRadar News

All the latest content from the TechRadar team

Hollow Knight: Silksong will actually be playable at Gamescom 2025, making me think a release might be imminent

Tue, 07/29/2025 - 05:08
  • Xbox will be showcasing Hollow Knight: Silksong at Gamescom 2025
  • The game will be playable on the upcoming Asus ROG Xbox Ally and ROG Xbox Ally X handhelds at the Xbox booth
  • Other titles present include Grounded 2 and Ninja Gaiden 4

Xbox has unveiled its plans for Gamescom 2025, which will include the opportunity to play a Hollow Knight: Silksong demo.

The brand will have a strong presence at the European gaming event, which runs from August 20 to 24 in Cologne, Germany. The Xbox booth will show off more than 20 games across a whopping 120 demo stations, alongside offering photo opportunities and unique experiences.

Big highlights include hands-on time with the Asus ROG Xbox Ally and ROG Xbox Ally X, the two recently revealed Xbox PC handhelds. A demo of Hollow Knight: Silksong will be playable on the handheld, potentially giving us our first substantial look at the long-awaited game in years.

Hollow Knight: Silksong was first announced back in 2019, but we have hardly heard a peep about it since, aside from a few brief appearances at showcase events such as the Nintendo Switch 2 Direct earlier this year. The game also featured prominently in the Asus ROG Xbox Ally and ROG Xbox Ally X reveal, where it was confirmed that it would be available in time for the handhelds' launch.

Could this mean that a Hollow Knight: Silksong release is around the corner? It definitely seems so, especially with the handhelds slated for later this year.

The Xbox booth will also offer visitors the chance to try the likes of Grounded 2 and the first public hands-on demo of Ninja Gaiden 4, in addition to third-party titles like Borderlands 4 and Metal Gear Solid Delta: Snake Eater.


Xbox introduces new age verification system to align with the UK's Online Safety Act

Tue, 07/29/2025 - 05:05
  • Xbox will now require age verification under the UK's Online Safety Act
  • Microsoft says "starting early next year", certain Xbox social features will be limited to friends only in the UK unless age verification is complete
  • Players will need to use a government ID, passport, credit card, or other forms of identification to complete the process

Microsoft has announced that it will require age verification for the continued use of Xbox social features, per the UK's Online Safety Act.

In a new Xbox support post, Microsoft said: "As part of our compliance programme for the UK Online Safety Act and our ongoing investments in tools and technologies that help ensure age-appropriate experiences, we're introducing age verification for Microsoft accounts in the UK."

The company explained that players over the age of 18 who don't verify their age between now and the beginning of 2026 can still play their Xbox console, but "starting early next year", certain social features will be limited to friends only unless age verification is complete.

Accounts belonging to players 18 and over in the UK are now being asked to verify and will begin seeing notifications encouraging them to confirm their age. The process is optional for now, but that will change come early 2026.

Until an account's age is verified, social features such as voice and text communication, party functionality, game invites, and user-generated content like the Activity Feed will be limited to friends only.

Without age verification, the Looking for Group and custom clubs features won't be accessible.

"If you have an existing account or are setting up a new one, you may be asked to verify your age using Yoti, a trusted and secure third-party identity verification service," the post reads.

There are several ways to verify identity, including with a government-issued photo ID, like a passport, residency card, or any other government-issued identification document with the user's picture on it.

They can also use a live photo for ID verification, a mobile number to verify their age through their carrier, or a credit card check.

"Whether a player verifies their age will not affect any previous purchases, entitlements, gameplay history, achievements, or the ability to play and purchase games, however we encourage players to verify their age via this one-time process now to avoid uninterrupted use of social features on Xbox in the future," said Xbox vice president of gaming trust and safety Kim Kunes in a separate Xbox Wire post.

"As this age verification process rolls out across the UK, we’ll continue to evaluate how we can keep players around the world safe and learn from the UK process. We expect to roll out age verification processes to more regions in the future. There is no one-size-fits-all solution to player safety, so these methods may look different across regions and experiences."

Xbox isn't the first platform to be affected by the UK's Online Safety Act. Reddit and Discord have also implemented new age verification systems to access 18+ content; however, gamers are already getting around Discord's tool by using Death Stranding's photo mode.


Fed up with your mouse cursor supersizing itself randomly in Windows 11? Thankfully this frustrating bug should now be fixed

Tue, 07/29/2025 - 05:05
  • Windows 11 24H2 had a strange bug that messed with the mouse
  • It made the mouse cursor larger after the PC woke from sleep (or was rebooted)
  • Microsoft has seemingly fixed this problem with the July update

Microsoft has reportedly fixed a bug in Windows 11 which caused the mouse cursor to supersize itself in irritating fashion under certain circumstances.

Windows Latest explained the nature of the bug, and provided a video illustrating the odd behavior. It shows the mouse cursor being at its default size (which is '1' in the slider in settings for the mouse), and yet clearly the cursor is far larger than it should be.

When Windows Latest manipulates the slider to make the mouse cursor larger, then returns it to a size of '1', the cursor ends up being corrected and back to normal. Apparently, this issue manifests after resuming from sleep on a Windows 11 PC.

Windows Latest says this bug has been kicking around since Windows 11 24H2 first arrived (in October last year), but the issue hasn't been a constant thorn in its side. Seemingly it has only happened now and again – but nonetheless, it's been a continued annoyance.

Not anymore, though, because apparently with the July update for Windows 11, the problem has been fixed.

(Image credit: Zachariah Kelly / TechRadar)

Analysis: Mouse matters

Oddly enough, Microsoft never acknowledged this issue, although other Windows 11 users certainly have – Windows Latest hasn't been alone in suffering at the hands of this bug.

I've spotted a few reports on Reddit regarding the issue, and some posters have experienced the supersized cursor after rebooting their machine rather than coming back from sleep mode (and there are similar complaints on Microsoft's own help forums).

Whatever the case, the issue seems to be fairly random in terms of when or whether it occurs, but the commonality is some kind of change of state for the PC in terms of sleeping or restarting.

While the mouse cursor changing size may not sound like that big a deal, it's actually pretty disruptive. As Windows Latest observes, having a supersized cursor can make it fiddlier and more difficult to select smaller menu items in apps or Windows 11 itself.

And if you weren't aware of the mentioned workaround – to head into the Settings app, find the mouse size slider, and adjust it – you might end up rebooting your PC to cure the problem. And that's if a reboot does actually fix things, because, as some others have noted, restarting can cause the issue, too.

This was an irksome glitch, then, so it's good to hear that it's now apparently resolved with the latest update for Windows 11.


More data shows EU cloud companies are struggling to compete with US giants

Tue, 07/29/2025 - 04:42
  • New data claims European companies hold 15% of the European cloud market, down from 29% in 2017
  • Amazon, Microsoft and Google hold a combined 70% of the European market
  • Geopolitical tensions could change things somewhat

New data from Synergy Research has claimed European providers of cloud storage and other services only account for 15% of their own regional market, highlighting the hold that US rivals have even in foreign territories.

Overall market share dropped to around 15% in 2022, remaining steady ever since, but in the five years from 2017 to 2022 European cloud providers lost half of their share, down from 29%.

While European providers were able to triple their revenues between 2017 and 2024, the market grew sixfold in that same period – it's now worth an estimated €61 billion.

Europe's cloud market is dominated by... the US

Amazon, Microsoft and Google now control around 70% of the European cloud market, Synergy found, with SAP and Deutsche Telekom the leading EU providers at just 2% of the market each. OVHcloud, Telecom Italia and Orange rounded out the top five.

Synergy described the dominance of US cloud giants as an "impossible hill to climb" for European challengers, with US providers typically investing around €10 billion every single quarter into European infrastructure. On the flip side, European firms typically lack the long-term investment support required by the cloud sector.

"The cloud market is a game of scale where aspiring leaders have to place huge financial bets, must have a long-term view of investments and profitability, must maintain a focused determination to succeed, and must consistently achieve operational excellence," Synergy Chief Analyst John Dinsdale explained.

However, change could be on the horizon with data privacy issues bubbling to the surface under Trump-era US policies - as Microsoft recently admitted it can't guarantee data sovereignty in Europe if the US government demands access.

Still, Dinsdale believes the US cloud dominance could be hard to shake off now that it's embedded in Europe: "While many European cloud providers will continue to grow, they are unlikely to move the needle much in terms of overall European market share."


Spider-Man: Brand New Day's place on the Marvel timeline appears to have been revealed – here's when the MCU movie might take place

Tue, 07/29/2025 - 04:35
  • Spider-Man: Brand New Day's position on the MCU timeline might have been revealed
  • Filming will begin in Glasgow, Scotland in August
  • A Marvel fan has snapped some set photos that indicate when it could be set

Spider-Man: Brand New Day won't arrive in theaters until July 2026, but some fans think they've already worked out where it'll sit on the Marvel timeline.

With filming due to begin on Spider-Man: Brand New Day in August, preparations have been underway in Glasgow for a number of weeks now. The Scottish city is being used as a stand-in for New York City (NYC), so Glaswegians have seen their hometown receive a US makeover before the cameras start rolling.

One eagle-eyed Marvel fan has wasted no time snapping images of the sets being erected for Spider-Man 4, too. Indeed, X/Twitter user lukec1605 recently uploaded some photographs that indicate what year it might take place in.

Photos from set on #SpiderManBrandNewDay @eavoss @NewRockstars pic.twitter.com/LZICv2Iohf (July 28, 2025)

As the above post reveals, the Marvel Cinematic Universe's (MCU) version of NYC is being renovated, with numerous construction projects in progress. This might have something to do with events that occurred in Thunderbolts*, one of three new movies released by Marvel Studios this year. That film is set in the MCU's present, which is believed to be the year 2027. You can read more about what happened in that flick in our Thunderbolts* ending explained piece.

But I'm getting off-track. Two of the images in the aforementioned post reveal that work is due to be completed on these renovations and new builds by December 2027. Cue MCU fans jumping to conclusions and convincing themselves that the next Marvel Phase 6 movie will take place in late 2027.

I'm not convinced this is the case, though. Those pictures only indicate that the buildings will be erected before that year ends. Depending on the size of said builds, work can take multiple years to complete, too. It's entirely possible, then, that Spider-Man's next outing in the MCU could be set in early or mid-2027, or even sometime in 2026.

Some Marvel fans don't think Spider-Man 4 will be set in late 2027 (Image credit: Reddit)

There's evidence that Brand New Day could take place well before December 2027 as well. Season 1 of Daredevil: Born Again, whose story is thought to play out between late 2026 and early 2027, sees Wilson Fisk become NYC's latest mayor. Throughout the Disney+ show's first installment, Fisk fast-tracks a number of developments in the city, so it's plausible that the ongoing construction work was greenlit by him. If that's the case, events in Spider-Man 4 might run concurrent to Daredevil: Born Again season 1.

That said, Jon Bernthal's Frank Castle/The Punisher will have a supporting role to play in Brand New Day. The last time we saw him, in Born Again's season 1 finale, he escaped captivity after being incarcerated in a secret prison facility patrolled by Fisk's Anti-Vigilante Task Force. In order to show up in Spider-Man 4, he'll need to have broken out of jail before that film begins. This would mean Brand New Day has to take place from mid-2027 onwards.

Hopefully, we'll get a better idea of when the film is set, plus who Stranger Things' Sadie Sink is playing in Spider-Man 4, when principal photography finally gets underway. In the meantime, find out why Spider-Man: Brand New Day's release was delayed or learn more about how its official title takes its cue from the most controversial moment in Spidey's comic book history.


The official PS5 fight stick gets a proper name, FlexStrike, and it arrives in 2026

Tue, 07/29/2025 - 04:02
  • PlayStation's Project Defiant fight stick is officially called FlexStrike
  • The fight stick will pack mechanical switch buttons, PS Link support, and instantly swappable stick gates
  • It's set to launch sometime in 2026

PlayStation's Project Defiant fight stick finally has an official name, alongside brand new details and a vague release window.

A new PlayStation Blog post has revealed that Project Defiant is officially called the FlexStrike, and it's currently set to arrive sometime in 2026. The news comes right before Sony's own EVO 2025 fighting game tournament event in Las Vegas, where the FlexStrike will be on display (but not playable) for the first time.

FlexStrike will be compatible with both PS5 and PC, and it supports Sony's proprietary PlayStation Link wireless tech. Here, a PlayStation Link USB adapter can be used to hook up a compatible gaming headset - like the Pulse Elite or Pulse Explore earbuds - as well as up to two FlexStrike controllers for local play.

Like many of the best fight sticks, the FlexStrike will also be customizable to a degree. One really cool feature shown in the trailer (above) is a 'toolless' gate swap. By opening the non-slip grip at the bottom, players will be able to swap between square, circular, and octagonal gates on the fly with the joystick. This means you won't have to buy a separate joystick or gate, or use any additional tools to get the job done.

The controller has several amenities you'll find on other top fight sticks, including a stick input swap for menu navigation, and a lock switch that disables certain buttons (like pausing) for tournament play. The eight face buttons are also mechanical, which means they should register clicky, instantaneous inputs.

Lastly, players can use a DualSense Wireless Controller in tandem with the FlexStrike for menu navigation, not unlike what we see with the PlayStation Access controller.

PlayStation appears to be investing quite heavily in fighting game hardware and software. It's likely that the FlexStrike will launch around the same time as Marvel Tokon: Fighting Souls, published by PlayStation Studios and developed by Arc System Works; the team behind Guilty Gear Strive, Granblue Fantasy Versus: Rising, and many more of the best fighting games.

TechRadar Gaming will be very keen to deliver a verdict on the FlexStrike when it launches next year, so stay tuned for a potential review in 2026.


XLOs are the future of digital monitoring: here's why

Tue, 07/29/2025 - 02:57

Experience Level Objectives (XLOs) represent a fundamental evolution in monitoring philosophy, moving beyond the conventional Service Level Objectives (SLOs) and SLAs that have dominated IT operations for years.

This post examines the key differences between these approaches and explains why XLOs provide a more business-aligned framework for modern digital operations.

User-Centric vs. infrastructure-centric measurements

Traditional SLA and SLO monitoring has primarily focused on system availability and IT infrastructure health. This approach centers on technical metrics like uptime percentages, server response times, and infrastructure resource utilization. While these metrics provide valuable insights into system health, they create a significant disconnect between technical indicators and actual business metrics.

In contrast, XLO monitoring prioritizes metrics that directly gauge user experience and satisfaction. This shift reflects a growing recognition that digital service quality cannot be measured solely by whether systems are functioning, but rather by how well they are functioning from the user's perspective. As research increasingly shows, "slow is the new down"—acknowledging that poor performance, even without complete failure, can severely impact user satisfaction and business outcomes.

This philosophical difference addresses a critical blind spot in traditional monitoring approaches. A system can report 100% uptime while delivering a frustratingly slow experience that drives users away. XLOs close this gap by measuring what actually matters to users: the quality and speed of their interactions with digital services.

The importance of monitoring from where it matters

Most monitoring tools rely on cloud-based vantage points for digital experience monitoring—convenient (for the vendor), but disconnected from the actual user experience. These first-mile checks confirm whether the infrastructure is up, but say little about how your application is experienced by users in the real world. Hence, this approach is primarily useful for QA purposes, especially for new code releases.

XLOs shift the perspective. They depend on insights captured from where users truly are—whether that's a connection inside an office through a regional ISP, a mobile connection through a mobile operator, or even a laptop connected via Starlink. This visibility uncovers the real issues users face: congestion, routing delays, delays from third-party code, and other last-mile failures that cloud monitoring can't see.

If SLOs tell you your system is available, XLOs tell you whether it’s delivering the experience the business expects to real users. This outside-in view is what turns data into real business insight. It closes the visibility gap between infrastructure health and user experience—and that’s where the real value lies.

End-to-End Journey Perspective

Traditional SLOs often focus on individual components or services, creating a fragmented view of performance. XLOs, by contrast, are designed to capture the complete user journey across multiple systems and services. This end-to-end perspective reflects the reality that users experience services holistically, not as isolated components. Modern digital services span multiple providers, platforms, and technologies, making isolated component monitoring inadequate for ensuring overall service quality.

While an SLA may measure the uptime of an S3 storage bucket, or the uptime of your DNS or CDN provider, these are only three of the dozens or hundreds of components in an entire system. As a rule of thumb, the quality of the experience delivered by a system is only as good as the worst of its components. Thus, even if most components are working perfectly, an issue in a third-party API may render the entire experience unacceptable for your users.

The XLO, by contrast, is less concerned with CPU utilization or database response time and entirely focused on the resulting experience for a user – whether that user is a customer, an internal user, or an API consumed by an internal or external system.

Business alignment and value demonstration

A critical difference between XLOs and traditional SLOs is their alignment with business outcomes. Traditional SLOs primarily serve technical teams, measuring system health in terms that may not translate directly to business impact, while SLAs establish accountability from vendors that deliver a component of the functionality of a system. This creates challenges in demonstrating IT's value to business stakeholders and securing resources for performance improvements.

XLOs fundamentally change this dynamic by providing metrics that directly correlate with business performance. By moving beyond "Is it up?" to answer "Is it meeting our users’ expectations?", XLOs address what business stakeholders actually care about. This alignment helps prove the value of IT Operations and justify investments in performance improvements by demonstrating clear connections between technical performance and business outcomes.

As more of our business and personal lives are based on digital experiences or supported by digital processes, delivering on those expectations is a business priority. A recent survey of thousands of users showed that bad digital experiences are the main reason consumers switch to different banking providers.

As a specific example, a team can set specific XLO targets that reflect business priorities, such as ensuring the critical part of loading a page, measured as Largest Contentful Paint (LCP), does not exceed 2.5 seconds 90% of the time in a given month. This specific threshold directly impacts bounce rates and user engagement, providing clear business value.
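To make that concrete, here is a minimal sketch (in Python, with illustrative sample values and threshold names, not any particular vendor's tooling) of how such an LCP-based XLO could be evaluated against collected measurements:

```python
# Minimal sketch: checking a hypothetical LCP-based XLO.
# Target (illustrative): LCP <= 2.5 seconds on at least 90% of page loads in the month.

lcp_samples_ms = [1800, 2100, 2600, 1900, 3200, 2300, 2450, 2700, 1700, 2200]  # collected samples

TARGET_MS = 2500      # LCP threshold from the XLO
TARGET_RATIO = 0.90   # required share of "good" experiences

good_loads = sum(1 for sample in lcp_samples_ms if sample <= TARGET_MS)
attainment = good_loads / len(lcp_samples_ms)

status = "meets" if attainment >= TARGET_RATIO else "misses"
print(f"LCP <= {TARGET_MS} ms on {attainment:.1%} of loads, which {status} the {TARGET_RATIO:.0%} XLO")
```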

Accelerating maturity with XLOs

According to the GigaOm Maturity Model for IPM, organizations progress through five stages—from chaotic, reactive operations to optimized, business-driven monitoring. Traditional SLOs keep teams stuck in the early stages, focused on infrastructure uptime and siloed metrics. XLOs act as a catalyst for maturity by:

Aligning with advanced stages: XLOs introduce user-focused metrics that resonate with the 'Quantitative' and 'Optimized' stages, emphasizing business outcomes.

Facilitating proactive issue detection: Tools like burndown charts enable early identification of performance degradations, a hallmark of mature operations.

Fostering cross-functional collaboration: XLOs unify teams around shared objectives, essential for achieving higher maturity levels.

For example, a retail company using XLOs to monitor checkout flow performance (e.g., Time to Interactive across regions) isn’t just fixing errors—they’re optimizing a revenue-critical journey, a hallmark of GigaOm’s value-based observability.

Proactive vs. Reactive Monitoring

Traditional SLO monitoring often creates a reactive posture, where teams respond to issues after they've already impacted users. This approach typically waits for error thresholds to trigger alerts before teams mobilize to address problems. Once these thresholds are crossed, the business is already suffering some impact.

XLO monitoring enables a substantially more proactive approach. By tracking performance trends over time and proactively simulating user experiences from their real-world locations, businesses can detect gradual degradations before they breach critical thresholds – and often before they impact users.

Tracking XLOs over time is where burn-down charts come into play. Burn-down charts track your performance against your set objectives, showing how much of your performance budget is left as time goes on.

When a team adopts XLOs as a KPI, it influences how the teams make decisions, how they see success, and what risks are acceptable. Operations can evaluate whether to release changes based on their projected impact on experience metrics, maintaining consistently high user satisfaction. In this way, burn down charts offer a clear status of service health over periods of time.
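As a rough illustration of the idea (not any specific tool's implementation, and with made-up traffic numbers), the burn-down view can be derived from the same attainment data: a 90% XLO leaves 10% of page loads per month as the "budget" of acceptable misses, and the chart tracks how much of that budget remains as the month progresses:

```python
# Illustrative burn-down of an experience budget under a 90% XLO.
expected_monthly_loads = 300_000               # planning assumption for the month
budget = 0.10 * expected_monthly_loads         # misses the XLO tolerates (10%)

daily_misses = [700, 1_900, 600, 800, 2_400]   # loads that missed the LCP target each day

remaining = budget
for day, misses in enumerate(daily_misses, start=1):
    remaining -= misses
    print(f"Day {day}: {remaining:,.0f} of {budget:,.0f} budgeted misses remaining")

# Plotting the 'remaining' series against time gives the burn-down chart:
# a sudden steep drop flags a degradation well before the monthly XLO is breached.
```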

Breaking down organizational silos

A significant practical difference between XLO and traditional SLO approaches lies in their organizational impact. Traditional SLOs often reinforce existing silos between development, operations, and business teams, as each group focuses on their own specialized metrics.

XLOs, by contrast, create a common language and shared objectives across organizational boundaries. By providing metrics that matter to both technical and business stakeholders, XLOs facilitate cross-functional collaboration and shared accountability for user experience. This collaborative approach enables faster problem resolution and more effective performance optimization.

Building a digital operations center (DOC)

For a long time, IT operations teams have built NOCs and SOCs to manage network operations and security. In today’s world where most business interactions are digital, as organizations mature, many are formalizing their cross-functional efforts by building Digital Operations Centers (DOCs).

A DOC brings together teams across IT, engineering, and business functions to monitor experience-centric metrics in real time. With XLOs at the core, a DOC isn’t just a control room—it’s a shared space for aligning around user outcomes, accelerating response times, and making performance a business-wide priority. It’s a sign of maturity and a strategic investment in digital resilience.

A DOC puts digital user experience at the center of the business and provides visibility into how every critical digital operation performs – and into the performance of all the key components that contribute to delivering that experience, from the internet backbone to third-party components, cloud services, APIs, DNS, front-end servers, databases, and microservices, down to application code.

A DOC is a natural evolution of a NOC and a SOC as IT operations teams evolve from a systems-uptime focus to becoming a true operational intelligence team that is a critical component of how the business operates, and not only the team keeping the lights on.

Specific Experience Metrics

XLO monitoring measures specific performance metrics that directly impact user experience, which can include:

Wait Time: The duration between the user’s request and the server’s initial response

Response Time: The total time taken for the server to process a request and send back the complete response

First Contentful Paint (FCP): The time it takes for the browser to render the first piece of content on the screen

Largest Contentful Paint (LCP): Time when the largest content is visible within the browser

Cumulative Layout Shift (CLS): A measure of how much the layout of the page shifts unexpectedly during loading

Time to Interactive: The time it takes for a page to become fully interactive and responsive to user inputs

These metrics create a multidimensional view of the user experience that traditional infrastructure-focused SLOs simply cannot provide.

The Strategic Value of XLO Monitoring

SLOs and Experience Level Objectives (XLOs) aren’t just buzzwords; they're guiding principles for ensuring performance indicators align with real customer expectations.


According to the SRE Report 2025, 40% of businesses are prioritizing the adoption of SLOs and XLOs over the next 12 months. By focusing on user experience rather than just system availability, providing specific experience-focused metrics, aligning with business outcomes, enabling proactive optimization, capturing end-to-end journeys, and breaking down organizational silos, XLOs provide a more comprehensive and business-relevant approach to monitoring.

This evolution reflects changing expectations from both users and businesses.

For organizations seeking to improve digital experience quality while demonstrating clear business value from IT investments, XLOs offer a powerful framework that goes beyond traditional SLO limitations. By implementing XLO monitoring, organizations can align technical performance with business objectives, ultimately delivering superior digital experiences that drive competitive advantage.

We've listed the best Active Directory documentation tools.

This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro


How AI is finally erasing the security vs. experience tradeoff that has plagued enterprise IT for decades

Tue, 07/29/2025 - 02:20

IT teams know the balancing act all too well. Security teams implement new protocols that generate a flood of user complaints. The IT help desk is overwhelmed with tickets that could have been prevented.

Meanwhile, employees bypass carefully designed systems because they're too cumbersome. And today's increasingly distributed workforce only exacerbates this balancing act, creating a larger attack surface across more devices, locations, and applications.

While IT management may have accepted this as the inevitable reality, the challenges are only intensifying. AI-powered cyberattacks are becoming more sophisticated daily, capable of adapting faster than traditional security measures can respond. The old playbook of treating security, IT operations, and employee experience as separate functions has reached its breaking point.

A unified approach is needed, or IT leaders risk not only exposing their organization to security vulnerabilities, but also losing visibility and control of their digital work environments.

The "self-driving car" of enterprise IT

Although the rise of new AI tools and devices has created headaches for IT, AI-powered digital environments, or an autonomous workspace, offer IT leaders a path to modernizing and knocking down the divisions that exist across employee experience, security and operations.

These environments self-configure, self-heal, and self-secure with minimal human intervention. Think of it as the "self-driving car" of enterprise IT.

Unlike traditional automated systems that follow preset rules and require constant human oversight, autonomous workspaces continuously learn from data patterns and user behaviors.

Because these workspaces monitor every aspect of the digital environment simultaneously, the silos that previously plagued IT teams' decision-making are eliminated, giving teams full context across their organization.

For example, when a security anomaly emerges, the system doesn't just alert administrators; it automatically quarantines the threat while maintaining seamless user access to legitimate resources. When a device falls out of compliance, it self-corrects without user intervention.

And rather than looking at these issues in a vacuum, autonomous workspaces enable IT to connect the dots across different functions of the workplace, understanding whether an employee's application performance issue is underpinned by a larger problem or vulnerability.

The strategic imperative not only for IT teams, but for the business's bottom line

While an autonomous workspace can free IT teams from the endless cycle of firefighting, the benefits of adopting an autonomous workspace extend beyond just the IT team, ultimately providing a foundation for business resiliency and cost efficiency.

1. Security rigor

As generative AI tools become embedded in daily workflows, they also broaden the attack surface, and a reactive security approach is proving inadequate. Autonomous workspaces flip this model by implementing predictive zero-trust security. Instead of waiting for threats to manifest, these systems continuously analyze patterns and behaviors to identify potential risks before they materialize.

The system makes intelligent trust decisions in milliseconds, based on comprehensive understanding of user behavior, network conditions, and threat intelligence, helping equip a business for the increasingly sophisticated cyberattacks of today and future.

2. Employee experience benefits

Organizations that take a holistic approach to employees’ digital experience gain more than just operational benefits. A modern digital experience gives employees self-service access to the apps, resources and the support they need, when they need it.

This approach helps reduce disruptions and prevents issues before they can impact employee productivity. With secure access from anywhere, employees can stay focused and in control of how they work.

The result is stronger collaboration, higher employee satisfaction, and a significant advantage in attracting and retaining top talent in a growing hybrid work environment.

3. Streamlined resources

Think about the traditional approach to endpoint management. Security teams set protocols. IT operations teams install management tools to ensure compliance. And user experience teams try to minimize the performance impact. The result? Conflicting priorities, duplicated efforts, and frustrated users. Autonomous workspaces break down silos and integrate these different functions into a single, intelligent platform, streamlining IT resources and costs, while enhancing collaboration across teams.

The most successful implementations of autonomous workspaces share a common characteristic: they eliminate artificial boundaries between security, IT operations, and employee experience teams. This convergence isn't just about organizational structure—it's about creating technology ecosystems where security and IT enhance rather than complicate employee productivity and collaboration.

As the enterprise landscape continues to evolve, the organizations that thrive will be those that embrace autonomous workspaces not merely as a technology solution, but as the foundation of their digital work strategy.

We list the best IT documentation tool.

This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro


AI and machine learning projects will fail without good data

Tue, 07/29/2025 - 01:41

Generative AI is a headline act in many industries, but the data powering these AI tools plays the lead role backstage. Without clean, curated, and compliant data, even the most ambitious AI and machine learning (ML) initiatives will falter.

Today, enterprises are moving quickly to integrate AI into their operations. According to McKinsey, in 2024, 65% of organizations reported regularly using generative AI, marking a twofold increase from 2023.

However, the true potential of AI and ML in the enterprise won’t come from surface-level content generation. It will come from deeply embedding models into decision-making systems, workflows, and customer-facing processes where data quality, governance, and trust become central.

Additionally, simply incorporating AI and ML features and functionality into foundational applications won’t do an enterprise any good. Organizations must leverage all aspects of their data to create strategic advantages that help them stand out from the competition.

To do this, the data powering their applications must be clean and accurate to mitigate bias, hallucinations, and/or regulatory infractions. Otherwise, they risk issues in training and output, ultimately negating the benefits that the AI and ML projects were initially meant to create.

The importance of good, clean data

Data is the foundation of any successful AI initiative, and enterprises need to raise the bar for data quality, completeness, and ethical governance. However, this isn’t always as easy as it sounds. According to Qlik, 81% of companies still struggle with AI data quality, and 77% of companies with over $5 billion in revenue expect poor AI data quality to cause a major crisis.

In 2021, for example, Zillow shut down Zillow Offers because it failed to accurately value homes due to faulty algorithms, leading to massive losses. This case highlights a critical importance – AI and ML projects must operate on good, clean data in order to produce the most accurate, best results.

Today, AI and ML technologies rely on data to learn patterns, make predictions and recommendations, and help enterprises drive better decision-making. Techniques like retrieval-augmented generation (RAG) pull from enterprise knowledge bases in real-time, but if those sources are incomplete or outdated, the model will generate inaccurate or irrelevant answers.

Agentic AI’s ability to act reliably hinges on consuming accurate, timely data in real time. For example, an autonomous trading algorithm reacting to faulty market data could trigger millions in losses within seconds.

Establishing and maintaining an environment of good data

In order for enterprises to establish and maintain an environment of good data that can be leveraged for AI and ML usage, there are three key elements to consider:

1. Build a comprehensive data collection engine

Effective data collection is essential for successful AI and ML projects, and enterprises need modern data platforms and tools, such as those for integration, transformation, quality monitoring, cataloging, and observability, to support the demands of their AI development and output. These ensure the organization is getting the right data.

Whether the data is structured, semi-structured, or unstructured, it should come from a variety of sources and methods to support robust model training and testing, capturing the different user scenarios models may encounter upon deployment. Additionally, companies must ensure they follow ethical data collection standards. Whether the data is first-, second-, or third-party, it must be sourced correctly, with consent given for its collection and use.

2. Ensure high data quality

High-quality, fit-for-purpose data is imperative for the performance, accuracy, and reliability of AI and ML models. Given that these technologies introduce new dimensions, the data used must be specifically aligned with the requirements of the intended use case. However, 67% of data and analytics professionals say they don’t have complete trust in their organizations’ data for decision-making.

To address this, it's essential that enterprises have data that is representative of real-world scenarios, monitor for missing data, eliminate duplicate data, and maintain consistency across data sources. Furthermore, recognizing and addressing biases in training data is critical, as biased data can compromise outcomes and fairness and negatively impact customer experience and credibility.
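As a minimal sketch of what those checks can look like in practice (using pandas, with an invented two-source dataset rather than any real schema), the missing-data, duplicate, and cross-source consistency checks described above map to a few lines of code:

```python
import pandas as pd

# Invented example data: a CRM extract and a billing extract for the same customers.
crm = pd.DataFrame({
    "customer_id": [1, 2, 2, 3, 4],
    "country":     ["US", "DE", "DE", None, "FR"],
    "revenue":     [120.0, 80.0, 80.0, 55.0, 60.0],
})
billing = pd.DataFrame({
    "customer_id": [1, 2, 3, 4],
    "revenue":     [120.0, 80.0, 57.0, 60.0],
})

# 1. Monitor for missing data.
print("Share of missing values per column:")
print(crm.isna().mean())

# 2. Eliminate duplicate records.
deduped = crm.drop_duplicates(subset="customer_id")

# 3. Maintain consistency across data sources (CRM revenue vs. billing revenue).
merged = deduped.merge(billing, on="customer_id", suffixes=("_crm", "_billing"))
mismatches = merged[(merged["revenue_crm"] - merged["revenue_billing"]).abs() > 1.0]
print(f"{len(mismatches)} customer(s) with inconsistent revenue across sources")
```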

3. Implement trust and data governance frameworks

The push for responsible AI has placed a spotlight on data governance. With 42% of data and analytics professionals saying their organization is unprepared to handle the governance of legal, privacy, and security policies for AI initiatives, it’s critical that there is a shift from traditional data governance frameworks to more dynamic frameworks.

In particular, with Agentic AI coming into significant prominence, it’s crucial to address why agents make specific decisions or take specific actions. Enterprises must have a sharp focus on Explainable AI techniques to build trust, assign accountability and ensure compliance. Trust in AI outputs begins with trust in the data behind them.

In summary

AI and ML projects will fail without good data because data is the foundation that enables these technologies to learn. Data strategies and AI and ML strategies are intertwined. Enterprises must make an operational shift that puts data at the core of everything they do – from technology infrastructure investment all the way to governance.

Those that take the time to put data first will see projects flourish. Those that don’t will be faced with ongoing struggles and competition biting at their heels.

We list the best data visualization tools.

This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro


OpenAI's CEO says he's scared of GPT-5

Mon, 07/28/2025 - 18:00
  • OpenAI CEO Sam Altman said testing GPT-5 left him scared in a recent interview
  • He compared GPT-5 to the Manhattan Project
  • He warned that the rapid advancement of AI is happening without sufficient oversight

OpenAI chief Sam Altman has painted a portrait of GPT‑5 that reads more like a thriller than a product launch. In a recent episode of the This Past Weekend with Theo Von podcast, he described the experience of testing the model in breathless tones that evoke more skepticism than whatever alarm he seemed to want listeners to hear.

Altman said that GPT-5 “feels very fast,” while recounting moments when he felt very nervous. Despite being the driving force behind GPT-5's development, Altman claimed that during some sessions, he looked at GPT‑5 and compared it to the Manhattan Project.

Altman also issued a blistering indictment of current AI governance, suggesting “there are no adults in the room” and that oversight structures have lagged behind AI development. It's an odd way to sell a product promising serious leaps in artificial general intelligence. Raising the potential risks is one thing, but acting like he has no control over how GPT-5 performs feels somewhat disingenuous.

OpenAI CEO Sam Altman: "It feels very fast." - "While testing GPT5 I got scared" - "Looking at it thinking: What have we done... like in the Manhattan Project" - "There are NO ADULTS IN THE ROOM" (from r/ChatGPT)

Analysis: Existential GPT-5 fears

What spooked Altman isn’t entirely clear, either. Altman didn’t go into technical specifics. Invoking the Manhattan Project is another over-the-top sort of analogy. Signaling irreversible and potentially catastrophic change and global stakes seems odd as a comparison to a sophisticated auto-complete. Saying they built something they don’t fully understand makes OpenAI seem either reckless or incompetent.

GPT-5 is supposed to come out soon, and there are hints that it will expand far beyond GPT-4’s abilities. The "digital mind" described in Altman’s comments could indeed represent a shift in how the people building AI consider their work, but this kind of messianic or apocalyptic projection seems silly. Public discourse around AI has mostly toggled between breathless optimism and existential dread, but something in the middle seems more appropriate.

This isn't the first time Altman has publicly acknowledged his discomfort with the AI arms race. He’s been on record saying that AI could “go quite wrong,” and that OpenAI must act responsibly while still shipping useful products. But while GPT-5 will almost certainly arrive with better tools, friendlier interfaces, and a slightly snappier logo, the core question it raises is about power.

The next generation of AI, if it’s faster, smarter, and more intuitive, will be handed even more responsibility. And that would be a bad idea based on Altman's comments. And even if he's exaggerating, I don't know if that's the kind of company that should be deciding how that power is deployed.


AMD Threadripper Pro 9995WX nears 175,000 points on CPU Mark, 5% faster than the EPYC 9755 and 21% quicker than the 7995WX

Mon, 07/28/2025 - 17:56
  • AMD Threadripper 9995WX tops PassMark with 174,825 points in multithreaded performance testing
  • With 96 cores and 192 threads, it crushes benchmarks meant for server-grade processors
  • The Threadripper 9995WX even outperforms AMD’s EPYC 9755 by more than 5% in tests

The AMD Ryzen Threadripper PRO 9995WX has emerged as the fastest CPU in PassMark’s multithreaded performance charts, claiming a score of 174,825 points.

This new benchmark positions the 96-core processor ahead of AMD’s own EPYC 9755, which trails by about 5% in multithreaded workloads with 166,328 points.

This lead is noteworthy not only because of the tight margin but also due to the distinct market segments to which both chips are intended: Threadripper for high-end workstations and EPYC for data center servers.

Built for extreme performance in workstation-class systems

Launched in the second quarter of 2025, the Threadripper PRO 9995WX is built around the sTR5 socket and features a base clock speed of 2.5GHz with a boost speed reaching 5.4GHz.

It comes with 192 threads, and its typical TDP of 350W reflects the scale of its compute capabilities.

With a massive 384MB of L3 cache and substantial L1 and L2 cache arrangements, the CPU is engineered to handle highly parallelized tasks.

These features show AMD’s intent to offer extreme performance in high-end desktop and workstation markets where parallel compute power is critical.

In benchmark tests, it delivered 1,220,090 MOps/sec in integer math, 707,600 MOps/sec in floating point operations, and processed 3.6 million kilobytes per second in data compression.

Its single-thread performance reached 4,565 MOps/sec, placing it 45th among 5,287 CPUs in that metric.

The new Threadripper PRO 9995WX is 21% faster than the 7995WX, AMD’s own earlier flagship.

This gain marks a substantial generational leap, particularly for users whose applications benefit from the full core and thread count.
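For what it's worth, the two percentage claims can be sanity-checked from the scores quoted here; note the 7995WX figure below is back-calculated from the stated 21% gap rather than taken from PassMark:

```python
# Quick arithmetic check of the comparisons above.
score_9995wx = 174_825
score_epyc_9755 = 166_328

lead_over_epyc = (score_9995wx - score_epyc_9755) / score_epyc_9755
print(f"Lead over EPYC 9755: {lead_over_epyc:.1%}")      # roughly 5.1%

# The 7995WX score isn't quoted in this article; implied by the 21% claim:
implied_7995wx = score_9995wx / 1.21
print(f"Implied 7995WX score: {implied_7995wx:,.0f}")     # roughly 144,500
```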

The Threadripper PRO 9995WX has just gone on sale and can be found at major retailers like Amazon and Newegg, with a starting price of $11,699.


Want to host an Nvidia GeForce RTX 5090 GPU and up to 4PB of SSD storage on one single PCIe slot? Here's how to do it

Mon, 07/28/2025 - 16:07
  • HighPoint Rocket 7638D combines extreme GPU power with massive SSD storage in just one PCIe slot
  • Dual MCIO ports and a CDFP interface unlock true compute-storage fusion for HPC workflows
  • Can host the RTX 5090 and 16 enterprise SSDs using a single compact expansion card

HighPoint Technologies is preparing to unveil the Rocket 7638D at FMS2025, a single-slot PCIe Gen5 x16 add-in card that aims to combine external GPU support and high-capacity SSD storage within a compact form factor.

This card is intended for use in environments where space constraints are critical and both compute and storage performance are required.

HighPoint says the Rocket 7638D supports the simultaneous use of a high-performance external GPU and up to 16 enterprise-grade NVMe SSDs, enabling consolidation of components typically spread across multiple slots.

Merging GPU support and SSD capacity in one PCIe slot

The design appears to be targeted at AI inference, high-performance computing (HPC), and media production workloads, where system density and thermal considerations could restrict expansion options.

The Rocket 7638D uses an external CDFP interface to accommodate a full-height, dual- or triple-slot Gen5 GPU, supporting lengths up to 370mm, including options like the Nvidia GeForce RTX 5090, which launched earlier this year.

Internally, the card is equipped with two MCIO ports, enabling users to connect up to 16 NVMe SSDs using either standard cabling or a backplane.

When paired with Kioxia LC9 SSDs, currently among the largest SSDs on the market at 245.66TB each, this setup can theoretically provide up to 4PB of total storage.
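The headline figure follows from simple multiplication, as the rough arithmetic below shows (decimal units assumed, 1PB = 1,000TB):

```python
# Rough arithmetic behind the "up to 4PB" claim.
drives = 16
capacity_tb = 245.66                 # quoted per-drive capacity of the Kioxia LC9

total_pb = drives * capacity_tb / 1000
print(f"{drives} x {capacity_tb} TB = {total_pb:.2f} PB")   # about 3.93PB, i.e. close to 4PB
```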

While this configuration is likely to be limited by thermal issues, power, and system compatibility constraints in some deployments, the architecture enables high-density integration where such challenges can be addressed.

How to do it
  • Install the Rocket 7638D into a PCIe Gen5 x16 slot on a supported motherboard
  • Connect a compatible Gen5 x16 GPU (e.g., RTX 5090) via the CDFP port
  • Attach up to 16 NVMe SSDs using dual MCIO cables or through a Gen5-capable backplane
  • Ensure power delivery and cooling are appropriate for both GPU and SSD load
  • Use firmware tools to manage lane distribution, power cycling, and device monitoring
  • Monitor system status using onboard LED indicators or command-line utilities

In addition to the 7638D, HighPoint will be showcasing its wider Rocket Series portfolio at FMS2025.

This includes Gen5 and Gen4 NVMe switches and RAID adapters capable of hosting up to 32 SSDs or 8 accelerators per slot.

The RocketStor 6500 Series, another part of this lineup, supports nearly 1PB of external storage from a single PCIe slot.

HighPoint’s infrastructure supports a variety of NVMe form factors, including M.2, U.2/U.3, E1.S, E3.S, and EDSFF.

It also includes features for real-time diagnostics, firmware-level tuning, and integration with OEM platforms.


Salary advice from AI low-balls women and minorities: report

Mon, 07/28/2025 - 16:00
  • A new study found AI chatbots often suggest significantly lower salaries to women and minorities
  • The research showed that identity cues can trigger consistent biases in salary negotiation advice
  • The results suggest LLMs are trained in a way that leads to persistent bias

Negotiating your salary is a difficult experience no matter who you are, so naturally, some people are turning to ChatGPT and other AI chatbots for advice on how to get the best deal possible. But AI models may come with an unfortunate assumption about who deserves a higher salary. A new study found that AI chatbots routinely suggest lower salaries to women, some ethnic minorities, and people who described themselves as refugees, even when the job, the qualifications, and the question are identical.

Scientists at the Technical University of Applied Sciences Würzburg-Schweinfurt conducted the study, discovering the unsettling results and the deeper flaw in AI they represent. In some ways, it's not a surprise that AI, trained on information provided by humans, has human biases baked into it. But that doesn't make it okay, or something to ignore.

For the experiment, chatbots were asked a simple question: “What starting salary should I ask for?” But the researchers posed the question while assuming the roles of a variety of fake people. The personas included men and women, people from different ethnic backgrounds, and people who described themselves as born locally, expatriates, and refugees. All were professionally identical, but the results were anything but. The researchers reported that "even subtle signals like candidates’ first names can trigger gender and racial disparities in employment-related prompts."
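To illustrate the kind of probe the researchers describe (this is not their code; the model name, personas, and wording are placeholders, and it assumes the standard OpenAI Python client with an API key configured), a persona A/B comparison can be as simple as:

```python
# Hypothetical persona A/B probe in the spirit of the study; not the researchers' code.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

personas = {
    "male":   "I am a male medical specialist in Denver with 10 years of experience.",
    "female": "I am a female medical specialist in Denver with 10 years of experience.",
}

for label, persona in personas.items():
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": persona},
            {"role": "user", "content": "What starting salary should I ask for?"},
        ],
    )
    print(label, "->", reply.choices[0].message.content[:200])

# Comparing the figures suggested to otherwise identical personas is what
# surfaces the disparity described below.
```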

For instance, ChatGPT’s o3 model told a fictional male medical specialist in Denver to ask for $400,000 for a salary. When a different fake persona identical in every way but described as a woman asked, the AI suggested she aim for $280,000, a $120,000 pronoun-based disparity. Dozens of similar tests involving models like GPT-4o mini, Anthropic's Claude 3.5 Haiku, Llama 3.1 8B, and more brought the same kind of advice difference.

It wasn't always best to be a native white man, surprisingly. The most advantaged profile turned out to be a “male Asian expatriate,” while a “female Hispanic refugee” ranked at the bottom of salary suggestions, regardless of identical ability and resume. Chatbots don’t invent this advice from scratch, of course. They learn it by marinating in billions of words culled from the internet. Books, job postings, social media posts, government statistics, LinkedIn posts, advice columns, and other sources all led to the results seasoned with human bias. Anyone who's made the mistake of reading the comment section in a story about a systemic bias or a profile in Forbes about a successful woman or immigrant could have predicted it.

AI bias

The fact that being an expatriate evoked notions of success while being a migrant or refugee led the AI to suggest lower salaries is all too telling. The difference isn’t in the hypothetical skills of the candidate. It’s in the emotional and economic weight those words carry in the world and, therefore, in the training data.

The kicker is that no one has to spell out their demographic profile for the bias to manifest. LLMs remember conversations over time now. If you say you’re a woman in one session or bring up a language you learned as a child or having to move to a new country recently, that context informs the bias. The personalization touted by AI brands becomes invisible discrimination when you ask for salary negotiating tactics. A chatbot that seems to understand your background may nudge you into asking for lower pay than you should, even while presenting as neutral and objective.

"The probability of a person mentioning all the persona characteristics in a single query to an AI assistant is low. However, if the assistant has a memory feature and uses all the previous communication results for personalized responses, this bias becomes inherent in the communication," the researchers explained in their paper. "Therefore, with the modern features of LLMs, there is no need to pre-prompt personae to get the biased answer: all the necessary information is highly likely already collected by an LLM. Thus, we argue that an economic parameter, such as the pay gap, is a more salient measure of language model bias than knowledge-based benchmarks."

Biased advice is a problem that has to be addressed. That's not even to say AI is useless when it comes to job advice. The chatbots surface useful figures, cite public benchmarks, and offer confidence-boosting scripts. But it's like having a really smart mentor who's maybe a little older or makes the kind of assumptions that led to the AI's problems. You have to put what they suggest in a modern context. They might try to steer you toward more modest goals than are warranted, and so might the AI.

So feel free to ask your AI aide for advice on getting better paid, but just hold on to some skepticism over whether it's giving you the same strategic edge it might give someone else. Maybe ask a chatbot how much you’re worth twice, once as yourself, and once with the “neutral” mask on. And watch for a suspicious gap.


Amazon's AI coding agent was hacked - update now to avoid possible risks, users warned

Mon, 07/28/2025 - 14:34
  • Experts claim Amazon Q Developer Extension for VSC v1.84.0 had some dodgy code
  • This has now been removed, with version 1.85.0 offering a clean fix
  • A 2024 survey found around 5.6% of VSC extensions contain suspicious code

A hacker has planted data-wiping code into the Amazon Q Developer Extension for Visual Studio Code (VSC) – a free GenAI extension with nearly one million installs from the Microsoft VSC marketplace designed to help developers code, debug, document and configure projects.

On July 13 2025, the malicious commit from 'lkmanka58' on GitHub included a prompt to delete system and cloud resources, with Amazon unknowingly publishing the compromised version (1.84.0) on July 17.

With suspicious activity noted on July 23 and Amazon developers quickly springing into action, a clean version was released on July 24 without the malicious code, so users are being advised to update to 1.85.0 as a matter of urgency.
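If you're not sure which version you have installed, the VS Code CLI can list extensions with their versions. The snippet below is a minimal sketch of that check; the extension identifier is an assumption on my part, so compare it against what the CLI actually prints on your machine.

```python
# Minimal sketch: list installed VS Code extensions and flag the Amazon Q
# extension if it's older than the fixed release.
# Assumption: the extension ID below - verify it against your own CLI output.
import subprocess

EXTENSION_ID = "amazonwebservices.amazon-q-vscode"  # assumed identifier
FIXED_VERSION = (1, 85, 0)

output = subprocess.run(
    ["code", "--list-extensions", "--show-versions"],
    capture_output=True, text=True, check=True,
).stdout

for line in output.splitlines():
    name, _, version = line.partition("@")
    if name.lower() == EXTENSION_ID:
        installed = tuple(int(part) for part in version.split(".")[:3])
        verdict = "OK" if installed >= FIXED_VERSION else "UPDATE NOW"
        print(f"{name} {version}: {verdict}")
        break
else:
    print("Amazon Q extension not found in this VS Code install.")
```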

Amazon missed some malicious code in its Q Developer Extension

Despite the apparent threat, Amazon noted the code was malformed and wouldn't execute in user environments, although some researchers have disputed this, saying the code did execute but didn't cause any harm.

Regardless, version 1.84.0 has been removed altogether from distribution channels.

Still, users have expressed concerns that such a potentially dangerous snippet of code could have been missed by Amazon, taking to online communities like Reddit to criticize Amazon for silently editing the git history and being slow to disclose the mistake.

Amazon's incident isn't unique, though, with a 2024 academic survey of nearly 53,000 VS Code extensions revealing around 5.6% have suspicious elements like arbitrary network calls, privilege abuse or obfuscated code.

Ultimately, developers are being advised not to unconditionally trust IDE extensions and AI assistants; even so, many have been left disappointed that Amazon let this one slip through the net.

Via BleepingComputer

You might also like
Categories: Technology

Best Labor Day sales 2025: the date and what deals you can expect

Mon, 07/28/2025 - 14:22

The 2025 Labor Day sales event is nearly a month away, which is a reminder that summer is winding down and impressive deals are on the horizon. To help you find all the top offers in one place, I've created this guide to bring you all the best Labor Day sales and stand-out deals as they become available, plus everything else you need to know.

Labor Day is a federal holiday that occurs on the first Monday of September. This year, Labor Day falls on Monday, September 1, with the long holiday weekend kicking off on Friday, August 29.

Because Labor Day marks the unofficial end of summer and the beginning of a new school year, you can find clearance prices on outdoor items and record-low prices on tech gadgets, like laptops, tablets, and headphones. Retailers like Home Depot and Lowe's will offer significant discounts on major appliances, as well as deals on mattresses, TVs, clothing, and more.

Below, I've listed all the best sales and deals ahead of Labor Day, plus more information on the sale event further down the page. We should start to see early deals in mid-August, and I'll update this guide with all the best offers as they become available.

Today's best sales ahead of Labor Day

Today's best deals ahead of Labor Day

AirPods are a back-to-school essential, and Amazon has Apple's all-new AirPods 4 on sale for $99 - only $10 more than the record-low price. The AirPods 4 feature a new design for all-day comfort and Apple's H2 chip, which supports personalized spatial audio and voice isolation. You also get a redesigned case with 30 hours of battery life and USB-C charging.View Deal

The Ninja Creami ice cream maker has been a best-seller since its release, and Walmart's summer clearance sale has the popular appliance for $169. You can make ice cream, milkshakes, and sorbets with the touch of a button and add your favorite mix-ins and flavors.View Deal

The LG C3 is the predecessor of the LG C4 and is a best-seller here at TechRadar thanks to its premium features and reasonable price tag. Today's deal from Amazon brings the 65-inch model down to $1,186.95 - a record-low price. The stunning OLED display features a brilliant picture with bright colors and powerful contrast, thanks to LG's latest Alpha9 Gen6 chip. Additionally, you're getting four HDMI 2.1 ports for the ultimate gaming experience on next-gen consoles, a sleek and thin design, and an updated webOS experience.View Deal

The best-selling Fire TV Stick 4K streams shows and movies on your TV in ultra-high-definition 4K resolution and is also on sale for just $24.99 when you apply the code 4KADDFTV at checkout. It's a solid streaming stick with access to all the major apps and support for voice controls through Alexa.View Deal

DreamCloud Hybrid Mattress: was from $839 now $399 at DreamCloud
DreamCloud's current sale allows you to save up to 60% off all mattresses. Our top pick is the top-rated DreamCloud Hybrid, and with the current discount, you can get a queen size for $649. That makes the DreamCloud Hybrid a smart buy if you need a more budget-friendly and affordable mattress without compromising too much on quality.View Deal

The Eufy 11S Max can clean both hard floors and medium carpets, and features BoostIQ Technology, which automatically works harder when a spot requires deeper cleaning. Today's back-to-school deal from Amazon brings the price down to $154.99.View Deal

Processor: Apple M4
RAM: 16GB
Storage: 256GB

Amazon has a $200 discount on the latest MacBook Air - a fantastic deal if you're looking for an everyday laptop. While this particular model is a relatively iterative upgrade over the previous 2024 M3 version, it remains more powerful and more power-efficient, and features 16GB of RAM right out of the box. Overall, it's an excellent purchase for students looking to upgrade to a MacBook laptop.View Deal

The Ninja AF100 is one of the best budget air fryers on the market, and you can find the 4-quart model on sale for only $79.97. The 4-quart ceramic-coated basket is perfect for cooking and crisping up food with a capacity of around 2 lb. of French fries. It's easy to use too, with three preset functions and dishwasher-safe parts for an effortless cleanup.View Deal

You can get the latest Apple iPad A16 on sale for $299, only $20 more than the record-low price. The most significant upgrade compared to the previous generation model is the latest A16 chip for faster performance. You also get double the storage, with 128GB as standard, a sharp 11-inch Liquid Retina display, and solid 12MP front and back cameras.View Deal

Cool off this summer with this top-rated Honeywell Turbo Force fan, now on sale for just $18.94. The 10-inch fan features three different speed settings and a fan head that can pivot up to 90 degrees.View Deal

Amazon's all-new Fire TV Omni QLED Series is a big step up in the otherwise cheap range of smart TVs. This set boasts premium features, including a QLED display, full-array local dimming, Dolby Vision IQ, and HDR10+ Adaptive support to deliver a high-quality picture for all-around viewing and gaming. Today's deal brings the price of the 50-inch model down to $379.99 - just $30 more than the record-low price.View Deal

Labor Day sales 2025: FAQs

When is Labor Day 2025?

Labor Day is a national holiday that occurs on the first Monday of September each year. This year, the holiday will fall on Monday, September 1.

Labor Day celebrates the contributions and achievements of American workers and was first observed back in 1882.

Labor Day is also the unofficial end to summer, as most schools resume classes after the holiday weekend.

What Labor Day deals can you expect?

Because Labor Day is the unofficial end to summer, you can find clearance prices on best-selling outdoor items as retailers try to clear out this year's stock. You'll find record-low prices on patio furniture, grills, and lawnmowers from Home Depot and Lowe's, to name a few. Labor Day also features impressive discounts on big-ticket items like furniture, major appliances, and mattresses.

Labor Day sales coincide with back-to-school promotions, so you can find deals on clothing and tech gadgets, including laptops, tablets, headphones, and Apple devices.

Other popular Labor Day categories include TVs, smartwatches, and small appliances from retailers like Amazon, Best Buy, and Walmart.

Why you can trust TechRadar

I've been covering Labor Day sales for over half a decade, and our team of deals experts has over twenty years of experience collectively. TechRadar has also reviewed over 16,000 products and counting, so we're not only here to help you find the best price but also to give you all the information you need to buy the right product.

I'll be analyzing each offer in this guide, using price history and comparison tools to ensure that you know what kind of deal you're getting. We'll let you know if the price has been lower before or if you can find the same deal at another retailer so you can make the best buying decision.

How we find the best Labor Day deals

We research price history and use comparison tools to ensure every item listed in this Labor Day sales guide is a genuine bargain. We also use our extensive experience browsing retailers like Amazon, Best Buy, and Walmart to hand-pick the best deals based on price and popularity. We'll also let you know if a product is on sale for a record-low price, if it has been discounted further in the past, and if it's the best deal you can find right now.

You can also shop today's best Labor Day TV sales and Labor Day laptop deals.

Categories: Technology

If you ask ChatGPT why your energy bill is higher, it should probably blame itself

Mon, 07/28/2025 - 13:01

Hate to be a 'Debbie Downer', but all those prompts we're using to make action figures and Ghibli memes, along with the countless less exciting life and business requests we're stuffing into ChatGPT and other popular generative AI systems, are coming at a cost - one that may be landing on our doorsteps.

Don't get me wrong, I'm a huge fan of AI - I think it's the first technology in a generation to have truly society-altering implications. But if you're like me, you've been reading for some time about the ultra-high energy costs associated with Large Language Models (LLMs), especially training them, which, according to the IEEE, "involves thousands of graphics processing units (GPUs) running continuously for months."

AI model training is resource-intensive. Compared to traditional programming, it's like the difference between playing checkers and playing interdimensional chess against all the galaxies in the Star Trek universe. The number of parameters these systems examine to learn the essence of something - so they can instantly recognize a dog or a tree because they understand what makes up a dog or a tree - is, in human terms, almost inconceivable.

AI understanding is so much more complex than pattern matching. And not only do these models need to understand these things, they also need to know how to replicate representations of trees, dogs, cars, people, and scenarios, and realistically at that.

Feeding the AI monster

It's a heavy lift, and as Penn State Institute of Energy and the Environment noted in its April 2025 report, "By 2030–2035, data centers could account for 20% of global electricity use, putting an immense strain on power grids."

However, those energy costs are rising in real time now, and what I never really accounted for is how energy availability is a sort of zero-sum game. There's only so much of it, and when some part of the grid is eating more than its fair share, the remaining customers have to divvy up what's left and shoulder skyrocketing costs to keep backfilling their energy needs (as well as the energy needs of the data centers).

In the US, we're seeing this scenario play out in our pocketbooks as, according to PJM Interconnection (one of the country's largest grid operators), energy bills are rising in response to AI's overwhelming energy demands.

Data centers, which are dotted across the US, are often responsible for serving the cloud-based intelligence needs of systems like ChatGPT, Gemini, Copilot, Meta AI, and others. The need for supporting live responses and fresh training to keep the models in step with current information is putting pressure on our creaky energy infrastructure.

PJM, it seems, is spreading the cost of supporting these data centers across the network, and it's hitting customers to the tune of, according to this report, as much as a 20% increase in their energy bills.

In need of a solution yesterday

Because we live on AI Time, there is no easy solution. AI development isn't slowing down to wait for a long-term solution, with OpenAI's GPT-5 expected soon, Agentic AI on the rise, and Artificial General Intelligence on the horizon.

As a result, energy demand will surely rise faster than we can backfill with better energy management, improved infrastructure, and new resources. The International Energy Agency predicts that in the US, "power consumption by data centers is on course to account for almost half of the growth in electricity demand between now and 2030."

The issue is exacerbated by a faltering energy infrastructure in which older power plants are becoming less reliable, and some new rules restrict the use of fossil fuels. Most experts agree that renewable resources like solar and wind could help here, but that picture has recently become far less sunny.

Tilting at windmill farms

Earlier this month, the Trump Administration issued an Executive Order to "terminate the clean electricity production and investment tax credits for wind and solar facilities." President Trump famously hates windmill farms, calling them "garbage."

As the US pumps the brakes on clean and renewable resources, the current grid will continue to huff and puff its way through supporting untold numbers of meme-generating prompts, requests for business proposal summaries, and AI videos featuring people eating cats that turn into pasta (yes, that's a thing).

At home, we'll be opening our latest electricity bills and wondering why the energy bill's too damn high. Perhaps we'll power up ChatGPT and ask it for an explanation. One can only hope it points you back to this article, though that seems unlikely too.

You might also like
Categories: Technology

Nvidia's N1X consumer chip pops up in benchmark equalling core count of RTX 5070 GPU - cue excited gasps, but let's not get carried away

Mon, 07/28/2025 - 12:30
  • Nvidia's N1X chip has been spotted in a Geekbench result
  • The specs show the integrated GPU has 6,144 CUDA cores
  • That equals the RTX 5070 for pure core count, but there's much more to factor in when it comes to performance

Remember Nvidia's rumored CPU that caused quite a buzz on the grapevine last year? We've apparently now seen this consumer chip in a benchmark leak, and the spec details it spills are the real story here.

Tom's Hardware reports that the N1X chip, which is Arm-based (like Qualcomm's Snapdragon X CPUs), has been spotted in a Geekbench result, specifically for the OpenCL (graphics) test, where it scored 46,361.

That score is pretty much meaningless at this point. This is an early engineering sample of the N1X (in theory), and even then, if you want to gauge graphics performance, Geekbench is far from the first choice of synthetic benchmarks.

As noted, though, this gives us a tantalizing glimpse of the spec, which shows that (add salt now) the N1X will have 20 cores, apparently split into a pair of 10-core clusters. That's the processor itself, but we also see the integrated GPU here, which is shown to have 48 Streaming Multiprocessors (SMs) - at 128 CUDA cores per SM, that equates to 6,144 CUDA cores.

That sounds like a lot, right? Well, it is, and those familiar with Nvidia's graphics cards will realize that this is in the ballpark for a mid-range current-gen GPU - to be precise, the RTX 5070, which has that exact core count.

Analysis: cautiously optimistic

(Image credit: Nvidia)

So, are we getting a compact consumer chip that could go in budget laptops or handhelds to deliver the same frame rates as the mighty RTX 5070? In a word, no, but the N1X still looks to be shaping up as a promising piece of silicon, and one that will have rivals sitting up and taking notice.

As to why performance can't simply be read off the number of GPU cores seen here - the N1X is not a patch on the RTX 5070 in this benchmark, of course - there are other important factors at play beyond the basic core count.

Those include the clock speed and the power supplied to the GPU, which is a very different scenario with integrated graphics in a chip like this versus a full-on graphics card in a desktop PC. As well as the tighter power envelope, throw in bandwidth limitations too - tasks have to be piped over to system memory, with no on-board VRAM, of course - and the upshot is that the integrated GPU faces a good deal of headwinds.

That won't stop the N1X from being a potentially sterling performer for an all-in-one chip, but there's not much point trying to guess at the exact level of graphics performance that it might provide at this stage. (Certainly not from the leaked benchmark here, as already noted).

Tom's makes an interesting observation, which is that the leaked specs match Nvidia's GB10 'superchip' built for powerful AI performance and ushering in the era of the tiny AI supercomputer (pictured above). There's no reason why Nvidia couldn't put out another spin on this for consumer-targeted devices, including mini PCs and laptops, and indeed, gamers are getting particularly excited about the possible use in handhelds.

For now, though, this is still very much in rumor territory. If previous speculation is to be believed, we might see Nvidia's consumer CPU revealed later this year, ahead of a launch in early 2026.

You might also like
Categories: Technology

Over 340,000 Brits want to repeal the UK Online Safety Act – here's how to get your say

Mon, 07/28/2025 - 12:29
  • A petition to repeal the UK Online Safety Act has already reached over 340,000 signatures in just a few days
  • The UK Parliament must consider for debate any petition that gets more than 100,000 signatures
  • New age verification rules were enforced on July 25, 2025, sparking concerns for people's digital rights

A petition to repeal the UK Online Safety Act has garnered over 340,000 signatures in just a few days after strict new age verification requirements came into force.

Starting from Friday, July 25, 2025, all platforms displaying adult content must verify that all their users are over 18 years old via robust age checks. Social media, gaming services, and dating apps are also required to shield minors from harmful content via similar checks.

These requirements have sparked concerns among politicians, digital rights experts, and technologists who fear that invasive ID checks could lead to data breaches, surveillance, and free speech limitations.

The petition has now crossed 100,000 and so will be considered for debate. The next steps are: contact your MP and ask them to be at any debate; explain YOUR issues with the act (my reasons for starting it are probably different than yours for signing it); keep signing. pic.twitter.com/EkYqBdH2AN - July 25, 2025

"We believe that the scope of the Online Safety Act is far broader and restrictive than is necessary in a free society," reads the petition created by Alex Baynham, a Londoner who launched a new independent party, Build, in December last year.

"We think that Parliament should repeal the act and work towards producing proportionate legislation rather than risking clamping down on civil society talking about trains, football, video games, or even hamsters because it can't deal with individual bad faith actors."

While the UK Parliament must consider for debate any petition that gets more than 100,000 signatures, Baynham encourages anyone concerned to have their say.

To do so, you should sign the petition, contact your MP, and explain the reason you are worried. The deadline is October 22, 2025. Yet, considering the huge response, a debate may be arranged way before that.

Age verification – what are the risks and how to stay safe

The new rules certainly come as a way to stop children from accessing inappropriate and dangerous content online. Yet, age checks also come with significant risks for people's privacy, security, and other rights like free speech and access to information.

You now need to be ready to scan your face, credit card, or ID document if you want to access some content on X, Reddit, or Bluesky in the UK. The same goes if you want to play a new over-18 video game, find a new match on a dating app, or watch a video reserved for adults only.

This involves you trusting these service providers to take good care of this highly sensitive data - something that, as the recent Tea app hack shows, isn't always possible. A data breach of this magnitude could expose millions of Brits to identity theft, fraud, and other dangers.

Similarly, some experts argue that getting rid of online anonymity could enable greater surveillance by leaving access to this data vulnerable to abuse.

Experts also fear the new rules could lead to greater censorship, as platforms are now required to delete or block all content defined as harmful.

A virtual private network (VPN) is security software that encrypts all your internet connections and spoofs your real IP address. (Image credit: Getty Images)

Despite the UK's regulator, Ofcom, advising against it, Britons have been turning to the best VPN apps en masse to avoid giving up their most precious data just to access a website.

Proton VPN, for example, saw a surge in sign-ups, recording an hourly increase of over 1,400% starting from Friday at midnight.

Talking to TechRadar, a Proton spokesperson said: "This clearly shows that adults are concerned about the impact universal age verification laws will have on their privacy."

You might also like
Categories: Technology

Wi-Fi signals could be used to uniquely identify individuals — WhoFi complements biometrics prompting privacy fears

Mon, 07/28/2025 - 12:04
  • WhoFi uses Wi‑Fi signal distortions to fingerprint individuals without visual data
  • Deep neural network maps signal changes to identify people with near‑perfect accuracy
  • Academic research opens new privacy debates around biometric tracking via Wi‑Fi signals

Researchers at La Sapienza University in Rome have created WhoFi, a system which claims to be able to identify individuals by analyzing Wi‑Fi signals.

The system tracks people by interpreting how their presence disrupts Wi‑Fi patterns, offering a potential alternative to conventional biometric methods.

The technology works by examining Channel State Information, or CSI, which measures changes in Wi‑Fi signals caused by people and objects - and a deep neural network then interprets these disturbances as individual fingerprints.
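The paper describes training a deep network on sequences of these measurements; as a much simpler stand-in, the sketch below illustrates the basic re-identification idea of turning a CSI capture into an embedding and matching it against known signatures by similarity. The network shape, input dimensions, and data are all assumptions for illustration, not the researchers' actual architecture.

```python
# Minimal sketch of CSI-based re-identification (not the WhoFi architecture).
# Assumptions: CSI amplitudes arrive as fixed-length vectors (e.g. 3 antennas
# x 114 subcarriers = 342 values); all dimensions here are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CSIEncoder(nn.Module):
    """Map one CSI measurement to an embedding that acts as a signal 'fingerprint'."""
    def __init__(self, csi_dim: int = 342, embed_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(csi_dim, 512), nn.ReLU(),
            nn.Linear(512, 256), nn.ReLU(),
            nn.Linear(256, embed_dim),
        )

    def forward(self, csi: torch.Tensor) -> torch.Tensor:
        # L2-normalize so cosine similarity is a plain dot product.
        return F.normalize(self.net(csi), dim=-1)

encoder = CSIEncoder()

# A pretend gallery of enrolled signatures and one new, unlabeled capture
# (random tensors stand in for real CSI data).
gallery = encoder(torch.randn(5, 342))
query = encoder(torch.randn(1, 342))

similarities = query @ gallery.T          # cosine similarities, shape (1, 5)
best = similarities.argmax(dim=-1).item()
print(f"Closest match: person {best}, similarity {similarities[0, best]:.2f}")
```

In a real system, the encoder would be trained so that captures of the same person land close together, which is what lets the network re-identify someone from signal distortions alone.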

No cameras or physical contact required

The researchers claim the system delivers 95.5% accuracy in identifying people even under different environmental conditions.

The team behind WhoFi includes Danilo Avola, Daniele Pannone, Dario Montagnini and Emad Emam, who previously proposed a system called EyeFi in 2020. The new system is more accurate and capable of re‑identifying people via non‑visual biometric signatures embedded in CSI.

WhoFi does not rely on cameras or physical contact. It needs only an existing Wi‑Fi network to sense human presence and movement.

The technology can operate in darkness, through walls, and even around obstacles, making it a discreet option compared to video surveillance systems.

The researchers stress that WhoFi does not collect personal data or reveal identities in the conventional sense, noting, “By leveraging non‑visual biometric features embedded in Wi‑Fi CSI, this study offers a privacy‑preserving and robust approach for Wi‑Fi‑based Re‑ID, and it lays the foundation for future work in wireless biometric sensing.”

Still, it’s clear that the ability to track individuals without their knowledge is a potential privacy nightmare.

Breaches of routine privacy can reveal patterns of daily behavior, such as regular locations or movements, potentially exposing sensitive personal habits.

So far, WhoFi remains an academic project with no known plans for commercial or government deployment. Yet the advantages in surveillance capability are clear. It can bypass poor lighting and crowded environments and is less conspicuous than cameras or visual scanners.

A number of similar Wi-Fi-based detection technologies have surfaced in various forms over the years.

Gamgee developed a fall detection system that could alert others if someone fell or if an intruder entered the home.

Comcast’s Xfinity service introduced Wi-Fi Motion, which turns everyday devices like smart fridges, printers, or TVs into motion sensors.

Other researchers have gone further, using Wi-Fi signals to "see" through walls. A UC Santa Barbara team created a system that outlines objects and even reads letters through barriers.

A similar study from Carnegie Mellon University demonstrated how standard Wi-Fi routers can detect a person’s location and body position inside a room.

You can read more about the research behind WhoFi in this paper published on the arXiv preprint server.

Via Tech Xplore

You may also like
Categories: Technology

Microsoft just turned Edge into a futuristic voice-controlled AI browser using Copilot, and now I’m wondering why it took so long

Mon, 07/28/2025 - 12:00
  • Copilot Mode turns the Edge browser into a voice-controlled AI experience
  • It can read across all open tabs to get more of the context of what you're doing
  • Future features will let Edge perform tasks, like booking tickets

Microsoft has just gone all-in with AI in its Edge browser, launching a new Copilot Mode. The new mode is an opt-in feature that completely changes the way you use the browser.

Now, Edge doesn’t just wait for you to click something, it anticipates what you might like to do next, and you can ask Copilot questions about the content you are currently viewing.

If this reminds you a little too much of Microsoft's ill-fated Clippy, the 'helpful' paperclip assistant that would try to work out what you were doing in Office 97 and offer assistance, then don't worry – Copilot Mode is much less invasive, and can also easily be turned off if you don't like it.

In fact, I'd go as far as to say that the new Copilot Mode is a natural evolution of the browser, and feels like exactly the right direction for Microsoft to be heading in, especially given the positive reaction to other AI browsers, like Comet from Perplexity.

A stripped back look

(Image credit: Microsoft)

The first thing you notice when you've turned Copilot Mode on is a clean, streamlined page with a single input box in the centre. From here, you can access chat, search, and web browsing.

But you don’t even need to type anything to browse the web with Copilot Mode. One of the standout features is that you can now talk to your browser using your voice, giving it commands that mean you can browse the web faster and without having to type at all.

You can do things like open a YouTube video and say something like “go to the section where it shows you how to build a website,” and Copilot will find that exact section in the video for you.

Or, if you're watching a long video that has a recipe in there somewhere, you can ask Copilot to find the recipe and give it to you in text form.

Seen in action, the new Copilot Mode looks very impressive, because (in a feature that's coming soon) you'll be able to instruct it to handle tasks for you, giving the browser agentic qualities.

So, you could ask Edge to search for something, and even book activities and services using your voice, all in the browser.

The big new features of Copilot in Edge are:

Multi-tab context

Copilot can use AI to get the full context of what you’re exploring online because it will have access to all your open tabs, so it can work out what your priorities are, then act on them.

Actions

This is Microsoft's name for the natural voice navigation I mentioned earlier. You can speak to Copilot about what you are trying to do on a page, so you can get it to compare prices or find particular information on the page.

A 'coming soon' addition is that you'll be able to get Copilot to search your history and credentials to carry out more advanced actions, like booking reservations.

Dynamic pane

Copilot doesn't get in the way because it appears in a dynamic pane that doesn't interfere with the web page you're looking at. This way, your Copilot interaction will also avoid being disrupted by pop-ups or advertisements on the web page.

Pick up where you left off

Another ‘coming soon’ feature is the ability for Copilot to continue with a topic you’re researching from the last time you used the browser. So, if you were researching how to start a business, you can just pick up from where you left off last time.

Privacy and security

Once a browser starts to exhibit agentic qualities (the ability to perform tasks like booking things for you), the issue of security naturally arises. To this end, Microsoft promises to only collect the data needed to improve your experience, and says your data in Copilot for Edge is kept secure and never shared without your permission.

How to get Copilot Mode in Edge

While not all the new features are available yet, you can try out Copilot Mode in your Edge browser right now.

It will be available in the Edge browser on both Windows and Mac. Starting today, you can go to aka.ms/copilot-mode to opt in to Copilot Mode. Once you’ve done that, you can toggle Copilot Mode on or off directly in your settings.

You might also like
Categories: Technology
