At Google I/O 2025 Google finally gave us what we’ve all been waiting for (well, what I’ve been waiting for): a proper Android XR showcase.
The new Google operating system made for Android headsets and Android glasses has been teased as the next big rival to Meta’s Horizon OS – the software that powers the Meta Quest 3 and Quest 3S – and we finally have a better picture of how it stacks up.
Admittedly the showcase was a little short, but we did learn several new details about Android XR – here are the four you need to know.
1. Android XR has Gemini at its core

(Image credit: Future)

While I’d argue Google’s Android XR showcase wasn’t as in-depth as I wanted, it did show us what the operating system has running at its core: Google Gemini.
Google’s advanced AI is the OS’ defining feature (at least, that’s how Google is positioning it).
On the glasses, Gemini can recommend a place to eat ramen and then show on-screen directions to get there, and it can perform live translation; on a headset, it can use Google Maps' immersive view to virtually transport you to any destination you request.
Particularly on the glasses this completely hands-free approach – combined with cameras and a head-up display – looks to be Google Gemini in its most useful form. You can get the assistant’s help as quickly as you can ask for it, no fumbling to get your phone out required.
I want to see more but this certainly looks like a solid upgrade on the similar Meta AI feature the Ray-Ban Meta smart glasses offer.
2. Android XR is for more than Samsung

(Image credit: Xreal)

Ahead of Google I/O we knew Samsung was going to be a key Android XR partner – alongside Qualcomm, which is providing all the necessary Snapdragon chipsets to power the Android XR hardware.
But we now know several other companies are collaborating with Google.
Xreal has showcased Project Aura, which will bring Android XR to an upgraded version of the tethered glasses we’re familiar with (like the Xreal One) – with Aura gaining a camera and a Snapdragon processor.
Google also teased glasses from Gentle Monster and Warby Parker, implying it is taking Meta’s approach of partnering with fashion brands rather than just traditional tech brands.
Plus, given that Gentle Monster and Warby Parker offer very different design aesthetics, this is good news for people who want varied fashion choices for their new smart glasses.
3. Project Moohan is still coming ‘later this year’

(Image credit: Google)

The Android XR headset Project Moohan is still set to launch in 2025, but Google and Samsung have yet to confirm a specific release date.
I was hoping we’d get something more concrete, but continued confirmation that Moohan will be landing in 2025 is better than it being delayed.
In fact, Google and its partners weren’t keen to give us any firm dates. Xreal calling Project Aura the second official Android XR device suggests it’ll land sometime after Moohan but before anything else – however, we’ll have to wait and see how that plays out.
4. Meta should be worried, but not terrified

(Image credit: Google)

Google certainly dealt XR’s biggest player – Meta, with its hugely popular Quest headset hardware – a few blows and gave its rival something to be worried about.
However, this showcase is far from a finisher, especially not in the headset department.
Meta’s Connect 2025 showcase in September is expected to show us similar glasses tech and features, and depending on release dates Meta might beat Android XR to the punch.
That said, competition is only going to be a good thing for us consumers, as these rivals battle over price and features to entice us to one side or the other. Unlike previous battles in the XR space this certainly seems like a balanced fight and I’m excited to see what happens next.
Google and Samsung's Android XR collab has been a major focus, but at Google I/O 2025 a new (yet familiar) partner emerged to showcase the second official Android XR device: Xreal with Project Aura.
Xreal and its Xreal One glasses currently top our list for the best smart glasses thanks to their impressive audio and visual quality.
However, while they include AR elements – they make your connected device (a phone, laptop, or console, among other options) float in front of you like you’re in a private movie theatre, which is fantastic, by the way – they aren’t yet as versatile as the smart glasses Google, Meta, Snap and others are promising.
Xreal Project Aura – a pair of XR glasses officially referred to as an optical see-through (OST) XR device – should shift Xreal’s range towards that of its rivals thanks to its advanced Qualcomm chipset, Xreal’s visual system expertise, and Google’s Android XR software. That combination should (hopefully) form a more fully realized spatial computing device than we’ve seen from Xreal before.
Samsung's aren't the only Android XR glasses

(Image credit: Google)

As exciting as this announcement is – I’ll explain why in a moment – we should keep our emotions in check until further details on Project Aura are revealed at the Augmented World Expo (AWE) in June, and in other announcements set to be made “later this year” (according to Xreal).
That's simply because, beyond its existence and its general design, we know very little about Aura.
We can see it has built-in cameras, we've been promised Qualcomm processors, and it appears to use the same dual-eye display technology as Xreal’s other glasses. Plus, it'll be tethered rather than fully wireless, though it should still offer all of the Android XR abilities Google has showcased.
But important questions like its cost and release date haven’t yet been detailed.
I’m hoping it’ll offer us a more cost-effective entry point to this new era of XR glasses, but we’ll have to wait and see before we know for certain whether this is “a breakthrough moment for real-world XR,” as Chi Xu, co-founder and CEO of Xreal, promises.
Still, even before knowing its specs and other key factors I’m leaning towards agreeing with Xreal’s CEO.
I love my Xreal One glasses (Image credit: Future / Hamish Hector)

Meta should be worried

So why is this Xreal Android XR reveal potentially so important in my eyes?
Because while Meta has promised its Horizon OS will appear on non-Meta headsets – from Asus, Lenovo, and Xbox – we’ve seen nothing of these other headsets in the year-plus since that announcement. That is, beyond a whisper on the wind (read: a small leak) about Asus’ Project Tarius.
Android XR on the other hand has, before launch, not only confirmed collaborations between Google and other companies (Xreal and Samsung) but shown those devices in action.
They aren’t just promises, they’re real.
A threat to the Meta Quest 3?

(Image credit: Meta)

Now the key deciding factor will be whether Android XR can prove itself as an operating system that rivals Horizon OS in the breadth and quality of its XR apps. With Google, Samsung, Xreal, and more behind it, I’m feeling confident that it will.
If it lives up to my expectations, Android XR could seriously shake up Meta’s XR dominance thanks to the varied XR hardware options under its umbrella out of the gate – competition that should result in better devices and prices for us consumers.
We’ll have to continue to watch how Android XR develops, but it looks like Google is off to a very strong start. For the first time in a while Meta might finally be on the back foot in the XR space, and the ball is in its court to respond.
You may have already seen Google's Project Starline tech, which reimagines video calls in full 3D. It was first teased over four years ago, and at Google I/O 2025 we got the news that it's rolling out in full with a new name: Google Beam.
Since its inception, the idea of Google Beam has been to make it feel like you're in the same room as someone when you're on a video call with them. Rather than using headsets or glasses though, it relies on cameras, mics, and AI technology.
"The combination of our AI video model and our light field display creates a profound sense of dimensionality and depth," says Google. "This is what allows you to make eye contact, read subtle cues, and build understanding and trust as if you were face-to-face."
Beam participants need to sit in a custom-made booth, with a large, curved screen that's able to generate a partly three-dimensional rendering of the person they're speaking to. The first business customers will get the equipment from HP later this year.
Real-time translation

Google Beam in action (Image credit: Google)

There's another element of Google Beam that's been announced today, and that's real-time translation. As you might expect, this is driven by AI technology, and makes it easier to converse with someone who speaks a different language.
As per the demo Google has shown off, the translation runs just a second or two behind the speech, with the translated audio layered on top of the speaker's voice – much like a dubbed voice-over added to a video recording.
It's another impressive part of the Google Beam experience, and offers another benefit for organizations with teams and clients all across the world. According to Google, it can preserve voice, tone, and expression, while changing the language the audio is spoken in.
This part of the experience won't only be available in Google Beam, though: it's rolling out now inside Google Meet for consumers, though you'll need either the Google AI Pro or the Google AI Ultra plan to access it.
Google became a verb for search long before AI chatbots arrived to answer their first prompt, but now those two trends are merging as Google solidified AI's position in search with the full rollout of AI Mode for all US Google Search users. Google made the announcement as part of Google I/O, which is underway in California.
Getting results from a generative model that often gives you everything you need right on the Google results page is a fundamental shift from the traditional web search paradigm.
For over 25 years, we've searched on a term, phrase, or even a complete thought and been given pages and pages of links. The first page holds the links that matter most, in that they most closely align with your query. It's no secret that companies, including the one I work at, fight tooth and nail to create content that lives on the first page of those results.
Things began to change in the realm of Google Search when Google introduced AI Overviews in 2024. As of this week, they're used by 1.5 billion monthly users, according to Google.
Where AI Overviews were a light-touch approach to introducing generative AI to search, AI Mode goes deeper and further. The latest version of AI Mode, introduced at Google I/O 2025, adds more advanced reasoning and can handle even longer and more complex queries.
Suffice it to say, your Google Search experience may never be the same.
View from the top

Google CEO Sundar Pichai (Image credit: Bloomberg/Getty Images)

Google CEO Sundar Pichai, though, has a different view. In a conversation with reporters before Google I/O, in answer to a question about the rise of AI chatbots like Gemini and the role of search, Pichai said, "It's been a pretty exciting moment for search."
He said that AI Overviews, and even the limited AI Mode tests, have driven increased engagement, with people spending more time in search and inputting longer and longer queries.
No one asked if the rise of chatbots could mean the end of search as we know it, but perhaps Pichai inferred the subtext, adding, "It's very far from a zero-sum moment."
If anything, Pichai noted, "People, I think, are just getting more curious; people are using a lot of this a lot more."
While AI Overviews are often accused of having some factual issues, Google is promising that AI Mode, which uses more powerful models, will be more accurate. "AI Mode uses more powerful models and uses more reasoning across – sort of doing more work – ...and so it reaches an even higher bar," said Google Search head Liz Reid.
As for where search is going, Pichai sees features like AI Mode "expanding use cases". He also thinks that agentic AI is "giving people glimpses of a proactive world."
I think, by this, Pichai means that AI-powered search will eventually learn your needs and do your bidding, even if your query or prompt doesn't fully describe your need.
What that means in practice is still up for debate, but for Google and Pichai, the advent of AI in search is all upside.
"I do think it's made the Web itself more exciting. People are engaging a lot more across the board, and so it's a very positive moment for us."
Google's AI image generation just levelled up, with the new Imagen 4 bringing with it a bunch of big upgrades, including a higher resolution and better text handling.
The upgrade was announced at Google I/O 2025 today, and should noticeably improve Gemini’s image capabilities, which were already rivalling those of ChatGPT.
Taking over from Imagen 3, Imagen 4 has "remarkable clarity in fine details like intricate fabrics, water droplets and animal fur, and excels in both photorealistic and abstract styles”, according to Google. You can see the new level of detail in the preview images above and below.
Imagen 4 is also the first version of Google’s AI image generator that can go up to 2K resolution, meaning you’ll be able to make larger images for presentations and pictures that will look even better when printed out.
The detail on the water droplets in this image generated by Imagen 4 is quite impressive. (Image credit: Google)

A real challenge for AI image generators in the past (apart from creating realistic fingers) has been representing text in a way that makes sense and is readable.
While Imagen 3 did make significant inroads into presenting typography in a better way, Imagen 4 promises to take text to the next level.
Google claims Imagen 4 will be “significantly better at spelling and typography, making it easier to create your own greeting cards, posters and even comics”.
Usage limits

When it comes to the usage limits on Imagen 4, we don’t expect the situation to be radically different from that with Imagen 3, but we’ll update this post if we hear anything different.
Currently, if you are using Imagen 3 through the Gemini chatbot, daily limits vary depending on whether you’re a free Gemini user or a Gemini Advanced subscriber.
Free users can expect around 10-20 image generations per day, depending on how heavily the service is being used. Gemini Advanced subscribers can expect higher limits of up to 100-150 daily image generations.
As with Imagen 3, there are content restrictions on Imagen 4, especially around generating images of real individuals. However, Imagen 4 has no problems generating images of generic people.
Available today across Google apps

Imagen 4 isn’t only available in Gemini: from today you’ll be able to use it across Whisk, Vertex AI, Slides, Vids, Docs and more in Workspace.
And there’s more to come, too. Google says that it will “soon” be launching a super-fast variant of Imagen 4 that’s up to 10x faster than Imagen 3 at generating images.
Google is adding some extra brainpower to Gemini with a new Deep Think mode. The company unveiled the latest option for Google Gemini 2.5 Pro at Google I/O 2025, showing off just what its AI can do with extra depth.
Deep Think basically augments Gemini's AI 'mind' with additional brains. Gemini in Deep Think mode won't just spit out an answer to a query as fast as possible. Instead, it runs multiple possible lines of reasoning in parallel before deciding how to respond. It’s like the AI equivalent of looking both ways, or rereading the instructions before building a piece of furniture.
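To make that idea concrete, here's a toy Python sketch of the general "sample several reasoning paths in parallel, then answer with the consensus" pattern (similar in spirit to self-consistency decoding). This is emphatically not Google's implementation – Deep Think's internals haven't been published – and ask_model is a hypothetical stand-in:

```python
import collections
import concurrent.futures
import random

def ask_model(prompt: str) -> str:
    """Hypothetical stand-in for one sampled model response. A real system
    would call an LLM at a nonzero temperature so each call can follow a
    different line of reasoning; here we just simulate noisy answers."""
    return random.choice(["42", "42", "42", "41"])  # mostly right, occasionally not

def deep_answer(prompt: str, n_paths: int = 8) -> str:
    # Explore several independent reasoning paths in parallel...
    with concurrent.futures.ThreadPoolExecutor(max_workers=n_paths) as pool:
        answers = list(pool.map(ask_model, [prompt] * n_paths))
    # ...then respond with the consensus (most common) final answer.
    return collections.Counter(answers).most_common(1)[0][0]

print(deep_answer("What is 6 x 7?"))
```

The trade-off is the same one Deep Think makes: more compute and latency per question, in exchange for answers that are less likely to ride on a single bad reasoning path.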
And if Google's tests are anything to go by, Deep Think's brainpower is working. It’s performing at a top-tier level on the 2025 US math olympiad, coming out on top in the LiveCodeBench competitive programming test, and scoring an amazingly high 84% on the popular MMMU, a sort of decathlon of multimodal reasoning tasks.

Deep Think isn’t widely available just yet. Google is rolling it out to trusted testers only for now. But, presumably, once all the kinks are ironed out, everyone will have access to the deepest of Gemini's thoughts.
Gemini shines on

Deep Think fits right into the rest of Gemini 2.5’s growing lineup and the new features arriving for its various models in the API used by developers to embed Gemini in their software.
For instance, Gemini 2.5 Pro now supports native audio output. That means it can talk back to you. The speech has an “affective dialogue” feature, which detects emotional shifts in your tone and adjusts accordingly. If you sound stressed, Gemini might stop talking like a patient customer service agent and respond more like an empathetic and thoughtful friend (or at least how the AI interprets such a response). And it will be better at knowing when to talk at all thanks to the new Proactive Audio feature, which filters out background noise so Gemini only chimes in when it’s sure you’re talking to it.
Paired with new security safeguards and the upcoming Project Mariner computer-use features, Gemini 2.5 is trying very hard to be the AI you trust not just with your calendar or code, but with your book narration or entire operating system.
Another element expanding across Gemini 2.5 is what Google calls a 'thinking budget.' Previously unique to Gemini 2.5 Flash, the thinking budget lets developers decide just how deeply the model should think before responding. It's a good way to ensure you get a full answer without spending too much. Otherwise, Deep Think could give you just a taste of its reasoning, or give you the whole thing and make it too expensive for any follow-ups.
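For developers, a thinking budget is just a request parameter. Here's a minimal sketch using the google-genai Python SDK's ThinkingConfig; the model name is illustrative, and exact field names may shift as the API evolves:

```python
from google import genai
from google.genai import types

client = genai.Client(api_key="YOUR_API_KEY")

# Cap how many tokens the model may spend "thinking" before it answers.
# A bigger budget allows deeper reasoning; a smaller one keeps costs down.
response = client.models.generate_content(
    model="gemini-2.5-flash",  # illustrative; use whichever 2.5 model you have access to
    contents="Plan a three-day Stockholm itinerary for an audio nerd.",
    config=types.GenerateContentConfig(
        thinking_config=types.ThinkingConfig(thinking_budget=1024)
    ),
)
print(response.text)
```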
In case it's not clear what those thoughts involve, Gemini 2.5 Pro and Flash will offer 'thought summaries' for developers – a document laying out how the model applied information through its reasoning process, so you can actually look inside the AI's 'brain'.
All of this signals a pivot from models that just talk fast to ones that can reason more deeply, if more slowly. Deep Think is part of that shift toward deliberate, layered reasoning. It’s not just trying to predict the next word anymore; it's applying that logic to ideas and to the very process of coming up with answers to your questions. Google seems keen to make Gemini not only able to fetch answers, but to understand the shape of the question itself.
Of course, AI reasoning still exists in a space where a perfectly logical answer might come with a random side of nonsense, no matter how impressive the benchmark scores. But you can start to see the shape of what’s coming, where the promise of an actual 'co-pilot' AI comes to fruition.
Death Stranding 2: On the Beach director Hideo Kojima has said that his next game, Physint, is still at least five to six years away from release.
In a new interview with French magazine Le Film Français (via VGC) ahead of the launch of Death Stranding 2, Kojima was asked whether he would ever consider directing a film in the future.
The game director said he would, and that he "received many offers after leaving Konami."
Besides the Death Stranding sequel, Kojima is currently working on his action espionage game Physint, which he said will take another five to six years to finish before he can consider moving into filmmaking.
“Besides Death Stranding 2, there is Physint in development," Kojima said. "That will take me another five or six years. Maybe after that, I could finally decide to tackle a film. I grew up with cinema. Directing would be a kind of homage to it. Besides, I’m getting older, and I would prefer to do it while still young.”
Physint is a brand-new "original IP" that was announced during the PlayStation State of Play in January 2024 and will be Kojima Productions' third major game.
Kojima is also developing OD, his horror project for Microsoft that was revealed back in 2023. The director didn't mention anything new about OD during the interview, but it's said to be a "totally new style of game" being developed alongside Xbox Game Studios, and will star actors Sophia Lillis, Udo Kier, and Hunter Schafer.
For now, Kojima fans can look forward to Death Stranding 2: On the Beach, which is set to launch on June 26, 2025, for PS5.
The 2025 Europa League Final is here – Tottenham face off against Man Utd in an all-English final as both teams look to put dreadful domestic campaigns behind them.
The final will not only see one team lift the trophy, but also secure Champions League football for 2025-26 – a sweet reward on offer in UEFA's second-tier competition.
FREE coverage is being provided by TNT Sports via Discovery Plus in the UK and Ireland.
Ready to catch all the action? We'll keep you up-to-date with all the latest from Bilbao including highlights, replays and live updates.
(Image credit: Photo by Alex Pantling - UEFA/UEFA via Getty Images)

Tottenham and Man Utd face off tomorrow night in one of the most highly anticipated Europa League finals in many a year.
The finalists have only received 15,000 tickets, but if you can't make it to Bilbao you can keep up with the action across a multitude of TV channels and streams.
TNT Sports have made it FREE via Discovery Plus in the UK and Ireland, while those in the US can keep up with the action using Paramount Plus.
Ange Postecoglou denies being a 'clown'.
The Australian has addressed the press 24 hours out from the crunch clash against Ruben Amorim's side.
We'll show you how to catch all the action wherever you are right here.
Europa League Final: FREE in the UK

Did you know the game is being broadcast for FREE on Discovery Plus in the UK and Ireland?
Windows 10 users need to be aware of a fresh bug in the latest update for the OS, even though it’s a glitch that’s going to be much more prevalent on business laptops than consumer machines.
That’s because if your Windows 10 PC does encounter the problem, it can be quite a nasty one to have to rescue your system from – and you can avoid any potentially technically traumatic episode by simply installing an emergency fix Microsoft has just rushed out.
Windows Latest reported the issue with the May update for Windows 10, which causes an affected PC to fail to install the upgrade, and then run an automatic repair – a process that can happen several times, confusingly.
Adding further to the confusion is that if you have BitLocker or Device Encryption turned on (so the data on your drive is encrypted), you’ll end up at the recovery screen. That recovery process asks for your key ID, and if you don’t have that info to hand, then you’re in something of a pickle, shall we say.
Let’s cover those all-important caveats first, though, the main one being that to be affected, your PC must be running an Intel vPro processor (10th-gen or newer). This is because the bug relates to Intel Trusted Execution Technology (TXT for short) which is part of the vPro array of security measures.
As the name suggests, vPro is a brand of chips mostly used for professional (business) notebooks, but they can be found in consumer laptops, too. As Microsoft notes: “Consumer devices typically do not use Intel vPro processors and are less likely to be impacted by this issue.”
It’s worth checking whether your PC has such an Intel vPro chip inside – and if it does, and you haven’t already installed the May update for Windows 10 22H2, whatever you do, push pause on that.
Rather than grabbing the May cumulative update, to avoid the bug in question, make sure you install Microsoft’s emergency patch which was deployed yesterday.
This is KB5061768, which you can only install manually – it won’t be delivered by Windows Update. Get it from Microsoft’s update catalog here, and download the ‘Windows 10 version 1903 and later’ variant that’s correct for your PC. (That’s likely the 64-bit (or x64) version – check your processor type in the Device Specifications section of System > About in the Settings app. If you don’t have a 64-bit CPU and OS, you want the x86 version; ignore the Arm variant.)
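If you'd rather check from a script than click through Settings, here's a small Python sketch that shells out to PowerShell to read the processor name and see whether KB5061768 is already present. Two caveats: the CPU name alone won't confirm vPro/TXT support (check Intel's spec page for your exact model), and Get-HotFix doesn't list every update type, so treat a negative result as a prompt to check your update history manually:

```python
import subprocess

def ps(command: str) -> str:
    """Run a PowerShell command and return its trimmed stdout."""
    result = subprocess.run(
        ["powershell", "-NoProfile", "-Command", command],
        capture_output=True, text=True,
    )
    return result.stdout.strip()

# The same processor info as Settings > System > About.
print("CPU:", ps("(Get-CimInstance Win32_Processor).Name"))

# Empty output here usually means the emergency patch isn't installed.
hotfix = ps("(Get-HotFix -Id KB5061768 -ErrorAction SilentlyContinue).HotFixID")
print("KB5061768:", hotfix or "not found")
```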
(Image credit: MAYA LAB / Shutterstock)

Breaking down the problem – and what to do if you’re already hit, and locked out of your PC

What’s actually happening with this glitch? There’s some problem with the May update for Windows 10 which is causing a process (lsass.exe, a security-related service) to be terminated unexpectedly. This is prompting the automatic repair process to run to try and fix things, though as noted above, your Windows 10 PC may make several repeated failed attempts to install the update before it gives up and (hopefully) rolls back to the previous (April) update.
That’s messy, but things are worse for those using Device Encryption or BitLocker, who could end up stuck at the recovery screen if they don’t have their recovery key to hand.
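Given that, it's worth confirming you actually have your recovery key before touching the May update. Here's a minimal sketch wrapping Windows' built-in manage-bde tool (it must be run from an elevated prompt; keys for devices tied to a Microsoft account should also be viewable via aka.ms/myrecoverykey):

```python
import subprocess

# Lists the key protectors for the C: drive, including the 48-digit
# numerical recovery password if one exists. Requires administrator rights.
result = subprocess.run(
    ["manage-bde", "-protectors", "-get", "C:"],
    capture_output=True, text=True,
)
print(result.stdout or result.stderr)
```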
So, what happens if you’ve missed the boat to install this emergency fix from Microsoft, as you’ve already installed the May update for Windows 10, and now you can’t get into your system (past the recovery screen) to download and apply said fix?
Well, in this case, Microsoft advises that to start Windows 10 successfully, you’ll need to turn off Intel Trusted Execution Technology and another setting, Intel VT for Direct I/O, in your PC’s BIOS. However, that apparently requires entering your BitLocker recovery key (again, problematic if you don’t have it on hand).
If you’re stuck in this particular dead-end, according to Windows Latest, it’s possible to simply turn off Intel Trusted Execution Technology (TXT) in your BIOS, without touching the other setting (Intel VT), and then you can successfully restart your PC to get back to the desktop.
The first step here is to get into the BIOS, and the method to do this varies depending on your PC (check the manuals supplied with your machine). The key to access the BIOS can be one of a number of possibilities, but it’s often F2, F10, or F12, which you press repeatedly as the system just starts to boot up.
Once in the BIOS, you need to find the Intel TXT (or Trusted Execution Technology) setting. This is likely under Security > Virtualization, System Security Settings, or some similar menu pertaining to Security or System Configuration, so check carefully through any such option screens for Intel TXT. When you locate it, turn it off, but as mentioned, you can leave Intel VT for Direct I/O alone.
Now choose the option to save changes to the BIOS and reboot your PC, and you should be back in Windows 10, where you can now install Microsoft’s patch (KB5061768) from the update catalog. Once that’s done, you can go back into your BIOS and switch Intel TXT back on.
All things considered, to avoid any potential messing around like this, it’s a far better idea to install the fix before you grab the May cumulative update for Windows 10.
This is not the first time Microsoft has visited a bug like this on Windows 10 users (or indeed Windows 11 users). It’s also worth remembering that if you’re running Windows 11, and you upgrade to the latest version, 24H2, using a clean install, this applies the Device Encryption feature automatically. Note that an in-place upgrade to Windows 11 24H2 won’t do this, only a clean install of Windows 11 24H2. Furthermore, it has to be an installation linked to a Microsoft account, too, as that’s where the encryption recovery key info is saved (which is why you must be very careful about deleting a Microsoft account, as the key vanishes with it).
Device Encryption is basically a ‘lite’ version of BitLocker, providing encryption for Windows 11 Home PCs, but it only covers the data on the main system drive.
Ransomware remains one of the most disruptive and costly cyber threats facing businesses and public sector organizations. In June 2024, a ransomware attack on Synnovis, an NHS laboratory services provider, resulted in £32.7 million in damages – over seven times its annual profits. This incident caused widespread disruption to medical procedures across London hospitals, further reinforcing the real-world consequences of such attacks.
This is just one example of the many high-profile incidents that have occurred over the years, despite successful efforts by the UK Government and its allies to use various tools to disrupt and counter the operations of ransomware gangs.
One tool under consideration by the UK Government is extending a ban on ransom payments beyond central government to all public sector bodies and Critical National Infrastructure (CNI) operators.
The aim is clear: reducing the financial incentives that sustain ransomware operations. While disrupting the revenue stream for cybercriminals is a logical step, it raises a critical question: will this make the public sector and CNI more resilient?
The pitfalls of paying a ransom

While paying a ransom may seem an appealing way to quickly recover your operations, it is a risky gamble. There is no guarantee that cybercriminals will restore access to systems, refrain from selling your stolen data, or not simply re-exploit an organization. Furthermore, organizations risk making payments to a sanctioned entity that might have obfuscated its affiliation.
If public sector organizations are stripped of the option to pay, they need to be equipped with the resources to defend against and recover from attacks. That might require additional funding to bolster security and resilience programs, timely access to specialist expertise, and the use of real-world threat intelligence to guide decisions. The NHS, for example, presents a particularly complex challenge – could a blanket ban on payments be maintained in cases where a ransomware attack might impact public safety?
Additionally, if bans on ransom payments spread, such payments may be excluded from cyber insurance coverage. Organizations could then face steeper premiums as insurers adjust for potentially increased recovery costs: forensic investigations, system rebuilds, and operational downtime might exceed the cost of a ransom demand.
The supply chain dimension of ransomware attacks

Comprehensive supply chain security should be a critical part of an organization's resilience strategy. Even if an organization has strong cybersecurity defenses, it remains vulnerable if its suppliers lack them.
The government is weighing up whether to extend ransom payment prohibitions to critical suppliers of public sector bodies and CNI. If suppliers fall victim to ransomware, how confident can organizations be that those suppliers can recover quickly without paying?
A ransomware attack on a critical supplier can trigger a domino effect. Many businesses lack visibility into these hidden dependencies, only realizing their exposure when a disruption occurs. A single compromised supplier could paralyze multiple organizations downstream, causing widespread outages and significant business challenges.
Without clear visibility of supply chain risks, businesses can only prepare for a limited range of scenarios, and are unable to identify and prepare for risks arising from dependencies at the fourth-party level and beyond – that is, subcontractors and suppliers' suppliers.
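As a toy illustration of why that visibility matters, here's a short Python sketch that walks a hypothetical supplier graph to surface those deeper tiers – the kind of mapping an organization can only do once suppliers share their own supplier lists:

```python
from collections import deque

# Hypothetical supplier graph: each org maps to its direct suppliers.
suppliers = {
    "our-org":        ["payroll-co", "cloud-host"],
    "payroll-co":     ["print-shop", "cloud-host"],
    "cloud-host":     ["datacenter-ops"],
    "print-shop":     [],
    "datacenter-ops": [],
}

def supplier_tiers(org: str) -> dict[str, int]:
    """Breadth-first walk: tier 1 = direct (third party),
    tier 2+ = fourth party and beyond."""
    tiers, queue = {}, deque([(org, 0)])
    while queue:
        node, depth = queue.popleft()
        for dep in suppliers.get(node, []):
            if dep not in tiers:  # record the shallowest tier we reach it at
                tiers[dep] = depth + 1
                queue.append((dep, depth + 1))
    return tiers

print(supplier_tiers("our-org"))
# {'payroll-co': 1, 'cloud-host': 1, 'print-shop': 2, 'datacenter-ops': 2}
# Note the concentration risk: cloud-host is reached both directly and
# via payroll-co, so one outage hits two paths at once.
```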
Industry-wide collaboration can increase resilience

Regardless of whether ransom payments get banned, the key to enhancing operational resilience against ransomware lies in proactive, collaborative defense. When businesses share information about suppliers, they may spot risks that a single company would miss on its own. By exchanging timely insights, organizations can detect and respond to emerging threats before they escalate into serious incidents.
Mapping out these connections helps reveal concentration risks, where an attack on one supplier could cause widespread damage. Organizations may then initiate discussions with targeted suppliers about their ability to recover from a ransomware attack without the option of paying a ransom.
Additionally, taking a broad view across the industry enables organizations to make informed decisions about their overall supplier base. This may include whether to diversify their set of suppliers to reduce concentration risks, or to introduce additional controls to reduce exposure to ransomware attacks.
Organizations can better prepare for additional risk scenarios that only come to light after consolidating supply chain information with their peers and seeing a comprehensive, holistic view of their supply chain. While many businesses recognize that a supplier might be the weakest link in their overall security, they must also understand that this weak link may lie beyond their current visibility.
Banning ransom payments may remove some of the financial incentives for cybercriminals, but it won’t make ransomware disappear. However, organizations are right to scrutinize their suppliers’ ability to resume operations without paying a ransom. Therefore, the real challenge lies in building organizational resilience – and that requires a shift in mindset.
Businesses must move beyond siloed thinking and treat cybersecurity as a shared responsibility. Only by working collaboratively with peers, suppliers, and regulators, and by broadening visibility across the supply chain to identify and address potential risks, can we reduce the impact of ransomware and make it a less viable business model for criminals.
This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro
At Microsoft's annual developer conference, Build 2025, GitHub launched an updated version of its Copilot AI assistant designed to streamline the integration of computer-aided coding even further.
"GitHub Copilot now includes an asynchronous coding agent, embedded directly in GitHub and accessible from VS Code," the company wrote.
GitHub CEO Thomas Dohmke explained how the agent gets to work in the background when you assign a GitHub issue to Copilot or prompt it in VS Code, adding that it enhanced productivity without putting organizations' security at risk.
GitHub's Copilot agent sits quietly in the background, ready to spring into action

"Having Copilot on your team doesn’t mean weakening your security posture – existing policies like branch protections still apply in exactly the way you’d expect," Dohmke explained.
The new tool works by booting a secure dev environment via GitHub Actions, cloning the repo, analyzing the codebase and pushing to a draft pull request. Users can observe session logs for greater visibility, validation and progress, with the Copilot agent promising to help across feature implementation, bug fixes, test extensions, refactoring and documentation improvements.
Dohmke also noted that users can give the coding agent access to broader context outside of GitHub by using Model Context Protocol (MCP).
The Copilot agent acts much like a human colleague in that it will tag you for review; you can then leave a further comment asking it to make more changes, which it processes automatically.
Emphasizing the enterprise-grade security measures, GitHub noted: "The agent’s internet access is tightly limited to a trusted list of destinations that you can customize." GitHub Actions workflows also need developer approval.
Copilot Enterprise and Copilot Pro+ will be the first account types to get access to GitHub's new powerful agent, with each model request the agent makes costing one premium request from June 4, 2025.
GPT-4.1, GPT-4o, Claude 3.5 Sonnet, Claude 3.7 Sonnet and Gemini 2.5 Pro each account for one premium request, however more powerful and complex models have considerably higher multipliers. For example, one question using o1 costs 10 premium requests, and GPT-4.5 has a 50x multiplier. On the flip side, Gemini 2.0 Flash has a 0.25x multiplier, meaning four questions cost one premium request.
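To put those multipliers in context, here's a quick Python sketch turning the figures quoted above into a cost estimate. The table reflects this article's reporting; GitHub's live billing documentation is the source of truth and may change:

```python
# Premium-request multipliers as quoted above; check GitHub's billing
# docs for the current values before relying on these.
MULTIPLIERS = {
    "gpt-4.1": 1.0,
    "gpt-4o": 1.0,
    "claude-3.5-sonnet": 1.0,
    "claude-3.7-sonnet": 1.0,
    "gemini-2.5-pro": 1.0,
    "o1": 10.0,
    "gpt-4.5": 50.0,
    "gemini-2.0-flash": 0.25,
}

def premium_cost(model: str, requests: int = 1) -> float:
    """Premium requests consumed by `requests` calls to `model`."""
    return MULTIPLIERS[model] * requests

print(premium_cost("gemini-2.0-flash", 4))  # 1.0 - four questions, one premium request
print(premium_cost("o1"))                   # 10.0
print(premium_cost("gpt-4.5"))              # 50.0
```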
A new Quordle puzzle appears at midnight each day for your time zone – which means that some people are always playing 'today's game' while others are playing 'yesterday's'. If you're looking for Tuesday's puzzle instead then click here: Quordle hints and answers for Tuesday, May 20 (game #1212).
Quordle was one of the original Wordle alternatives and is still going strong now more than 1,100 games later. It offers a genuine challenge, though, so read on if you need some Quordle hints today – or scroll down further for the answers.
Enjoy playing word games? You can also check out my NYT Connections today and NYT Strands today pages for hints and answers for those puzzles, while Marc's Wordle today column covers the original viral word game.
SPOILER WARNING: Information about Quordle today is below, so don't read on if you don't want to know the answers.
Quordle today (game #1213) - hint #1 - vowels

How many different vowels are in Quordle today?

• The number of different vowels in Quordle today is 4*.
* Note that by vowel we mean the five standard vowels (A, E, I, O, U), not Y (which is sometimes counted as a vowel too).
Quordle today (game #1213) - hint #2 - repeated letters

Do any of today's Quordle answers contain repeated letters?

• The number of Quordle answers containing a repeated letter today is 0.
Quordle today (game #1213) - hint #3 - uncommon letters

Do the letters Q, Z, X or J appear in Quordle today?

• No. None of Q, Z, X or J appear among today's Quordle answers.
Quordle today (game #1213) - hint #4 - starting letters (1)

Do any of today's Quordle puzzles start with the same letter?

• The number of today's Quordle answers starting with the same letter is 2.
If you just want to know the answers at this stage, simply scroll down. If you're not ready yet then here's one more clue to make things a lot easier:
Quordle today (game #1213) - hint #5 - starting letters (2)

What letters do today's Quordle answers start with?

• N
• C
• D
• D
Right, the answers are below, so DO NOT SCROLL ANY FURTHER IF YOU DON'T WANT TO SEE THEM.
Quordle today (game #1213) - the answers

(Image credit: New York Times)

The answers to today's Quordle, game #1213, are…
If I had chosen COULD as a start word instead of WOULD I would/could have finished today’s Quordle a little more quickly, but that’s my only gripe.
NOVEL was the only word I struggled to find, but with three letters in the correct positions it didn’t take long to uncover it. How was it for you?
The Daily Sequence was far more challenging after I took seven tries to get the first word.
How did you do today? Let me know in the comments below.
Daily Sequence today (game #1213) - the answers

(Image credit: New York Times)

The answers to today's Quordle Daily Sequence, game #1213, are…
A new NYT Connections puzzle appears at midnight each day for your time zone – which means that some people are always playing 'today's game' while others are playing 'yesterday's'. If you're looking for Tuesday's puzzle instead then click here: NYT Connections hints and answers for Tuesday, May 20 (game #709).
Good morning! Let's play Connections, the NYT's clever word game that challenges you to group answers in various categories. It can be tough, so read on if you need Connections hints.
What should you do once you've finished? Why, play some more word games of course. I've also got daily Strands hints and answers and Quordle hints and answers articles if you need help for those too, while Marc's Wordle today page covers the original viral word game.
SPOILER WARNING: Information about NYT Connections today is below, so don't read on if you don't want to know the answers.
NYT Connections today (game #710) - today's words

(Image credit: New York Times)

Today's NYT Connections words are…
What are some clues for today's NYT Connections groups?
Need more clues?
We're firmly in spoiler territory now, but read on if you want to know what the four theme answers are for today's NYT Connections puzzles…
NYT Connections today (game #710) - hint #2 - group answers

What are the answers for today's NYT Connections groups?
Right, the answers are below, so DO NOT SCROLL ANY FURTHER IF YOU DON'T WANT TO SEE THEM.
NYT Connections today (game #710) - the answers

(Image credit: New York Times)

The answers to today's Connections, game #710, are…
I found this Connections to be the easiest for a while – possibly because I own a MacBook that opens like a clam, and I'm forever blocking people, and I take a lot of medicine.
Including CLAM in the category THINGS THAT OPEN LIKE A CLAM seems like a bit of a cheat, and not a very Connections thing to do – but I’m struggling to think what could take its place, other than describing the very particular brands of backpack that open that way rather than in the traditional duffel bag style.
Still, it helped me get a purple group very early, which made me feel clever, so zero complaints from me.
I’m guessing that some PC users may have found FOLDERS ON A MAC puzzling simply because they find anything to do with a Mac puzzling.
As a user of both operating systems, I can reveal that – a “recycle bin” instead of TRASH aside – they are both much the same. Especially if you are just using them to play Connections on!
How did you do today? Let me know in the comments below.
Yesterday's NYT Connections answers (Tuesday, May 20, game #709)

NYT Connections is one of several increasingly popular word games made by the New York Times. It challenges you to find groups of four items that share something in common, and each group has a different difficulty level: green is easy, yellow a little harder, blue often quite tough and purple usually very difficult.
On the plus side, you don't technically need to solve the final one, as you'll be able to answer that one by a process of elimination. What's more, you can make up to four mistakes, which gives you a little bit of breathing room.
It's a little more involved than something like Wordle, however, and there are plenty of opportunities for the game to trip you up with tricks. For instance, watch out for homophones and other wordplay that could disguise the answers.
It's playable for free via the NYT Games site on desktop or mobile.
A new NYT Strands puzzle appears at midnight each day for your time zone – which means that some people are always playing 'today's game' while others are playing 'yesterday's'. If you're looking for Tuesday's puzzle instead then click here: NYT Strands hints and answers for Tuesday, May 20 (game #443).
Strands is the NYT's latest word game after the likes of Wordle, Spelling Bee and Connections – and it's great fun. It can be difficult, though, so read on for my Strands hints.
Want more word-based fun? Then check out my NYT Connections today and Quordle today pages for hints and answers for those games, and Marc's Wordle today page for the original viral word game.
SPOILER WARNING: Information about NYT Strands today is below, so don't read on if you don't want to know the answers.
NYT Strands today (game #444) - hint #1 - today's theme

What is the theme of today's NYT Strands?

• Today's NYT Strands theme is… Three's a crowd
NYT Strands today (game #444) - hint #2 - clue words

Play any of these words to unlock the in-game hints system.

NYT Strands today (game #444) - hint #3 - spangram letters

• Spangram has 13 letters
NYT Strands today (game #444) - hint #4 - spangram position

What are two sides of the board that today's spangram touches?

First side: left, 3rd row
Last side: right, 1st row
Right, the answers are below, so DO NOT SCROLL ANY FURTHER IF YOU DON'T WANT TO SEE THEM.
NYT Strands today (game #444) - the answers

(Image credit: New York Times)

The answers to today's Strands, game #444, are…
Sometimes it can take a while to see the spangram in its entirety. I’d tapped out double, doubles, and doublers before I saw DOUBLE TROUBLE.
Today’s theme is, of course, based on the phrase “two’s company, three’s a crowd”, but I was uncertain what we were looking for at first – so I began by looking for words that would give me a hint.
After seeing the word PATCH I looked for other words with the same A-T-C-H ending and got MATCH, quickly followed by PAIR and PARTNERS.
Incidentally, I asked Google who the most famous TWINS in the world are and it responded with Mary-Kate and Ashley Olsen. My favorite British twins are Xand and Chris van Tulleken, two celebrity British doctors who I struggle to tell apart and whose names I struggle to spell, but who are both wonderful medical mythbusters and podcasters. Not as famous as the Olsens and unlikely to start a boho chic fashion empire, but equally interesting.
How did you do today? Let me know in the comments below.
Yesterday's NYT Strands answers (Tuesday, May 20, game #443)

Strands is the NYT's not-so-new-any-more word game, following Wordle and Connections. It's now a fully fledged member of the NYT's games stable; it has been running for a year and can be played on the NYT Games site on desktop or mobile.
I've got a full guide to how to play NYT Strands, complete with tips for solving it, so check that out if you're struggling to beat it each day.
Marshall, known for its amp-making heritage and rock ‘n’ roll-inspired speakers, is taking its first steps into an all-new product category: soundbars.
The audio brand’s very first soundbar, the Marshall Heston 120, is coming to your living room from June 3, 2025, and will be available for an eye-watering $999 / £899 (about AU$1,599). Marshall’s Dolby Atmos-enabled soundbar is over 100cm long – suitable for the best 55-inch TVs and up – and promises a “colossal audio experience” with both “immersive and spacious sound”.
However, it doesn’t harness a separate sub or rear speakers to supply this, with Marshall instead opting for an all-in-one design. As a result, it feels like a natural competitor to the excellent-sounding Sonos Arc Ultra, which holds the title of ‘best all-in-one soundbar’ in our guide to the best soundbars available today.
Getting hands-on with the Heston 120

(Image credit: Future)

I was lucky enough to be among the very first to hear the Marshall Heston 120 at Marshall’s headquarters in Stockholm. First of all, I was struck by its luxurious, retro design – something I’ve always loved about products like the Marshall Monitor III ANC and the Marshall Emberton III.
Its faux leather outer casing combined with sleek golden details makes it stand out in a market full of chunky black plastic bars.
There’s a lot of attention to detail with design, too. For instance, Marshall has installed three tactile dials for controlling volume, EQ and source. These use haptic feedback for a satisfying user experience, and are made of knurled metal – another nod to Marshall’s amp-related roots. There are also buttons for different sound modes such as Music, Movie, Night, or Voice.
But what you’re probably most keen to find out is: how does the Heston 120 sound? Well, I only got a brief demo, in a space that almost mimicked a living room. But from what I heard, this thing is pretty impressive.
Marshall showed off the Heston 120’s capabilities across three formats: stereo music; Dolby Atmos music; and Dolby Atmos movies. Ed Camphor, Audio Technology and Tuning Lead at Marshall Group, told me that “our focus was very much on getting a good level of polish on every format”, and that certainly seemed to be the case.
(Image credit: Future)

For instance, when listening to stereo music, I was instantly smacked with punchy, impactful bass – the kind that so many soundbars struggle to replicate, particularly without the help of a dedicated sub.
Dolby Atmos music impressed me too – when tuning into bury a friend by Billie Eilish, vocal pans were tracked accurately with rumbling, deep bass and haunting screams piercing through.
Finally, we watched a portion of Star Wars: Episode I - The Phantom Menace on Disney Plus. The directionality of soaring spaceships in one scene was delivered with precision, and the soundbar recreated big sound effects such as ships overtaking and crashing cleanly, in a true-to-life manner. Unfortunately, Jar Jar Binks’ dialogue was crystal-clear, all the way through the scene.
Of course, these are only my initial impressions from a demo, so if you want my full and unfiltered thoughts, you’ll have to wait for my full review. That’s coming soon…
Into the nitty gritty…

(Image credit: Future)

So, in terms of tech specs, the Marshall Heston 120 makes use of 11 active drivers, which include height channels to capture the verticality needed for ‘true’ Dolby Atmos and side channels for truly expansive audio. Altogether, you’re getting a maximum power output of 150W in a 5.1.2 configuration. Of course, there’s Dolby Atmos compatibility for movies and music alike, but the Heston 120 also supports DTS:X content, which is an advantage it has over the Sonos Arc Ultra (Sonos continues to avoid DTS support).
There are so many ways to play through the Heston 120 too. There are HDMI eARC and HDMI passthrough ports (another plus it has over the Arc Ultra, which only has one HDMI port), RCA stereo and mono slots, as well as both Bluetooth 5.3 and Wi-Fi 6 compatibility.
You can play music over Apple AirPlay 2, and Marshall has also integrated a range of streaming services, including Spotify Connect, Tidal Connect and Airable. These can also be bound to preset buttons for easy access. There’s even Auracast.
One more nice little nugget of info is that Marshall will revamp its companion app in tandem with the launch of the Heston 120 soundbar. This unlocks detailed EQ options, remote control of volume, source and sound modes, as well as room calibration options to get the best sound for your living space.
The app is so fleshed out, in fact, that the Heston 120 will not come with a separate remote – all you need is your phone and you’ll be ready to go.
Marshall may be launching the Heston 120 as a standalone soundbar, but it has confirmed that later down the line, you’ll be able to snap up the Heston Sub 200 – a separate subwoofer – to really feel that low-end eruption.
On top of that, a smaller soundbar, the Heston 60, will be available to those who are working with a little less room. Both will release later in 2025 and we’ll be sure to keep you updated with more details as they come.
The Marshall Heston 120 soundbar is available for pre-order now and will go on sale from June 3, 2025 via Marshall’s own website. It will later become available at select retailers from September 16, 2025.
As artificial intelligence becomes more integral to businesses across all industries, small and medium-sized companies are slowly integrating it. In 2024, only 26% of these types of businesses used the technology, despite 76% recognizing its value.
However, as AI's benefits become more pronounced, these businesses will only benefit from integrating it into their operations. As a critical tool, AI can help these businesses build and foster stronger relationships with clients, develop innovative solutions that allow them to compete better with larger institutions and increase efficiency, allowing them to focus on business growth.
Transformative potential AI has for small- and medium-sized businesses

In the coming years, small and medium-sized businesses must incorporate AI to remain competitive in an ever-changing business landscape. The good news is that AI can enable smaller organizations to break through competitively and provide more personalized offerings to clients across industries.
The impact across industries is telling. In the accounting and finance industry, the shift to AI can empower businesses to move from traditional number-crunching services to personalized advisory relationships. Within sales and marketing, AI can go beyond providing predictive insights and can offer real-time personalization to improve sales conversion rates.
AI can provide employees with seamless service and connectivity in the IT industry, where building a digital workplace is the standard. Furthermore, within customer service, AI-powered agents and chatbots help maintain a consistent brand voice across all client engagements while automating inquiries and communications to provide answers at a previously impossible pace.
The bottom line: no matter the industry, AI is improving business outcomes, and small- to medium-sized businesses have much to gain from the technology.
Where to start: clean data is crucial for successful AI integration

It is clear that AI has many benefits, but an AI algorithm is only as good as the data it learns from. Clean, well-structured data is essential for AI models to function accurately and efficiently. Without it, AI systems can produce biased, misleading or outright incorrect results. Data must be accurate, complete, consistent, unique, valid and timely.
One of the most significant risks of poor data quality is bias. If an AI model is trained on incomplete, inconsistent or skewed data, it will replicate and even amplify those biases. Overrepresenting data from one source while unnecessarily reducing the representation of another can have serious consequences, from discriminatory hiring practices to inaccurate medical diagnoses.
In short, AI models rely on patterns within datasets to make predictions and decisions. The outputs will be unreliable if the data contains errors—such as duplicates, missing values or incorrect labels.
Furthermore, inconsistent and inaccurate data could slow down processing times, increase business costs, and require extensive human intervention to correct mistakes. On the other hand, when data is clean, AI models can train faster and operate more effectively, saving both time and resources. Whether it’s customer interactions, financial transactions or healthcare records, people need to know that AI-driven decisions are based on reliable information.
Poor data quality erodes confidence, while clean data strengthens the credibility of AI systems. Good, clean data is the foundation of successful AI. Without it, even the most sophisticated models will fail to deliver meaningful results. Ensuring high data quality should be a top priority for any organization looking to use AI.
Steps to take to improve data quality

For small and medium-sized businesses to reap the benefits of AI, they must use modern data management tools to guard data quality, including implementing high data quality standards, data structuring and data governance policies.
The first step is to clean and structure the data into a format that AI algorithms can efficiently process and analyze to extract meaningful insights and make accurate predictions. Data is gathered from various sources, including databases, files and application programming interfaces. Once collected, data cleaning is performed to remove inconsistencies, errors and irrelevant information. The data is then converted into a format suitable for AI algorithms, such as numerical values, vectors or graphs.
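As a minimal illustration of that clean-and-convert step, here's a short pandas sketch; the dataset and column names are entirely hypothetical:

```python
import pandas as pd

# Hypothetical raw export with the classic problems: duplicates,
# inconsistent labels, missing values, and numbers stored as text.
raw = pd.DataFrame({
    "customer_id":   [101, 101, 102, 103],
    "signup_date":   ["2024-01-05", "2024-01-05", "2024-02-11", None],
    "plan":          ["Pro", "Pro", "basic", "PRO"],
    "monthly_spend": ["49.99", "49.99", None, "99.00"],
})

clean = (
    raw.drop_duplicates()  # uniqueness
       .assign(
           signup_date=lambda d: pd.to_datetime(d["signup_date"]),      # validity
           plan=lambda d: d["plan"].str.lower(),                        # consistency
           monthly_spend=lambda d: pd.to_numeric(d["monthly_spend"]),   # numeric inputs for models
       )
       .dropna(subset=["signup_date"])  # completeness
)
print(clean)
```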
Data structuring techniques vary based on the type and purpose of the data. For example, relational databases store data in tables with rows and columns, making them ideal for structured data. In contrast, NoSQL databases offer more flexibility by storing data in various formats, making them suitable for unstructured or semi-structured data.
Finally, data storage ensures that the structured data is efficiently organized and accessible for AI processing. Each step is critical for optimizing AI performance and delivering accurate, meaningful insights.
Ensuring data governance

Organizations need a robust data governance framework to maintain high data quality. This internal governance structure – like a cross-functional committee or task force – sets policies, processes and accountability measures.
First, this framework should assign clear roles and responsibilities for managing data, which will help ensure accountability and safeguard critical information. Next, businesses must enforce data controls and standardize formatting and data structures across systems to promote consistency.
Once organizations have established their framework, they should maintain real-time updates and scheduled data refreshes, keeping data relevant. Additionally, compliance with validation rules and predefined formats must be maintained.
Finally, businesses of all sizes must provide user-friendly interfaces, clear documentation and efficient retrieval systems to ensure data is accessible and valuable. Comprehensive data coverage across all relevant systems and processes is necessary.
AI has the potential to transform small and medium-sized businesses significantly. However, the success of these AI initiatives depends heavily on the quality and structure of the data they utilize. By improving data quality through robust standards, effective structuring, comprehensive governance policies and modern management tools, these businesses can fully leverage AI to gain a competitive edge and drive innovation.
We've featured the best small business software.
This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro
O2 UK has fixed a vulnerability in its VoLTE and Wi-Fi Calling implementations that allowed malicious actors to discover people’s locations and other identifiers.
Back in 2017, the company introduced its IP Multimedia Subsystem (IMS) service, called "4G Calling", which provides better audio quality and more reliable phone calls. However, security researcher Daniel Williams recently analyzed the feature and discovered that, during a call, he was able to pull all sorts of information about his conversation partner straight from the network.
That data includes the International Mobile Subscriber Identity (IMSI), the International Mobile Equipment Identity (IMEI), and cell location.
Applying a fix"The responses I got from the network were extremely detailed and long, and were unlike anything I had seen before on other networks," Williams said in a detailed blog post. "The messages contained information such as the IMS/SIP server used by O2 (Mavenir UAG) along with version numbers, occasional error messages raised by the C++ services processing the call information when something went wrong, and other debugging information."
Fortunately, the vulnerability had not been present since the service's 2017 launch; it was introduced in February 2023.
To get cell location, Williams used the Network Signal Guru app on a Pixel 8 device. He pulled raw IMS signaling messages during a call and used them to find the last cell tower the call recipient had connected to. He then cross-referenced that data with a map of cell towers, pinpointing a person's location to an area of roughly 100 m² in an urban environment. In a rural environment, though, the information was somewhat less precise.
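To give a rough sense of that final cross-referencing step, here's a minimal Python sketch. Everything in it – the identifiers, the tower database and the coordinates – is a made-up illustration rather than Williams' actual tooling or O2's real data; the point is only that a cell identifier plus a public tower map yields a location:

```python
# Everything below is a made-up illustration: the identifiers, the tower
# database and the coordinates are not from the actual O2 research.
TOWER_MAP = {
    # (mobile country code, network code, cell ID) -> (latitude, longitude)
    ("234", "10", "0x1A2B3C"): (51.5074, -0.1278),
}

def locate_from_signaling(mcc: str, mnc: str, cell_id: str):
    """Match a cell identifier pulled from signaling data to a mapped tower."""
    return TOWER_MAP.get((mcc, mnc, cell_id))

print(locate_from_signaling("234", "10", "0x1A2B3C"))  # (51.5074, -0.1278)
```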
Williams said he reached out to O2 UK multiple times and, at first, got no response. The company later reported the issue had been fixed, which Williams also confirmed.
"Our engineering teams have been working on and testing a fix for a number of weeks – we can confirm this is now fully implemented, and tests suggest the fix has worked, and our customers do not need to take any action," Virgin Media O2 told BleepingComputer.
Via BleepingComputer
You might also like
Cybercriminals are distributing a tainted version of a popular password manager, through which they're able to steal data and deploy ransomware. This is according to researchers at WithSecure Threat Intelligence, who recently observed one such attack in the wild.
In an in-depth analysis published recently, the researchers said a client of theirs downloaded what they thought was KeePass, the popular password manager. The client had clicked on an ad from the Bing advertising network and landed on a page that looked exactly like the KeePass website.
The site, however, was a typosquatted copy of the legitimate KeePass site. Since KeePass is open source, the attackers were able to keep all of the legitimate tool's functionality intact – but with a little extra Cobalt Strike on the side.
The fake password manager exported all of the saved passwords to a cleartext database, which was then relayed to the attackers through the Cobalt Strike beacon. The attackers used the stolen credentials to access the network and deploy ransomware, which is when WithSecure was brought in.
WithSecure said that the campaign has the fingerprints of an initial access broker (IAB), a type of hacking group that obtains access to organizations and then sells it to other hacking collectives. This particular group is most likely associated with Black Basta, an infamous ransomware operator, and is now being tracked as UNC4696.
This group was previously linked to Nitrogen Loader campaigns, BleepingComputer reported. Older Nitrogen campaigns were linked to the now defunct BlackCat/ALPHV group.
So far, this was the only observed attack, but that doesn’t mean there aren’t others, WithSecure warns: "We are not aware of any other incidents (ransomware or otherwise) using this Cobalt Strike beacon watermark – this does not mean it has not occurred."
The typosquatted website hosting the malicious KeePass build was still up and running at the time of writing, and was still serving malware to unsuspecting users. In fact, WithSecure said that behind the site sat extensive infrastructure, created to distribute all sorts of malware posing as legitimate tools.
Via BleepingComputer
You might also like
It seems that the big robot vacuum manufacturers all got together and decided that what we really need is a bot with a mechanical arm. One that can move clutter out of its path as it cleans, and even sort your mess out and relocate it where it needs to go. We saw a few arm-equipped robot vacuums at this year's CES – the event where everyone showcases their upcoming launches – but Roborock surprised everyone by announcing that its own model wasn't just at the wacky invention stage; it would actually be going on sale within the year.
Fast-forward a few months, and the Saros Z70 is indeed now available to buy. So does it deliver on its potential, or has Roborock rushed it through before the tech is ready? Is this innovative bot ready to compete with the rest of the best robot vacuums on the market? I've spent the past two weeks testing it out – you can get the full low-down in my Roborock Saros Z70 review.
While it's not perfect, there are plenty of great things about it – including a few that surprised me. Read on for four things I loved about this handy robovac, plus three that I think still need work if it's going to be genuinely useful.
4 things I loved 1. The pincering is excellentThe hardware part of the pincer arm is very well designed. In my tests I found I could remote-control the robot over to a bit of clutter and tell it to pick it up, and it would – almost without fail – recognize the object and adjust its positioning and pincer to grab it. I could then resume control and drive the bot to wherever I wanted the clutter to go.
Should the pickup fail, Roborock has included manual adjustment options so you can operate the arm yourself. These are intuitive and precise, and the grip is gentle but firm. The arm also tucks neatly away behind a hatch when it's not in use, so it can't get caught on anything while the bot's on its travels. There's big potential for people with limited mobility here.
2. There are plenty of safety measuresMost people I talk to about this robovac seem afraid that the OmniGrip will be overzealous and try and tidy away the cat. Roborock has built in plenty of safety features to ensure this doesn't happen. First, all the arm features are off by default, so nothing at all will happen until you specify exactly what you want it to do.
It's designed only to try and tidy very specific objects, having identified them using Roborock's (generally excellent) object recognition tech, and the arm has a weight sensor that prevents it from lifting objects over 300g. The pincering itself is designed to be 'firm yet gentle', to prevent damage to objects, so kind of like one of those fairground claw games, but with a much higher success rate. Finally, there's a physical 'Emergency stop' button on the robot itself, and a child lock.
3. It's easy to useThis is a new and potentially intimidating bit of tech, so kudos should go to Roborock for making it impressively accessible. It has placed the robot arm options front and center in the companion app, encouraging users to explore and become familiar with them. Plus, the controls themselves are logical and intuitive.
4. It's an unexpectedly great security cameraA lot of high-end robot vacuums can double as home security cams, but you're a little limited by the fact that your view is at ground level. Here, Roborock has added a camera on the arm itself. Not only can the arm reach much higher up, it can also tilt vertically, offering a much more expansive field of view than the front-mounted camera alone. Of course, you can only spy on what's happening inside your home, but it's useful nonetheless.
3 things that need improving 1. It doesn't work on its ownWhile the remote-control-assisted pincering worked very well in my tests, the Saros Z70 really needs to be able to tidy up unassisted if it's to be genuinely useful to most people. Theoretically, you can ask the robot to identify objects suitable for tidying during a whole-home clean, then, once it's finished, embark on a second run to pick them up and put them in a designated spot.
Unfortunately, this bit doesn't really work yet. It seems all the conditions need to be absolutely perfect for the process to succeed: the bot needs to see the items and correctly identify them, then be able to find them again, then successfully pick them up, and then find its way to the correct relocation spot. I haven't yet managed a run in which every one of these steps succeeded.
Roborock also told me that the process has a lower success rate on carpet than hard floors, due to a "hardware limitation". That feels like a significant caveat.
If something appears to get stuck on the arm, then all other functions are locked until you manually reset it by pressing physical buttons on the robot. For instance, on one occasion during my tests the arm picked up a sandal, then as it rotated with it, the sandal got caught on a doorstop and the strap twisted, so when the bot tried to drop it, it couldn't. I had to go and rescue it before I could proceed. It's probably a logical safety measure, but it's not ideal to have to physically get involved to fix the matter.
I'm hopeful the automation functions will improve with updates, but right now, this bot can't really be left alone to tidy for you.
2. It can only pick up a few thingsAt the time of writing, the list of supported objects is very short: sandals / light slippers, socks, small towels, and crumpled tissue paper. It makes sense that Roborock would be cautious about adding more objects, because it needs to be confident the bot can correctly identify them and won't end up gripping something it shouldn't. But it does limit the robot's usefulness a little, as does the weight (and presumably size) limitation.
It's not so much Roborock's fault as a limitation of the form factor. Logically, it follows that the bot won't be able to tackle anything too big or heavy. But it still affects how helpful such a design might be in the longer term.
(Image credit: Future) 3. The priceAt the moment, the Roborock Saros Z70 comes with a price tag of $2,599 / AU$3,999 (UK price TBC, but potentially around £1,950 based on what it costs elsewhere). It's the most expensive robovac we've tested, by some margin, and out of reach of most people. For now, it's the only robot vacuum on the market to feature a robotic arm, so an eye-watering list price isn't surprising. However, I'm not sure I'd buy it at that price, at least until some of the issues above have been ironed out.
Because the robot vacuum market is competitive, with new models being released regularly, I'm used to seeing good deals and prices dropping fairly quickly as even better bots hit the market. There look to be more arm-equipped robovacs in the pipeline from other brands, so if the idea proves a hit, we might see a more competitive pricing landscape emerge. I suspect it'll take a while, though.
You might also like...
Nanoleaf has launched two new smart lights to liven up your home indoors and out: the Nanoleaf Rope Light, which you can position on your wall in any shape you like, and which can be set to your choice of colors and gradients, and the Nanoleaf Solar Garden Lights, which add a little glow to your yard after dark.
Although Nanoleaf makes smart bulbs to fit your ordinary ceiling lights and lamps, the company is best known for fun and funky products like the Nanoleaf x Fantaqi EXPO illuminated display cases for showing off your collectibles, the Nanoleaf Ultra Black Shapes that can be arranged on your walls in any way you see fit, and the Nanoleaf Smart Holiday String Lights, which add festive cheer indoors or out.
The latest addition to Nanoleaf's collection of fun interior smart lights is the Rope Light – a five-meter LED string that you can bend and twist into any shape, and attach to your wall.
(Image credit: Nanoleaf)The Nanoleaf Rope Light has 420 LEDs and 70 addressable color zones, allowing you to create smooth gradients and animations. It's compatible with the Nanoleaf app, which allows you to apply scenes or make your own custom color palettes by drawing with your fingertip.
You can also use the Rope Light to mirror the colors on your PC monitor via the Nanoleaf desktop app. We'll be testing it soon to see how it compares with the best Ambilight alternatives, and whether it deserves a spot in our guide to the best smart lights.
It costs $69.99 (about £50 / AU$110), and is available now direct from Nanoleaf.
How does your garden glow?The Rope Light looks great, and I look forward to testing it, but personally I'm most excited by the Nanoleaf Solar Garden Lights. These weather-resistant smart lights resemble a bunch of tulips and can be staked into the ground wherever you like.
As the name suggests, each cluster of lights is connected to a solar panel, which you can position to catch the optimum amount of sunshine, and has a subtle black and gray finish to blend in with the plants and ornaments in your garden.
They'd look great lining the edge of a garden path but, judging by Nanoleaf's photos, the Solar Garden Lights look particularly striking when placed right beside a plant, where the lights seem to be growing out of it organically. Unlike most garden lights, I can imagine them working well in a pot, so you can enjoy them even if you only have a balcony or paved yard.
(Image credit: Nanoleaf)They aren't compatible with the Nanoleaf app (which is perhaps a bit of a shame), but their built-in daylight sensors mean they can be set to come on only after dark, and they come with a remote control that lets you change their colors, apply scenes, and set timers.
The Solar Garden Lights are also available today direct from Nanoleaf, and cost $49.99 (about £40 / AU$80) for a set of two light clusters.
You might also like