In the ever-evolving landscape of financial markets, the introduction of artificial intelligence (AI) has been a game-changer in the fight against market manipulation. As trading practices diversify, globalization expands, and new businesses enter the markets daily, the complexity of monitoring and maintaining fair play has increased exponentially.
However, as global exchanges have invested in adopting and developing AI tools, so too have their criminal counterparts. Market manipulators have grown more sophisticated, employing highly advanced pump-and-dump and spoofing strategies to tilt market conditions in their favor.
In the effort to get ahead of illicit activity, the human immune system has emerged as an unlikely source of inspiration for enhancing AI-powered detection tools.
Detecting and Preventing Market Manipulation
AI's role in financial markets is akin to a vigilant sentinel, tirelessly scanning vast amounts of data for signs of manipulation. By leveraging machine learning algorithms and complex pattern recognition, AI systems can identify irregularities and potential manipulative behaviors that would be nearly impossible for humans to spot due to the sheer volume and speed of high-frequency trading.
These AI systems are trained on historical data, learning from past instances of market manipulation to recognize the subtle signals that may indicate foul play. They can monitor multiple markets simultaneously, track the behavior of individual traders, and correlate seemingly unrelated events to uncover hidden patterns. This comprehensive monitoring capability is crucial in a landscape where a single manipulated trade can have far-reaching consequences.
Despite its potential, applying AI to market surveillance poses many challenges. Financial markets are complex, dynamic systems with a multitude of variables at play. The bespoke nature of the AI models required for each scenario means there is no one-size-fits-all solution: AI systems must be tailored to the specific characteristics of each market and the types of manipulation that may occur within them.
Moreover, the AI must be capable of adapting to new strategies employed by market manipulators. Just as viruses evolve to bypass the immune system, manipulative tactics evolve to evade detection. This necessitates AI systems that can learn and adapt in real time, a feat that requires significant computational power and advanced algorithms.
Learning from the Human Immune System
The human immune system is a marvel of natural engineering, capable of identifying and neutralizing a vast array of pathogens. It is this remarkable adaptability that has inspired the development of AI systems for market surveillance. The immune system's ability to remember past infections and recognize new ones that share similar characteristics is mirrored in the way AI can learn from historical market data and adjust to new forms of manipulation.
Just as the immune system has different mechanisms to deal with various threats, AI systems can employ a range of strategies to tackle different types of market manipulation. Such mechanisms are known collectively as Artificial Immune Systems (AIS): computational intelligence methods modelled on the immune system. These systems develop a set of pattern detectors by learning from normal data, incorporating an inductive bias that applies exclusively to this baseline data, which may shift over time due to its non-stationary nature.
The Dendritic Cell Algorithm (DCA), a biologically inspired subset of AIS, mirrors the human immune response by monitoring, adapting, and identifying potential threats. From statistical analysis to behavioral analytics, AI leverages this adaptive framework to help preserve the integrity of financial markets.
In recently published research, we explored how the DCA can identify market manipulation patterns. The model performs anomaly detection on a selective set of outputs obtained from the DCA while examining multiple types of manipulative patterns. The uniqueness of this approach lies in reducing the dimensions of the input dataset and avoiding inconsistency in selecting thresholds for the parameters involved.
It is also unbiased towards specific types of manipulation, as no knowledge about the injected anomalies is provided to the model a priori. The distinctiveness of the results is visible when compared with existing models across a variety of evaluation metrics, from area under the ROC curve to false alarm rate.
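For readers who want a feel for how the DCA works in practice, here is a minimal sketch of its core loop in Python. The signal categories (PAMP, danger, safe), the weight values, the migration threshold, the random cell-sampling scheme, and the toy data stream are generic textbook-style assumptions chosen purely for illustration; they are not the features or parameters used in the published model.

```python
import numpy as np

# Minimal, illustrative Dendritic Cell Algorithm (DCA) anomaly scorer.
# The weights below map each input signal (PAMP, danger, safe) to the
# three values a cell accumulates: costimulation (csm), semi-mature
# (semi) and mature (mat). They are textbook-style assumptions, not
# the parameters of the published model.
WEIGHTS = {
    "pamp":   (2.0, 0.0,  2.0),
    "danger": (1.0, 0.0,  1.0),
    "safe":   (2.0, 3.0, -2.0),
}

def dca_scores(stream, n_cells=10, migration_threshold=10.0, seed=0):
    """stream: iterable of (antigen_id, pamp, danger, safe) tuples.

    Returns each antigen's MCAV (mature-context antigen value): the
    fraction of its presentations made in a 'mature', i.e. anomalous,
    context. Scores near 1 suggest an anomaly."""
    rng = np.random.default_rng(seed)
    cells = [{"csm": 0.0, "semi": 0.0, "mat": 0.0, "antigens": []}
             for _ in range(n_cells)]
    counts = {}  # antigen_id -> (mature presentations, total presentations)

    for antigen, *signals in stream:
        cell = cells[rng.integers(n_cells)]    # each event sampled by one cell
        cell["antigens"].append(antigen)
        for (w_csm, w_semi, w_mat), s in zip(WEIGHTS.values(), signals):
            cell["csm"] += w_csm * s
            cell["semi"] += w_semi * s
            cell["mat"] += w_mat * s
        if cell["csm"] >= migration_threshold:  # cell migrates and presents
            mature = cell["mat"] > cell["semi"]
            for a in cell["antigens"]:
                m, t = counts.get(a, (0, 0))
                counts[a] = (m + int(mature), t + 1)
            cell.update(csm=0.0, semi=0.0, mat=0.0, antigens=[])

    return {a: m / t for a, (m, t) in counts.items()}

# Hypothetical toy stream: trader "T7" co-occurs with elevated PAMP and
# danger signals (e.g. bursts of cancelled orders); trader "T1" looks normal.
events = [("T1", 0.1, 0.2, 1.0)] * 50 + [("T7", 1.0, 0.8, 0.1)] * 50
print(dca_scores(events))  # expect an MCAV near 0 for T1 and near 1 for T7
```

In a surveillance setting, each antigen might be an order or trader identifier, with the signals derived from order-book features such as cancellation rates; antigens whose MCAV approaches 1 would be flagged for review, and the resulting scores can be evaluated with exactly the metrics mentioned above, such as area under the ROC curve.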
The Balance Between Human Oversight and AI Empowerment
While AI can process and analyze data at speeds and volumes beyond human capability, it is not infallible, as it lacks the human ability to understand nuances. The balance between human oversight and AI empowerment is critical in stock exchange surveillance. Human expertise is essential for interpreting the findings of AI, providing context, and making judgement calls on whether identified patterns truly constitute manipulation.
Humans can also provide the ethical and regulatory framework within which AI operates, ensuring that surveillance practices remain fair and just. As financial markets continue to grow in complexity, the need for sophisticated surveillance tools becomes ever more pressing.
AI, with its ability to learn from the past and adapt to new threats, offers a powerful solution to this challenge. However, it is the combination of AI's analytical prowess and human expertise that will ultimately ensure the fairness and integrity of financial markets. As technology continues to advance, this partnership will only become stronger, safeguarding the financial ecosystem against those who seek to undermine it.
Gaming accessory brand Logitech G has announced the Logitech G522 Lightspeed, a new wireless gaming headset intended to supersede the popular Logitech G733 Lightspeed.
The G522 Lightspeed features redesigned earcups, with a wider shape and an added layer of memory foam for enhanced comfort. It has a lightweight, adjustable fabric headband, which now rests flatter than its predecessor and has built-in ridges for better cooling.
The exterior of each ear cup features four eye-catching customizable RGB lighting zones, which can be tweaked to the color of your choice in the Logitech G Hub desktop software. It's also compatible with the Logitech G mobile app.
Under the hood, the headset is packing Logitech G's highest-fidelity 40mm Pro-G drivers with 24-bit / 48kHz signal processing for enhanced audio clarity and detail.
The headset comes bundled with a removable omnidirectional microphone, which offers an impressive 16-bit / 48kHz sample rate. It's the same microphone found in the excellent, but much more expensive, Astro A50 X, which impressed with its crystal clear recordings in my hands-on testing.
On the Logitech G522 Lightspeed, the microphone has the added benefit of a built-in red LED indicator that illuminates when it's muted.
As its name would suggest, the headset can connect to PC or PlayStation 5 via Logitech's Lightspeed wireless dongle (which is included in the box), but also supports traditional Bluetooth for the aforementioned platforms in addition to Nintendo Switch and mobile. There's also the option for wired play via its USB Type-C connector.
Logitech claims up to 40 hours of battery life with the default lighting on, or up to 90 hours with it disabled, which is a pretty impressive figure. It's not quite the up to 200 hours promised by the competing HyperX Cloud III S, but it's still more than enough juice for a few weeks' worth of intense gaming sessions.
The Logitech G522 Lightspeed hits shelves on June 16 in white or black colorways. It costs $179 / £139.99 / AU$299.95, putting it in the midrange price bracket.
Its expansive feature set seems very promising, but only time will tell whether it becomes one of the best PC gaming headsets or best PS5 headsets around.
While hybrid work models have helped teams collaborate across locations, persistent challenges remain: teams are still wrestling with misalignment and communication gaps that slow progress and delay notable outcomes.
To build more adaptive, high-performing teams—regardless of where they work—organizations are turning to Agile practices. Agile's emphasis on continuous feedback, quick adjustments, and strong collaboration makes it an ideal framework for bridging the gaps that often arise in hybrid work environments.
But embracing Agile isn’t a one-and-done fix. As work evolves, so should the way we apply these methods. The real opportunity isn’t just about keeping up, it’s about using these changes as a launchpad for better ways of working.
Breaking free from inefficiencies
According to a recent survey by Lucid Software, nearly half of UK businesses report that teams can take up to three hours to decide on how to move forward on business goals, highlighting that meetings may drag on and clear next steps often don’t follow.
The survey also revealed miscommunication and poor planning are significant barriers to productivity, with 41% of respondents citing unclear project requirements, scope changes and miscommunication with colleagues as the top reasons for redoing work. These issues not only demand extra time and effort but also leave 1 in 5 workers feeling that their team’s plans rarely align with the company’s strategic goals.
While 45% of workers believe that adopting new collaboration tools could significantly cut decision-making time, tools alone won’t solve the problem. To truly address communication challenges, a shift in mindset is crucial.
Agile frameworks offer exactly that. By breaking work into smaller, manageable increments and fostering regular feedback cycles, Agile enables teams to adapt quickly to change, clarify goals, and align efforts more effectively across stakeholders. This approach reduces wasted time, minimizes costly misalignments, and accelerates progress towards strategic objectives.
Agile in motion
Agile practices have been gaining popularity, with 51% of respondents indicating their organizations actively use Agile to organize and deliver work. Yet despite its growing presence, only 49% of UK businesses have adopted Agile, and even among those that have, the benefits aren’t consistently felt across teams. One big reason? Resistance to change.
Much of that resistance stems from middle management. Middle managers are often caught between evolving expectations from leadership and long-standing habits rooted in traditional management practices. The shift to Agile requires more than just new skills; it’s about evolving how we perceive, interpret, and respond to the complexities of work and leadership.
This resistance is often driven by fear of losing control or uncertainty about how to navigate this shift, making it crucial to provide middle managers with the right tools and support to embrace the new Agile mindset.
This is where mindset matters. Adopting agility requires both horizontal development (e.g. learning a new topic or tool) and vertical development (e.g. holding a new perspective). The concept of vertical development, popularized by researchers like Robert Kegan and Lisa Lahey, expands a person’s ability to lead amid complexity. It enables them to interpret shifting conditions, not just follow a fixed playbook. For Agile to stick, organizations must invest in both forms of development for those involved.
To enhance the effectiveness of Agile, leaders should work to create buy-in from all team members and ensure that Agile practices are consistently applied across the organization with meaningful training and solutions that facilitate successful implementation. This can start by identifying key change agents within teams who can help model and reinforce Agile principles, while also setting up regular feedback loops to accelerate progress and address any obstacles. When done right, Agile isn’t just a framework—it’s a foundation for better, faster, more human ways of working.
The power of a common visual framework
Too often, traditional methods persist simply because ‘it’s the way it’s always been done.’ But as work grows more complex and distributed, those default approaches, especially meetings, aren’t enough to keep everyone aligned.
Team meetings remain the go-to method for tracking progress, with 74% of respondents relying on them. However, this approach doesn’t work equally well for all roles. Only 53% of entry-level employees report having high visibility into their work, indicating that even regular stand-ups may not provide everyone with the clarity they need. This highlights a critical need for more effective approaches to decision-making and alignment — ones that don’t depend on everyone being in the same room.
That’s where visual collaboration solutions come in. Agile teams are already ahead of the curve here — 69% report using visual tools as opposed to only 41% of general knowledge workers. Visual collaboration supports Agile by providing a shared, always-on workspace that enables teams to track tasks in real-time, visualize workflows and adjust priorities as needed.
What excites me most is seeing how these tools are transforming team dynamics. Team members who might stay quiet during video conferencing calls now actively shape ideas and decisions through visual contributions, creating a stronger sense of ownership and alignment. This visual engagement fosters a more collaborative and responsive environment, key principles of Agile practices.
Forging ahead with a united workforce
Even if teams interpret and apply Agile practices differently, the underlying principles can still guide better ways of working. Leaders may feel confident in their team’s direction, but when newer employees don’t understand the direction or feel misaligned with the company’s values, that misalignment can ripple across the organization. In fact, what those employees experience often reveals how well Agile is truly being lived—not just implemented.
For example, if a team struggles to prioritize or frequently misses deadlines, it may signal that Agile practices aren’t being fully integrated, even if they’re technically in place. For any organization, bridging these gaps is essential. Leaders should lean on shared tools and frameworks that promote clarity, build skills and foster better communication. A visual roadmap, for instance, can make abstract goals clearer by laying out specific, achievable steps, showing progress, and aligning team efforts.
Addressing these challenges early helps prevent problems like misalignment and employee burnout, ultimately enabling teams to accelerate work and drive efficient outcomes.
Start here: a low-barrier entry point to agility
Not every organization is ready for a full agile transformation. That’s okay. You don’t have to adopt every practice to benefit from agile thinking. Start small by using a shared visual board to clarify weekly priorities. You can also replace a long meeting with asynchronous feedback using sticky notes or comments. Most importantly, ask your team what’s blocking progress and listen.
Agility isn’t the goal. Value is. But agility is how you get there, consistently, sustainably, and together. Instead of trying to replicate the office in a hybrid model, it’s time to rethink how work can happen more intentionally and effectively. The future belongs to those who can align quickly, learn continuously, and move forward with shared purpose. That’s how agile teams stay aligned, fast, and focused.
The seven-year wait for fans of The Librarians franchise is finally over, as The Librarians: The Next Chapter starts a new page. Originally intended for The CW, the show now airs on TNT in the US, where viewers can also tune in via Sling TV. Read on for how to watch The Librarians: The Next Chapter online from anywhere with a VPN.
Premiere date: Sunday, May 25
US TV channel: TNT
Stream: Sling TV (US) | CTV (CA) | Foxtel (AU)
Use NordVPN to watch any stream
21 years after the first The Librarian film hit the small screen, this latest spin-off series sees a new time-traveling addition to the popular franchise. Callum McGowan (Marie Antoinette) leads the fantasy fun as a historical librarian who finds himself in the modern day.
McGowan plays Vikram Chamberlain who, TNT says, 'inadvertently releases magic across the continent' before being given a new team to help him 'clean up the mess he made'. As the official trailer suggests (which you can watch further down this page), his team of Librarians will be challenged by a plague of lost souls, demons and other evil forces as Chamberlain seeks a way back to his own time.
We’ve got all the info on where to watch The Librarians: The Next Chapter online and stream episodes from anywhere.
How to watch The Librarians: The Next Chapter online in the US
US viewers can watch The Librarians: The Next Chapter on TNT. It kicks off with a '2 Night Series Premiere' on Sunday, May 25 and Monday, May 26 — both at 11.30pm ET / 8.30pm PT. After that, the remainder of the 12-episode run will go out weekly on Mondays at 9pm ET/PT. You can see a full schedule at the bottom of this article.
Don’t have cable? You can also watch TNT on Sling TV via your choice of its Blue or Orange plans. Both cost from $46/month, with your first month half price.
Away from the US? Use a VPN to watch The Librarians: The Next Chapter on your usual streaming service from abroad.
How to watch The Librarians: The Next Chapter online anywhere
If you’re traveling abroad when The Librarians: The Next Chapter airs, you’ll be unable to watch the show like you normally would due to annoying regional restrictions. Luckily, there’s an easy solution.
Downloading a VPN will allow you to stream online, no matter where you are. It's a simple bit of software that changes your IP address, meaning that you can access on-demand content or live TV just as if you were at home.
Use a VPN to watch The Librarians: The Next Chapter from anywhere.
Editors' Choice
NordVPN – get the world's best VPN
We regularly review all the biggest and best VPN providers and NordVPN is our #1 choice. It unblocked every streaming service in testing and it's very straightforward to use. Speed, security and 24/7 support available if you need it – it's got it all.
The best value plan is the two-year deal which sets the price at $3.09 per month, and includes an extra 3 months absolutely FREE. There's also an all-important 30-day no-quibble refund if you decide it's not for you.
- Try NordVPN 100% risk-free for 30 days
How to watch The Librarians: The Next Chapter online in Canada
Canadian viewers can watch The Librarians: The Next Chapter on CTV's Sci-Fi channel on Mondays at 9pm ET/PT.
Rather stream the show? Use the CTV.ca website or app. You'll need to enter your cable provider details.
US viewer currently traveling in Canada? Download a VPN to connect to your streaming service back home and watch The Librarians: The Next Chapter no matter where you are.
Can I watch The Librarians: The Next Chapter in the UK?
At the time of writing, no broadcaster has been announced for The Librarians: The Next Chapter in the UK.
If you're a resident of somewhere that does have The Librarians: The Next Chapter streaming but are currently in the UK, you can use a VPN to watch your regular service.
How to watch The Librarians: The Next Chapter online in Australia
Foxtel subscribers can watch The Librarians: The Next Chapter on FOX8 in Australia. Episodes will go out on Thursdays at 9pm AEST from May 29.
Foxtel Now's pricing begins from AU$35 per month.
Foxtel shows usually land on the Binge streaming service as well. However, at the time of writing, this has not been confirmed for The Librarians: The Next Chapter.
If you’re visiting Australia from abroad and want to watch on your home service, simply download a VPN to stream The Librarians: The Next Chapter just as you would back home.
What you need to know about The Librarians: The Next Chapter
The Librarians: The Next Chapter trailer
Can I watch The Librarians: The Next Chapter for free?
The Librarians: The Next Chapter isn't listed to watch on any free-to-air channels or streaming services.
The Librarians: The Next Chapter cast
The Librarians: The Next Chapter is set for a 12-episode run from Sunday, May 25 to Monday, August 4.
Google and Samsung’s Project Moohan Android XR headset isn’t entirely new – my colleague Lance Ulanoff already broke down what we knew about it back in December 2024. But until now, no one at TechRadar had the chance to try it out.
That changed shortly after Sundar Pichai stepped off the Google I/O 2025 stage. I had a brief but revealing seven-minute demo with the headset.
After my prescription lenses were scanned and matched with a compatible set from Google, the matching inserts were placed into the Project Moohan headset, and I was quickly immersed in a fast-paced demonstration.
It wasn’t a full experience – more a quick taste of what Google’s Android XR platform is shaping up to be, and very much on the opposite end of the spectrum compared to the polished demo of the Apple Vision Pro I experienced at WWDC 2023.
Project Moohan itself feels similar to the Vision Pro in many ways, though it’s clearly a bit less premium. But one aspect stood out above all: the integration of Google Gemini.
“Hey Gemini, what tree am I looking at?” (Image credit: Future)
Just like Gemini Live on an Android phone like the Pixel 9, Google’s AI assistant takes center stage in Project Moohan. The launcher includes two rows of core Google apps – Photos, Chrome, YouTube, Maps, Gmail, and more – with a dedicated icon for Gemini at the top.
You select icons by pressing your thumb and forefinger together, mimicking the Apple Vision Pro’s main control. Once activated, the familiar Gemini Live bottom bar appears. Thanks to the headset’s built-in cameras, Gemini can see what you’re seeing.
In the press lounge at the Shoreline Amphitheater, I looked at a nearby tree and asked, “Hey Gemini, what tree is this?” It quickly identified a type of sycamore and provided a few facts. The whole interaction felt smooth and surprisingly natural.
You can also grant Gemini access to what’s on your screen, turning it into a hands-free controller for the XR experience. I asked it to pull up a map of Asbury Park, New Jersey, then launched into immersive view – effectively dropping into a full 3D rendering akin to Google Earth. Lowering my head gave me a clear view below, and pinching and dragging helped me navigate around.
I jumped to a restaurant in Manhattan, asked Gemini to show interior photos, and followed up by requesting reviews. Gemini responded with relevant YouTube videos of the eatery. It was a compelling multi-step AI demo – and it worked impressively well.
That’s not to say everything was flawless. There were a few slowdowns, but Gemini was easily the highlight of the experience. I came away wanting more time with it.
Hardware impressions
(Image credit: Google)
Though I only wore the headset briefly, it was evident that while it shares some design cues with the Vision Pro, Project Moohan is noticeably lighter – though not as high-end in feel.
After inserting the lenses, I put the headset on like a visor—the screen in front, and the back strap over my head. A dial at the rear let me tighten the fit easily. Pressing the power button on top adjusted the lenses to my eyes automatically, with an internal mechanism that subtly repositioned them within seconds.
From there, I used the main control gesture – rotating my hand and tapping thumb to forefinger – to bring up the launcher. That gesture seems to be the primary interface for now.
Google mentioned eye tracking will be supported, but I didn’t get to try it during this demo. Instead, I used hand tracking to navigate, which, as someone familiar with the Vision Pro, felt slightly unintuitive. I’m glad eye tracking is on the roadmap.
Google also showed off a depth effect for YouTube videos that gave motion elements – like camels running or grass blowing in the wind – a slight 3D feel. However, some visual layering (like mountain peaks floating oddly ahead of clouds) didn’t quite land. The same effect was applied to still images in Google Photos, but these lacked emotional weight unless the photos were personal.
Where Project Moohan stands out
The standout feature so far is the tight Gemini integration. It’s not just a tool for control – it’s an AI-powered lens on the world around you, which makes the device feel genuinely useful and exciting.
Importantly, Project Moohan didn’t feel burdensome to wear. While neither Google nor Samsung has confirmed its weight – and yes, there’s a corded power pack I slipped into my coat pocket – it remained comfortable during my short time with it.
There’s still a lot we need to learn about the final headset. Project Moohan is expected to launch by the end of 2025, but for now, it remains a prototype. Still, if Google gets the pricing right and ensures a strong lineup of apps, games, and content, this could be a compelling debut in the XR space.
Unlike Google’s earlier Android XR glasses prototype, Project Moohan feels far more tangible, with an actual launch window in sight.
I briefly tried those earlier glasses, but they were more like Gemini-on-your-face in a prototype form. Project Moohan feels like it has legs. Let’s just hope it lands at the right price point.
Beaten and restrained by Taitra security guards, I'm hauled back to the MSI booth from whence I came, the laptop I'd tried to spirit away handed back to MSI while members of the North American PR team look at me in stony silence. I lift up my head and meet their eyes, one by one.
"It belongs in a museum!" I yell over the clamor and din of the Computex 2025 showfloor.
One of the reps that I've known for years shouts to be heard: "John, what the hell, man? Have you lost your mind?"
"It belongs in a museum!"
(Image credit: Future / John Loeffler)
OK, so that scene didn't play out anything like that yesterday when I first set eyes on the MSI Prestige 13+ AI Ukiyo-e Edition laptop, but it damn well could have. All that I needed was a means of escape through the packed crowd at the MSI booth, all of whom gawked along with me at what is undoubtedly the most beautiful laptop any of us has ever seen.
The MSI Prestige 13+ AI is already one of the best laptops MSI's put out in recent years, but the one on display at Computex was something entirely different. Splashed across the lid is a hand-lacquered reproduction of The Great Wave off Kanagawa by the Japanese artist and printmaker Hokusai, a master of the ukiyo-e art style that dominated Japan from the 17th to 19th centuries.
(Image credit: Future / John Loeffler)
I'm not as into Japanese art and culture as many of my friends are, a few of whom speak varying degrees of Japanese as a second language and all of whom own pretty much every manga that has been released in the United States (as well as many that they've had to pay extra to order directly from Japanese shops), but I do love ukiyo-e.
I grew up in New York City and spent a lot of time going to the Metropolitan Museum of Art throughout my childhood, and the Met has a rather impressive collection of ukiyo-e prints, including an original print of The Great Wave, first produced in 1831.
Something about the bourgeois market scenes, manor intrigues, and quaint personal moments between friends and lovers that defined the ukiyo-e style resonates with me to this day.
But it was always the depictions of vulnerable humanity in the presence of unassailable natural forces that spoke most strongly to me. And no work of art captures that as well as The Great Wave, with its unstoppable water cresting over a pair of fishing boats, the owners of which are nowhere to be seen. The only proof of their existence is the boats left behind, pilotless and at the mercy of nature.
(Image credit: Future / John Loeffler)
The Prestige 13+ AI Ukiyo-e Edition reproduces this masterful scene thanks to the work of OKADAYA, a Japanese company renowned for its lacquerwork on fine chinaware and pottery.
Similar to how ukiyo-e prints were made in steps and layers back in the day, OKADAYA's process for creating The Great Wave on the Prestige 13+ AI lid involves applying eight thin layers of lacquer by hand, incrementally building up the coloring and texture of the scene before polishing it to a smooth, resilient finish.
(Image credit: Future / John Loeffler)
The process isn't limited to just the lid, either. The keys of the keyboard have also been stepped up to a polished, piano-key-like finish with gold-colored key labels to match the MSI logo on the inside of the device and on the lid, as well as the labels for the device's ports.
(Image credit: Future / John Loeffler)
While the artwork on the device steals the show (and by show, I mean Computex, as the Prestige 13+ AI Ukiyo-e Edition won Computex's Best Choice Award this year), the underlying laptop is still impressive as well, with up to an Intel Lunar Lake SoC, up to 32GB LPDDR5x memory, 1TB PCIe 4.0 SSD storage, and a 13.3-inch 2.8K OLED display.
(Image credit: Future / John Loeffler)
As an Artisan Collection product, the new laptop will have a limited run of 1,000 units, with each getting its production number laser-etched onto the bottom of the device. Given the handcrafting that's gone into these laptops, you can imagine that they won't be cheap, and I wouldn't be surprised if the majority of them had already been purchased before they even made their debut at this year's show.
Still, even if it's not possible to own one yourself (unless you get very lucky), maybe one of the buyers could do their good deed for the year and donate one of these masterpieces to a museum somewhere so we can all enjoy the artistry that's gone into this device.
Having seen it up close and held it myself, I can tell you it wouldn't be out of place among the finest ukiyo-e prints on display at the Met, and it's something I'd happily take the time to go see whenever I'm there.
Headphone maker Skullcandy holds a soft spot in my heart. It was my go-to brand for wired earbuds when I was a teenager (this was long before the best wireless earbuds dominated the audio market) because they were affordable, available in a range of funky colors and, at the time at least, I thought they sounded great.
When I entered the world of tech journalism in my early 20s, I was exposed to a plethora of new brands and I started earning adult money. This meant I found myself drifting away from the company that I’d long considered ‘budget’ and ‘the headphones to get if you don’t mind them getting damaged’ – instead investing in more premium offerings from the likes of Sennheiser and Bose.
So when Skullcandy came back onto my radar with the announcement of the Method 360 ANC earbuds, proudly stating they’d been designed and tuned in collaboration with none other than Bose, I was hit with a wave of nostalgia – a 20-year gap can count as nostalgic, right?
Not only that, but I also wondered if the maker of my first go-to headphones could reclaim its title and knock my current favorite LG ToneFree T90S – which themselves replaced the Apple AirPods Pro 2 – out of my ears.
The quick answer? They’ve come awfully close, but their large charging case has made my final decision much trickier than I expected.
Sounding sweet
From a sound quality and ANC perspective, Skullcandy’s collaboration with Bose is an absolute hit. Bose has long been a front runner when it comes to audio performance, but is arguably best known for making some of the best noise-cancelling headphones in recent memory.
I’m inclined to believe that Bose has had free rein with the audio and ANC smarts for the Method 360 ANC because, from the moment I put the buds into my ears in the office, everything around me was silenced. Colleagues talking to each other near my desk, the office speaker blaring out questionable songs, it all disappeared.
(Image credit: Future)
My trusted LG earbuds perform similarly, but they require the volume to be increased a little further for similar noise-cancelling effects. And when nothing is playing, the Skullcandy earbuds do a better job of keeping external sounds to a minimum.
It took me a little longer to formulate a definitive opinion on the sound quality, partly because of the design (more on that later) and partly because I’d become so accustomed to the sound of the LG ToneFree T90S.
After inserting and removing both pairs from my ears more times than I can remember, I settled on the notion that Skullcandy’s latest effort sounds more engaging, a little clearer on the vocals and just simply fun.
The increasing intensity of the violins at the start of Massive Attack’s Unfinished Sympathy reveals the buds to be dynamically adept, and they show off their rhythmic talents when playing the deliciously upbeat Bread by Sofi Tukker.
Plus, despite not supporting spatial audio (I have to agree with my colleague Matt Bolton when he says the company “blew the perfectly good name it could've given the next-gen version where it did add this feature”), the earbuds do give songs some sense of space. I was able to confidently place the hi-hat sounds, hums and drum beats in the opening of Hayley Williams’ Simmer around my head, for example.
Overall, I’m very impressed with the sound performance of the Skullcandy buds, especially considering their $129.99 / £99 / AU$189.99 price tag, which places them firmly in affordable territory.
It’s not unreasonable to expect limitations where sound or features are concerned at certain price points, but I think the Method 360 ANC delivers a sound that belies their price tag.
You'll be able to read our full thoughts on the Skullcandy Method 360 ANC in our review, which is on its way.
The peculiar case of the peculiar case
“If you like how they sound, how come the Skullcandy Method 360 ANC aren’t your new daily pair of earbuds?” I hear you ask. Well, it’s predominantly because of their case, but also a little to do with a design choice inherited from Bose.
When I first saw images of the case following their announcement, I was a little perplexed. All of the wireless earbuds I’m aware of come with a case that can easily be slipped into a pocket – apart from the Beats Powerbeats Pro 2 that is – yet the one supplied by Skullcandy looked enormous.
(Image credit: Future)
Now I’ve unboxed my own pair, I can confirm the case is pretty damn big. Not heavy, just big.
It’s an interesting choice, especially since the earbuds themselves don’t take up that much space. I’m also not sold on the fact you have to slide the internal section of the case out to access the buds.
What’s more, I feel the most logical way to hold the case is horizontally when sliding the earbuds out, with the carabiner clip on the right as it feels more weighted and natural in my right hand.
Doing so reveals the earbud for the right ear, with the left earbud on the other side. That means I have to pick out the right earbud with my left hand, then flip the case over and do the opposite for the left bud.
And what’s even more confusing is the earbuds appear to fit into their charging spots upside down. So not only do I have to pass each bud from the ‘wrong’ hand to the right one, but I also have to flip them around the right way. There are just too many steps involved for what has always been a seamless and convenient process with other earbuds.
What’s also interesting is that, since the Method’s launch, I’ve noticed a second, more affordable pair of earbuds appear on Skullcandy’s website called the Dime Evo. They employ a similar sliding-case design, but both earbuds are on the same side, which I can only assume will make the removal process that little bit easier.
(Image credit: Future)
Based on Skullcandy’s imagery for the Method 360 ANC, it’s targeting a young, cool demographic who walk around with sling bags over their shoulder, upon which they can attach the earbuds via the integrated carabiner clip.
As much as I would love to say I fall into that group, the fact is I don’t – well, not anymore. And because I’m not someone who wants to clip their headphones onto even the belt loop of my pants, the case design is completely lost on me.
I would further argue that the target audience is a little niche, too, which is a shame considering how good I think the earbuds sound. I’m saddened for Skullcandy that not enough pairs of ears are going to get to hear them.
Hey, I did say it was predominantly the case I had issues with.
(Image credit: Future)
As for the aforementioned inherited design trait – that would be the Stability Bands found on the Bose QuietComfort Ultra Earbuds and QuietComfort Earbuds II.
Despite their intention to provide a more stable and secure fit, I initially lacked confidence in their ability. I often found myself wanting to readjust them in my ears to make sure they were locked in, which also meant I pressed the on-ear controls at the same time and paused my music.
It’s not just me who’s had an issue with them – my colleague and self-confessed Bose fangirl, Sharmishta Sarkar, has previously written about her issues with the design too.
I eventually settled on the largest size of Stability Band (which I could only determine by sight, as there’s no indication of which size is which on the included book of spares) and so far, so good. They definitely feel more secure in my ears compared to when I tried other sizes, and passive noise cancellation has also improved.
However, the design choice has confirmed I get along best with earbud designs that insert further into my ear canal.
Awarding cool points
(Image credit: Future)
I like the Skullcandy Method 360 ANC earbuds. I can’t say I like the design of the case, nor do I like their mouthful of a name (Skullcandy Method would have been just fine in my opinion), but considering the biggest selling point for a pair of earbuds is how they sound, I can find little to fault.
I will most likely use them whenever I’m in the office, as I can leave the case on the desk with the skull logo facing me directly. While I might not feel cool enough to clip the case to my person, that logo alone takes me back to my teenage years. For me, that’s cool enough.
Chuwi, a company better known for budget devices than flagship powerhouses, has unveiled its latest effort to break into the high-performance segment: the GameBook 9955HX.
Promoted as a laptop for coders, gamers, and professional creators, this new model is powered by the AMD Ryzen 9 9955HX processor, a Zen 5-based chip featuring 16 cores and 32 threads, with a boost frequency of up to 5.4GHz. It also includes a large 64MB L3 cache and a configurable TDP that can peak around 55W.
As of the time of writing, the device's price remains undisclosed - however, given Chuwi’s history of undercutting bigger brands, it’s reasonable to expect this model to be priced lower than similar offerings from MSI or Asus.
Chuwi GameBook 9955HX
For graphics, the GameBook 9955HX integrates the Nvidia GeForce RTX 5070 Ti Laptop GPU, based on the latest Blackwell RTX architecture, making it well-suited for video editing and graphics-intensive tasks.
The GPU offers 12GB of GDDR7 VRAM, a 140W TGP, and supports features such as full ray tracing, DLSS 4, and Multi Frame Generation.
Chuwi says this setup can deliver up to 191 FPS in 1440p gaming with ray tracing enabled, and 149 FPS at 4K, placing it firmly in the performance laptop category.
For creators working with AI-accelerated tools, advanced 3D rendering, or video post-production, this could prove to be a top contender, provided its cooling system and thermal management are up to the task.
The display is a 16-inch 2.5K IPS panel with a 300Hz refresh rate, 100% sRGB color coverage, and a 16:10 aspect ratio. Peak brightness reaches 500 nits, though claims regarding color accuracy have yet to be verified through independent calibration tests.
Internally, the GameBook comes equipped with 32GB of DDR5 RAM at 5600MHz, upgradeable to 64GB, and a 1TB PCIe 4.0 SSD. Storage expansion is supported via two M.2 slots, one of which supports PCIe 5.0, offering a level of future-proofing not typically seen in Chuwi’s lineup.
Connectivity includes Wi-Fi 6E, Bluetooth 5.2, a 2.5Gb Ethernet port, two USB-C ports (supporting 100W and 140W power delivery), three USB-A 3.2 Gen 1 ports, HDMI 2.1, and Mini DisplayPort 2.1a. There's also a 3.5mm audio jack, DC-in, and a Kensington lock slot.
Other features include a full-sized RGB-backlit keyboard, a 2MP IR webcam with a privacy shutter, a 77.77Wh battery, and stereo speakers. The laptop measures just over 21mm thick and weighs 2.3kg.
The CEO of Dell Technologies has told TechRadar Pro that AI offers a great opportunity for organizations to re-evaluate themselves to positive effect.
Speaking at a media Q&A session at Dell Technologies World 2025, Michael Dell looked to reassure us that AI will never fully replace human workers, and in fact may offer them a whole new outlook.
In a wide-ranging discussion, Dell also laid out his views on political instability affecting the technology industry, and some of his key leadership principles.
"Always some change"“The way I think about this is that if you look at every progress, that’s for any technology, you always have some change that goes on,” Dell said in response to our question about AI affecting or even replacing human workers.
“My way of thinking is there’s probably a 10 percent effect for that - but I think 90 percent of that is actually growth and expansion and opportunity, and ultimately what I think you’re going to see is more opportunities, more economic growth.”
“There are a lot of things that we don’t do, that we used to do, because we have the tools, and we’re more effective as a species because of that - (using AI) is just another example of that.”
“One of the keys beyond productivity and efficiency I think for organizations, is to reimagine themselves, and say, alright, what is the trajectory of these capabilities, where is it going, and what should our activity look like in three years, five years time, given this capability.”
“You know, a lot of roles today just didn’t exist 10, 20, 30 years ago - and no-one was forecasting that.”
(Image credit: Future / Mike Moore)
Having spoken with Nvidia CEO Jensen Huang in his opening keynote, Dell was also asked if the two shared any overarching leadership principles.
“I think anytime there’s a new technology, you have to leap ahead (and think), what is the likely impact of this, and how do we need to change? And if we don’t have a passion around that, or there isn’t a crisis in your organization - make one! We think it can make us a better company.”
Dell was also asked about how changing global economic and political situations might affect the company’s future outlook.
“We agree that those are issues and challenges,” he said, “in my general view, the importance of this technology is greater than all those problems - and I heard somebody say recently, tokens are bigger than tariffs - and that would sort of summarize our view of it.”
“Are all those things helpful to our business? No, they’re not - but there’s a limit of what we can do about that, right? We can certainly do the things we’re supposed to do, and focus on the things we can control - we’re seeing plenty of companies that are dealing with all those challenges just as we are, and powering ahead in any case.”
The Samsung Galaxy S25 Edge is earning plaudits aplenty for its stunning titanium design and improbable thinness (not least from us here at TechRadar), but the phone’s smaller-than-hoped-for battery continues to raise eyebrows.
At 3,900mAh, the cell in the Galaxy S25 Edge is only a smidgen smaller than the one in the standard Galaxy S25, but in Samsung’s new phone, that same battery has to power a much larger 6.7-inch display.
Understandably, that’s led to question marks over the Edge’s endurance, but Samsung is confident that its new handset will provide more than enough battery life for all but the most hardcore users.
In an exclusive interview with TechRadar, Kadesh Beckford, Smartphone Product Specialist at Samsung MX, played down the idea that the Galaxy S25 Edge compromises on battery life to deliver a more aesthetically pleasing design.
“Even though we’ve made this device incredibly thin, we’ve tried to ensure that customers have [suitable] battery life available to them based upon their needs,” Beckford explained. “With 24 hours of video playback time and an all-day battery, [the Edge] is going to last you literally from the moment you wake up to the moment you go to bed. So, we're actually giving [consumers] what they need [in terms of battery life].
“And also,” Beckford continued, “with the lithium graphite technology and the thermal interface material in there, it keeps the device cool, so pretty much no matter what you're doing on your phone, [it’ll] last all day long and then some.”
(Image credit: Future)
‘All day long and then some’ is a bold claim for a 6.7-inch handset with a 3,900mAh cell – and it’s one we’re currently putting to the test for our full Samsung Galaxy S25 Edge review – but Beckford says his enthusiasm for the phone’s endurance is based on his own real-world experience with the device ahead of its official launch.
“I’ve played Genshin Impact on the Galaxy S25 Edge, I’ve played PUBG,” he explained. “It moves so smoothly, it’s unbelievable, and the phone has lasted me all day – sometimes into the following day as well. I’ve seen those elements [in action].
“Do also remember that, traditionally, phones at this level of thinness don’t support wireless charging. With the custom Snapdragon 8 Elite chipset, we’ve still been able to include a thermal interface and wireless charging, with wireless power share as well, so I can even charge up my Galaxy Buds [using the Galaxy S25 Edge]. That there is real innovation.”
Traditionally, phones at this level of thinness don’t support wireless charging.
Kadesh Beckford
Beckford concluded: “You’ve also got the ability to add a Qi2 case [to the Galaxy S25 Edge] for convenience at home or in your car. So, you can easily connect it up, and your device will last all day.”
Of course, being compatible with convenient charging methods isn’t the same as offering good battery life outright, but Beckford’s point around real-world practicality stands. For the majority of users, the Galaxy S25 Edge will deliver all-day battery life, and the phone’s wireless charging and Qi2 compatibility should ensure that it can be charged anytime, anywhere if you do find yourself wanting for juice.
The Galaxy S25 Edge supports 25W wired, 15W wireless, and 4.5W reverse wireless charging (Image credit: Future)
Just how quickly the Galaxy S25 Edge can be charged to 100% is another matter entirely. The phone supports 25W wired, 15W wireless, and 4.5W reverse wireless charging – that’s comparable to the standard Galaxy S25 but a way off the Galaxy S25 Plus and S25 Ultra, which both support 45W wired charging.
Whichever way you look at it, then, you will be sacrificing some endurance by choosing the Galaxy S25 Edge over one of the best Samsung phones. But, as Samsung suggests, that downgrade isn’t likely to feel dramatic for those who already charge their smartphone on a daily basis. Check out our soon-to-be-published Galaxy S25 Edge review for our own verdict on the matter.
Google has finally taken the lid off Android XR at Google I/O 2025 to show us what the operating system will be capable of when it launches on Android XR headsets and glasses later this year.
We didn’t see quite everything I was hoping we would, but we did learn what Google’s silver bullet in XR will be: Google Gemini. Its AI-centric approach was demoed across both hardware types – AR glasses and mixed-reality headsets.
Starting with the latter, Google gave a public version of the Project Moohan headset demonstrations it has been running privately for media and tech experts. The demo highlights some standard headset advantages, like the benefits of an immersive screen for multitasking – we were shown a user accessing YouTube, Google Maps, and a travel blog as they research a location they’re planning to visit.
Then, in one impressive moment, that user asks Gemini if it can “take me to Florence.” Gemini obliges by opening Google’s immersive 3D map and giving them a bird's-eye view of the city and some landmarks (including the iconic Cathedral of Santa Maria del Fiore, which Assassin’s Creed 2 players will be very familiar with).
(Image credit: Lance Ulanoff / Future)
Then there’s the glasses. Across a few different scenarios Google highlighted how Android XR specs can make your life easier with hands-free controls and a head-up display.
You can draft and send messages to your contacts, access live translation with on-screen subtitles, search for Google Maps recommendations and then get directions to a location, and take pictures using the glasses’ camera and see a preview of the shot right away.
It’s reminiscent of what Meta’s Ray-Ban glasses are capable of, and it’s exactly what we've been expecting Meta’s rumored smart glasses with a display will be capable of if (read: when) they’re showcased later this year.
(Image credit: Google)
I was expecting more from Google in the headset department, frankly.
Android XR certainly seems neat, but I’ve yet to see a reason why it’s better than – or, to an extent, on a par with – the competition (*cough* Meta Quest 3 *cough*). However, I’m at least a little hopeful that by the time Project Moohan is ready to launch for consumers (with it again only being teased for release “later this year”) some of Android XR’s letdowns will be addressed.
For now, the headset certainly seems like it’s playing second fiddle to the true Android XR star: the glasses.
(Image credit: Google)
Not only are most of the showcased Android XR features (which look very useful) made for a device you’d wear all the time, but I’m also surprised by how much choice Google is already offering us in terms of hardware.
Samsung is working on Android XR tech, but so are Xreal, with its Project Aura, and stylish eyewear brands Gentle Monster and Warby Parker. And Google’s promise that it is “starting with” these brands suggests more partners are on the way.
This abundance of choice is fantastic for two key reasons.
First up, with more choice, prices will have to remain competitive. Meta’s display-equipped smart glasses are reportedly set to cost over $1,000, with insiders expecting a cost in the $1,300-$1,400 range (which would be around £1,000-£1,100 or AU$2,050-AU$2,200).
Meta's glasses are cool, but don't offer Android XR's variety yet (Image credit: Meta)
With more glasses options to choose from, we may see prices drop to more affordable levels more quickly than if there were just one or two players in the game.
Second, glasses, like other fashion accessories, need to prioritize style to some degree. Utility is important, but if we’re expected to wear smart glasses all day, every day, then just like any other accessory they need to suit our identity.
By partnering with a range of different brands out of the gate – the aesthetics of Gentle Monster and Warby Parker are almost polar opposites of one another – Android XR tech should appeal to a wider audience than Meta’s Ray-Ban-only approach, because it will boast glasses designed to suit a wider range of fashion niches.
(Image credit: Google)
It’s still early days for Android XR, and there are crucial details we’re still missing, but Google has certainly come out swinging with its latest operating system.
I’ll be paying close attention to Google’s Android XR demos, and looking for more concrete information on the upcoming hardware. For now, though, Google certainly has me on the hook.
Computex 2025 is here, and it was only a matter of time until one of the big tech companies at the expo revealed a new device that takes laptop design to the next level – and this one may be worth keeping tabs on.
On its website (translated from Chinese), Huawei announced the new MateBook Fold Ultimate Design laptop, featuring an 18-inch (when expanded) 3K OLED display running on Harmony OS 5. Notably, unlike other foldable laptops like the Asus Zenbook Duo, the MateBook Fold Ultimate Design unfolds into an entire single screen.
Essentially, this means that instead of two screens connected via regular laptop hinges, it uses a 'water-drop hinge' that allows it to open and close smoothly and lie completely flat for an 18-inch screen experience. This mechanism is arguably a step up from Microsoft's new Surface Pro, which acts as a tablet but is also a 2-in-1 when you use its keyboard (which is sold separately).
When in its laptop form, you'll have a 13-inch OLED screen at your disposal using its touchscreen keyboard (or the keyboard that's included in the package). However, the MateBook Fold Ultimate Design looks like more than just a 2-in-1 laptop; you'll be able to transform it from a regular laptop into a portable 18-inch display for casual viewing on the device in a matter of seconds.
Closing the laptop entirely gives it a classy and thin notebook or diary-style design, as if it's built as a disguise, further setting it apart in terms of its design from competitors. To add a cherry on top, it has 1,600 nits of peak brightness, and a 74.69WHr battery – and both features could easily stand alone as major selling points.
If I didn't know what it really was, you could easily tell me it's just a notepad... (Image credit: Huawei)
According to reliable tech analyst Ming-Chi Kuo, Huawei is planning a production target for the MateBook Fold of between 180,000 and 200,000 units, and its life cycle will primarily depend on user feedback regarding the software functionality.
It's an important factor to note, since the MateBook Fold is by no means an inexpensive laptop. It's currently only available in China, starting at ¥23,999, which converts to around $3,330 / £2,490 / AU$5,200, but its new features will be hard to turn down if you can afford it.
Analysis: If this is the future of laptop design, I'm here for it
(Image credit: Huawei)
I've played plenty of futuristic games like Cyberpunk 2077 and seen enough movies like Mission Impossible to suggest that Huawei's new laptop could be a game-changer. It isn't like other companies, such as Asus, haven't introduced similar devices – the difference is, none of them utilize the 'water-drop hinge' mechanism Huawei has introduced.
It simply makes the Asus Zenbook Duo and the Lenovo Yoga Book 9i look like clumsy setups that require a stand to stay upright and hinges for both screens. The MateBook Fold is the first foldable laptop I've seen that has caught my eye – if only I could afford it.
The only drawback here is that it will likely set you back thousands of dollars if it eventually launches globally. However, if it sells well enough and gains the traction that I anticipate, we could easily see Huawei's competitors and others follow suit soon – and that's exactly what I'm hoping for.
Want proof that Google really has gone all-in on AI? Then look no further than today's Google I/O 2025 keynote.
Forget Android, Pixel devices, Google Photos, Maps and all the other Google staples – none were anywhere to be seen. Instead, the full two-hour keynote spent its entire time taking us through Gemini, Veo, Flow, Beam, Astra, Imagen and a bunch of other tools to help you navigate the new AI landscape.
There was a lot to take in, but don't worry – we're here to give you the essential round-up of everything that got announced at Google's big party. Read on for the highlights.
1. Google Search got its biggest AI upgrade yet
‘Googling’ is no longer the default in the ChatGPT era, so Google has responded. It’s launched its AI Mode for Search (previously just an experiment) to everyone in the US, and that’s just the start of its plans.
Within that new AI Mode tab, Google has built several new Labs tools that it hopes will stop us from jumping ship to ChatGPT and others.
A ‘Deep Search’ mode lets you set it working on longer research projects, while a new ticket-buying assistant (powered by Project Mariner) will help you score entry to your favourite events.
Unfortunately, the less popular AI Overviews are also getting a wider rollout, but one thing’s for sure: Google Search is going to look and feel very different from now on.
2. Google just made shopping more fun
Shopping online can go from easy to chaotic in moments, given the huge number of brands, retailers, sellers and more – but Google is aiming to use AI to streamline the process.
That's because the aforementioned AI Mode for Search now reacts to shopping-based prompts, such as ‘I’m looking for a cute purse’, serving up products and images for inspiration and letting users narrow down large ranges of products – that is, if you live in the US, where the mode is rolling out first.
The key new feature in the AI-powered shopping experience is a try-on mode that lets you upload a single image of yourself, from which Google’s combination of its Shopping Graph and Gemini AI models will then enable you to virtually try on clothes.
The only caveat here is that the try-on feature is still in the experimental stage, and you need to opt in to the ‘Search Labs’ program to give it a go.
Once you have the product or outfit in mind, Google’s agentic checkout feature will basically buy the product on your behalf, using the payment and delivery details stored in Google Pay; that is, if the price meets your approval – as you can set the AI tech to track the cost of a particular product and only have it buy it if the price is right. Neat.
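Google hasn't detailed how this agentic checkout works under the hood, but conceptually it's a simple price-watch trigger: poll a price source, and hand off to a user-confirmed purchase once the target is hit. Here's a minimal sketch of that idea in Python, where fetch_price and confirm_and_buy are hypothetical stand-ins rather than real Google Pay APIs:

```python
# Minimal sketch of the price-watch idea behind an agentic checkout.
# fetch_price and confirm_and_buy are hypothetical stand-ins for a retailer
# pricing feed and a user-confirmed payment step; they are not Google APIs.
import time

def watch_price(product_id: str, target_price: float,
                fetch_price, confirm_and_buy, poll_seconds: int = 3600):
    """Poll a product's price and trigger a purchase once it hits the target."""
    while True:
        price = fetch_price(product_id)      # check the current price
        if price <= target_price:
            # Per Google's demo, the user still confirms before paying.
            return confirm_and_buy(product_id, price)
        time.sleep(poll_seconds)             # wait before the next check
```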
3. Beam could reinvent video calls
Video calls are the bane of many people's lives, particularly if you work in an office and spend 60% of your time in such calls. But Google's new Beam could make them a lot more interesting.
The idea here is to present calls in 3D, as if you're in the same room as someone when you're on a call with them; a bit like with VR. However, there's no need for a VR headset or glasses here, with Beam instead using cameras, mics, and – of course – AI to work its magic.
If that all sounds rather familiar, it's because Google has teased this before, under the name Project Starline. But it's no longer a faraway concept – it's here, and almost ready for people to use.
The caveat is that both callers will need to sit in a custom-made booth that can generate the 3D renders that are needed. But it's all pretty impressive nonetheless, and the first business customers will be able to get the kit from HP later in 2025.
4. Veo 3 just changed the game for AI video
AI video generation tools are already incredibly impressive, given they didn't even exist a year or two ago, but Google's new Veo 3 model looks set to take things to the next level.
As with the likes of Sora and Pika, the tool's third-generation version can create video clips and then tie them together to make longer films. But unlike those other tools, it can also generate audio at the same time – and expertly sync sound and vision together.
Nor is this capability limited to sound effects and background noises, because it can even handle dialogue – as demonstrated in a clip Google showed during its I/O 2025 keynote.
"We’re emerging from the silent era of video generation," said Google DeepMind CEO Demis Hassabis – and we're not going to argue with that.
5. Gemini Live is here – and it's free
Google Gemini Live, the search giant’s AI-powered voice assistant, is now available for free on both Android and iOS. Previously a paid-for option, it's now open to a far wider pool of users.
With Gemini Live, you can talk to the generative AI assistant using natural language, as well as use your phone's camera to show it things, from which it'll extract information and serve up related data. Plus, the ability to share your phone's screen and camera with Gemini Live, previously limited to Android devices, has now been extended to compatible iPhones.
Google will start rolling out Gemini Live for free from today, with iOS users being able to access the AI and its screen sharing features in the coming weeks.
6. Flow is an awesome new AI filmmaking tool
Here's one for all the budding movie directors out there: at I/O 2025, Google took the covers off Flow, an AI-powered tool for filmmakers that can create scenes, characters and other movie assets from a natural language text prompt.
Let’s say you want to see doctors perform an operation in the back of a 1970s taxi; well, pop that into Flow and it’ll generate the scene for you, using the Veo 3 model, with surprising realism.
Effectively an extension of the experimental Google Labs VideoFX tool launched last year, Flow will be available for subscribers to Google AI Pro and Google AI Ultra plans in the US, with more countries to come.
And it could be a tool that’ll let budding directors and cinematic video makers more effectively test scenes and storytelling, without needing to shoot a lot of clips.
Whether this will enhance filmmaking planning or yield a whole new era of cinema, where most scenes are created using generative AI rather than making use of sets and traditional CGI, has yet to be seen. But it looks like Flow could open up movie making to more than just keen amateurs and Hollywood directors.
7. Imagen 4 gives Gemini a big image upgrade
Gemini is already a pretty good choice for AI image generation; depending on who you ask, it's either slightly better or slightly worse than ChatGPT, but essentially in the same ballpark.
Well, now it might have moved ahead of its rival, thanks to a big upgrade to its Imagen model.
For starters, Imagen 4 brings with it a resolution boost, to 2K – meaning you'll be better able to zoom into and crop its images, or even print them out.
What's more, it'll also have "remarkable clarity in fine details like intricate fabrics, water droplets and animal fur, and excels in both photorealistic and abstract styles”, Google says – and judging by the sample images shared so far, that looks pretty spot on.
Finally, Imagen 4 will give Gemini improved abilities at spelling and typography, which has bizarrely remained one of the hardest puzzles for AI image generators to solve so far. It's available from today, so expect even more AI-generated memes in the very near future.
8. Gemini 2.5 Pro just got a groundbreaking new ‘Deep Think’ upgrade
Enhanced image capabilities aren't the only upgrades coming to Gemini, either – it's also got a dose of extra brainpower with the addition of a new Deep Think Mode.
This basically augments Gemini 2.5 Pro with a function that means it’ll effectively think harder about queries posed to it, rather than trying to kick out an answer as quickly as possible.
This means the latest pro version of Gemini will run multiple possible lines of reasoning in parallel, before deciding on how to respond to a query. You could think of it as the AI looking deeper into an encyclopaedia, rather than winging it when coming up with information.
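Google hasn't published how Deep Think works internally, but the description matches a well-known pattern sometimes called self-consistency: sample several independent reasoning paths, then go with the most common conclusion. Here's a minimal sketch of that pattern, assuming a hypothetical ask_model function standing in for any LLM call:

```python
# Illustrative sketch of running multiple reasoning paths in parallel, in the
# style of self-consistency sampling. ask_model is a hypothetical stand-in for
# an LLM call; this is not Google's actual Deep Think implementation.
from collections import Counter
from concurrent.futures import ThreadPoolExecutor

def deep_think(question: str, ask_model, n_paths: int = 8) -> str:
    # Sample several independent reasoning paths concurrently...
    with ThreadPoolExecutor(max_workers=n_paths) as pool:
        answers = list(pool.map(lambda _: ask_model(question), range(n_paths)))
    # ...then answer with the most common conclusion, not the fastest one.
    return Counter(answers).most_common(1)[0][0]
```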
There is a catch here, in that Google is only rolling out Deep Think Mode to trusted testers for now – but we wouldn't be surprised if it got a much wider release soon.
9. Gemini AI Ultra is Google’s new ‘VIP’ plan for AI obsessives
Would you spend $3,000 a year on a Gemini subscription? Google thinks some people will, because it's rolled out a new Gemini AI Ultra plan in the US that costs a whopping $250 a month.
The plan isn't aimed at casual AI users, obviously; Google says it offers "the highest usage limits and access to our most capable models and premium features" and that it'll be a must if "you're a filmmaker, developer, creative professional or simply demand the absolute best of Google AI with the highest level of access."
On the plus side, there's a 50% discount for the first three months, while the previously available Premium plan also sticks around for $19.99 a month, now renamed AI Pro. If you like the sound of AI Ultra, it will be available in more countries soon.
10. Google just showed us the future of smart glasses
Google finally gave us the Android XR showcase it has been teasing for years.
At its core is Google Gemini – on glasses, Gemini can find and direct you towards cafes based on your food preferences, perform live translation, and answer questions about things you can see. On a headset, it can use Google Maps to transport you all over the world.
Android XR is coming to devices from Samsung, Xreal, Warby Parker, and Gentle Monster, though there’s no word yet on when they’ll be in our hands.
11. Project Astra also got an upgrade
Project Astra is Google’s powerful mobile AI assistant that can react and respond to the user’s visual surroundings, and this year’s Google I/O has given it some serious upgrades.
We watched as Astra gave a user real-time advice to help him fix his bike, speaking in natural language. We also saw Astra argue against incorrect information as a user walked down the street mislabeling the things around her.
Project Astra is coming to both Android and iOS today, and its visual recognition function is also making its way to AI Mode in Google Search.
12. …As did Chrome
Is there anything that hasn’t been given an injection of Gemini’s AI smarts? Google’s Chrome browser was one of the few tools that hadn’t, it seems – but that’s now changed.
Gemini is now rolling out in Chrome for desktop from tomorrow to Google AI Pro and AI Ultra subscribers in the US.
What does that mean? You’ll apparently now be able to ask Gemini to clarify any complex information that you’re researching, or get it to summarize web pages. If that doesn’t sound too exciting, Google also promised that Gemini will eventually work across multiple tabs and also navigate websites “on your behalf”.
That gives us slight HAL vibes (“I’m sorry, Dave, I’m afraid I can’t do that”), but for now it seems Chrome will remain dumb enough for us to be considered worthy of operating it.
13. …And so did Gemini Canvas
As part of Gemini 2.5, Canvas – the so-called ‘creative space’ inside the Gemini app – has got a boost via the new, upgraded AI models in this version of Gemini.
This means Canvas is more capable and intuitive, with the tool able to take data and prompts and turn them into infographics, games, quizzes, web pages and more within minutes.
But the real kicker here is that Canvas can now take complex ideas and turn them into working code at speed and without the user needing to know specific coding languages; all they need to do is describe what they want in the text prompt.
Such capabilities open up the world of ‘vibe coding’, where you can create software without knowing any programming languages, and they also make it possible to prototype new app ideas at speed, through prompts alone.
The launch of the Nintendo Switch 2 is almost here, and it'll come with a handful of titles ready for gamers to dive into from day one. Luckily, we now have an early look at one in particular, from game developer CD Projekt Red.
In a YouTube video by Nintendo Life, CD Projekt Red's Cyberpunk 2077 is shown running on the Nintendo Switch 2, with visual quality that rivals PC handhelds like the Steam Deck. This is thanks to Nvidia's custom T239 chip, which lets the new handheld use DLSS to upscale from a lower internal resolution for better-than-native performance.
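Some quick pixel math shows why rendering at a lower internal resolution buys so much performance. The 720p figure below is purely illustrative, since neither Nintendo nor CD Projekt Red has confirmed the game's internal resolution:

```python
# Back-of-the-envelope pixel math for DLSS-style upscaling. The 720p internal
# resolution is an illustrative assumption, not a confirmed Switch 2 figure.
internal = 1280 * 720    # pixels actually rendered each frame
native   = 1920 * 1080   # pixels the display ultimately shows

print(f"Shading work per frame: {internal / native:.0%} of native")
# -> 44% of native, i.e. roughly 2.25x fewer pixels to render, with the
#    upscaler reconstructing the rest from motion vectors and prior frames.
```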
Considering earlier expectations based on hardware rumors (which turned out to be accurate), Cyberpunk 2077 has impressed many gamers with its lighting and environment details. However, it's worth noting that Nvidia's DLSS upscaling is rumored to be used quite aggressively, which is clear to see in some of the blurry sequences in the gameplay showcase.
This is to be expected, as the Switch 2 is already punching above its weight in running a game like Cyberpunk 2077. But there are still very evident performance dips, particularly during vehicle traversal, which highlights a potential issue: if DLSS is indeed used aggressively and performance is still not up to par, with dips into what looks like the upper 20s in fps, is it really that impressive after all?
PlayStation 5 on the left, Nintendo Switch 2 on the right... (Image credit: Nintendo Life)
It's worth noting that it isn't exactly clear which segments of the gameplay are docked and which are handheld (likely the former, considering the 4K video quality), and there will be a choice between quality and performance modes.
This is also still a work in progress and will likely be drastically different from the launch version, but it will be interesting to see how this fares against the MSI Claw 8 AI+ – which delivers great visuals and performance playing Cyberpunk 2077 – along with other upcoming handhelds like the recently-announced MSI Claw A8 using AMD's Ryzen Z2 Extreme.
Analysis: It's better than I expected, but it doesn't warrant the Switch 2's cost
Now, before you say I have a Nintendo Switch 2 agenda, I do think games like Cyberpunk 2077 have the potential to further exceed their performance and visual expectations on the device. Despite that, the handheld's $449.99 / £395.99 / AU$699.95 price has me asking a basic question: wouldn't it be better to buy a PS5 or Xbox Series X at around the same price, for a better experience?
I could go into the handheld PC comparisons, and the Claw 8 AI+'s processing power, but I'd hate to sound like a broken record. Spoiler alert: it's purported to be the better, more powerful device.
However, the simple fact here is that the Switch 2's Cyberpunk 2077 isn't in the same ballpark visually and performance-wise as either of Sony's or Microsoft's consoles. In that sense, the Switch 2's value as a gaming console rival is lost if it costs nearly the same and yet provides a worse experience.
Before you point out that the MSI Claw 8 AI+ costs more than the PS5 and Xbox Series X: it's not in the same category as a games console (it also doesn't come with a dock for extra performance), and its price relative to the Switch 2 is still warranted considering the power packed into such a compact device. If the Switch 2's price were much lower, I'd be far more impressed with Cyberpunk 2077's performance, but tariffs or not, that's not the case.
DLSS seems to be the one factor that will do the heavy lifting with the Switch 2, and I'd argue it's the one reason why its version of Cyberpunk 2077 can be compared to other handhelds using either XeSS or FSR (neither of which are on the level of Nvidia's DLSS). Even then, without tools like Frame Generation, it still leaves me unimpressed with the Switch 2, but I'll happily eat my words if I'm proven wrong with its capabilities.
A Widow's Game is a gripping new Netflix movie that's giving serious Gone Girl vibes, and I'm so excited to watch it when it arrives on one of the best streaming services.
Arriving on May 30, it definitely has the potential to be one of the best Netflix movies, as it's inspired by a very interesting case I hadn't heard of before known as "the black widow of Patraix."
The Spanish-language movie is directed by Carlos Sedes and written by the team that brought us the Netflix drama series The Asunta Case, which follows a couple who report their daughter missing, unraveling the truth about a seemingly picture-perfect family.
Check out the new trailer below.
What is the plot of A Widow's Game?
Set in 2017, the movie opens with the body of a man found in a parking lot. He's been stabbed seven times, and the authorities believe all signs point to a crime of passion. With a veteran inspector heading up the case, the investigation soon leads to a suspect no one expected: Maje, the young widow who had been married to the victim for less than a year.
The cast is led by Pan's Labyrinth star Ivana Baquero, who plays Maje, and Criminal's Carmen Machi, who is Eva, the case inspector. The cast also includes Tristán Ulloa, Joel Sánchez, Álex Gadea, Pablo Molinero, Pepe Ocio, Ramón Ródenas, Amparo Fernández and Miquel Mars.
I love a good crime drama and I'm very excited to see this one unfold and how the titular widow is brought to justice. If she is, of course!
Three spouseware apps – Cocospy, Spyic, and Spyzie – have gone dark. The apps, which are all essentially clones of one another, are no longer working. Their websites are gone, and their cloud storage, hosted on Amazon, has been deleted.
The news was broken by TechCrunch earlier this week, which said that the reason behind the disappearance is not obvious, but that it could be linked to data breaches that happened earlier this year.
“Consumer-grade phone surveillance operations are known to shut down (or rebrand entirely) following a hack or data breach, typically in an effort to escape legal and reputational fallout,” the publication wrote.
The grey zone
“LetMeSpy, a spyware developed out of Poland, confirmed its ‘permanent shutdown’ in August 2023 after a data breach wiped out the developer’s servers. U.S.-based spyware maker pcTattletale went out of business and shut down in May 2024 following a hack and website defacement,” the publication added.
Spouseware, or spyware, is a type of application that operates in the grey zone. It is advertised as legitimate software for keeping track of minors, people with special needs, and the like, but most of the time it is just a cover for illegal activities, such as spying on other members of the household or love interests.
Given its nature, the development team and key people are usually hidden, which makes it difficult for members of the media to get a comment or a statement.
In late February this year, two of the apps – Cocospy and Spyic – were found exposing user data: email addresses, text messages, call logs, photographs, and other sensitive information. Furthermore, researchers were able to exfiltrate 1.81 million email addresses used to register with Cocospy, and roughly 880,000 used for Spyic. Besides email addresses, the researchers managed to access most of the data harvested by the apps, including pictures, messages, and call logs.
Just a week later, similar news broke about Spyzie. The app was found leaking email addresses, text messages, call logs, photographs, and other sensitive data belonging to millions of people who, without their knowledge or consent, had these apps installed on their devices. The people who installed the apps – in most cases partners, parents, or significant others – also had their email addresses exposed in the same manner.
Via TechCrunch
Google Search is under pressure – not only are many of us replacing it with the likes of ChatGPT Search, but Google's attempts to stave off the competition with features like AI Overviews have also backfired due to some worrying inaccuracies.
That's why Google has just given Search its biggest overhaul in over 25 years at Google I/O 2025. The era of the 'ten blue links' is coming to a close, with Google now giving its AI Mode (previously stashed away in its Labs experiments) a wider rollout in the US.
AI Mode was far from the only Search news at this year's I/O – so if you've been wondering what the next 25 years of 'Googling' look like, here are all of the new Search features Google's just announced.
A word of warning: beyond AI Mode, many of the features will only be available to Labs testers in the US – so if you want to be among the first to try them "in the coming weeks", turn on the AI Mode experiment in Labs.
1. AI Mode in Search is rolling out to everyone in the US
Yes, Google has just taken the stabilizers off its AI Mode for Search – which was previously only available in Labs to early testers – and rolled it out to everyone in the US. There's no word yet on when it's coming to other regions.
Google says that "over the coming weeks" (which sounds worryingly vague) you'll see AI Mode appear as a new tab in Google Search on the web (and in the search bar in the Google app).
We've already tried out AI Mode and concluded that "it might be the end of Search as we know it", and Google says it's been refining it since then – the new version is apparently powered by a custom version of Gemini 2.5.
2. Google also has a new 'Deep Search' AI Mode
A lot of AI chatbots – including ChatGPT and Perplexity – now offer a Deep Research mode for longer research projects that require a bit more than a quick Google. Well, now Google has its own equivalent for Search called, yes, 'Deep Search'.
Available in Labs "in the coming months" (always the vaguest of release windows), Deep Search is a feature within AI Mode that's based on the same "query fan-out" technique as that broader mode, but according to Google takes it to the "next level".
In reality, that should mean an "expert-level, fully-cited report" (Google says) in only a few minutes, which sounds like a big time-saver – as long as the accuracy is a bit better than Google's AI Overviews.
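Google hasn't spelled out the 'query fan-out' technique beyond the name, but the general idea is straightforward: break one question into many sub-queries, search them concurrently, and merge the cited results. Here's a rough sketch, where expand_query and run_search are hypothetical stand-ins rather than Google APIs:

```python
# Conceptual sketch of a "query fan-out": one question becomes many
# sub-queries whose results are merged, keeping a citation for each source.
# expand_query and run_search are hypothetical stand-ins, not Google APIs.
from concurrent.futures import ThreadPoolExecutor

def deep_search(question: str, expand_query, run_search) -> list[dict]:
    sub_queries = expand_query(question)        # e.g. an LLM proposes angles
    with ThreadPoolExecutor() as pool:
        result_sets = list(pool.map(run_search, sub_queries))
    # Flatten and de-duplicate by URL so every claim keeps its citation.
    seen, merged = set(), []
    for results in result_sets:
        for r in results:                       # r: {"url": ..., "snippet": ...}
            if r["url"] not in seen:
                seen.add(r["url"])
                merged.append(r)
    return merged
```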
3. Search Live lets you quiz Google with your camera
Google already lets you ask questions about the world with Google Lens, and demoed its Project Astra universal assistant at Google I/O 2024. Well, now it's folding Astra into Google Search so you can ask questions in real-time using your smartphone's camera.
'Search Live' is another Labs feature and will be marked by a 'Live' icon in Google's AI Mode or in Google Lens. Tap it and you'll be able to point your camera and have a back-and-forth chat with Google about what's in front of you, while getting links sent to you with more info.
The idea sounds good in theory, but we're yet to try it out beyond its prototype incarnation last year, and the multimodal AI project is cloud-based, so your mileage may vary depending on where you're using it. But we're excited to see how far it's come in the last year or so with this new Labs version in Search.
4. AI Overviews are going global
We're not exactly wild about AI Overviews, the little AI-generated paragraphs you often see at the top of your search results. They're sometimes inaccurate and have resulted in some infamous clangers, like recommending that people add glue to their pizzas. But Google is ploughing ahead with them and announced that AI Overviews are getting a wider rollout.
The new expansion means the feature will be available in more than 200 countries and territories and more than 40 languages worldwide. In other words, this is the new normal for Google Search, so we'd better get used to it.
Google's Liz Reid (VP, Head of Search) acknowledged in a press briefing before Google I/O 2025 that AI Overviews have been a learning experience, but claims they've improved since those early incidents.
"Many of you may have seen that a set of issues came up last year, although they were very much education and quite rare, we also still took them very, very seriously and made a lot of improvements since then", she said.
5. Google Search will soon be your ticket-buying agent
Finding and buying tickets is still something of a painful experience in Google Search. Fortunately, Google is promising a new mode that's powered by Project Mariner, an AI agent that can surf the web just like a human and complete tasks.
Rather than a separate feature, this will apparently live within AI Mode and kick in when you ask questions like "Find two affordable tickets for this Saturday's Reds game in the lower level".
This will see it scurry off and analyze hundreds of ticket options with real-time pricing. It can also fill in forms, leaving you with the simple task of hitting the 'purchase' button (in theory, at least).
The only downside is that this is another of Google's Lab projects that will launch "in the coming months", so who knows when we'll actually see it in action.
6. Google Shopping is getting an AI makeover
Google gave its Shopping tab within Google Search a big refresh back in October 2024, and now many of those features are getting another boost thanks to some new integration with AI Mode.
The 'virtual try-on' feature (which now lets you upload a photo of yourself to see how new clothing might look on you, rather than on models) is back again, but the biggest new addition is an AI-powered checkout that tracks prices for you, then buys things on your behalf using Google Pay when the price is right (with your confirmation, of course).
We're not sure this is going to help cure our gear-acquisition syndrome, but it does also have some time-saving (and savings-wrecking) potential.
7. Google Search is getting even more personalized (if you want it to)
Like traditional Search, Google's new AI Mode will offer suggestions based on your previous searches, but you can also make it a lot more personalized. Google says you'll be able to connect it to some of its other services, most notably Gmail, to help it answer your queries with a more tailored, personal touch.
One example Google gave was asking AI Mode for "things to do in Nashville this weekend with friends". If you've plugged it into other Google services, it could use your previous restaurant bookings and searches to lean the results towards restaurants with outdoor seating.
There are obvious issues here – for many, this may be a privacy invasion too far, so they'll likely not opt into connecting it to other services. Also, these 'personal context' powers sound like they have the 'echo chamber' problem of assuming you always want to repeat your previous preferences.
Still, it could be another handy evolution of Search for some, and Google says you can always manage your personalization settings at any time.
Google clearly wants to inject artificial intelligence into more creative tools, as evidenced by the introduction of Flow at today’s Google I/O 2025.
Flow is the search giant’s new ‘AI filmmaking tool’, which uses Google’s AI models – such as Veo, Imagen, and Gemini – to help creative types explore storytelling ideas in movies and videos without needing to go out and film cinematic scenes, or sketch out lots of storyboards by hand.
Effectively an extension of the experimental Google Labs VideoFX tool launched last year, Flow lets users type prompts in natural, everyday language to create scenes, such as “astronauts walk out of the museum on a bridge”, and the AI tech behind Flow will generate that scene.
Flow lets filmmakers bring their own assets into it, from which characters and other images can be created. Once a subject or scene is created, it can be integrated into clips and scenes in a fashion that’s consistent with the video or film as a whole.
There are other controls beyond the creation of assets and scenes, with Flow offering direct manipulation of camera angles, perspectives and motion; easy editing of scenes to home in on features or widen a shot to include more action (this appears to work as simply as a cropping tool); and the ability to manage all the ‘ingredients’ and prompts for Flow.
Flow will be available for subscribers to the Google AI Pro and Google AI Ultra plans in the US, with more countries slated to get access to the filmmaking AI soon.
AI-made movies?
Google Flow in action (Image credit: Google Flow)
From videos of Flow in action, it appears to be a powerful tool for bringing an idea into visual form, and with surprising realism. Being driven by natural-language prompts means budding filmmakers can create shots and scenes that in the past would have required dedicated sets, or at least some deft CGI work.
In effect, Flow could be one of those AI tools that opens up the world of cinema to a wider range of creatives, or at least gives amateurs more powerful creative tools to bring their ideas to life.
However, this does raise the question of whether Flow will be used to create storytelling ideas that are then brought to silver-screen life via physical sets, actors, and dedicated cinema CGI, or whether it will be used to create whole movies with AI – effectively letting directors be the sole producers of films, bypassing the need for actors, camera operators, and the wealth of crew that are integral to traditional movie-making.
As such, AI-powered tools like Flow could breathe new life into the world of cinema that one might argue has got a little stale, at least on the big production commercial side, and at the same time disrupt the roles and work required in the movie-making industry.
Google just announced that its AI voice assistant, Gemini Live, is now available for free on iOS and Android.
Gemini Live has been available to paid subscribers for a while now, but you can now chat with AI, use your smartphone's camera to show it things, and even screen share without spending any money.
The major announcement happened at Google I/O, the company's flagship software event. This year, Google I/O has focused heavily on Gemini and the announcement of AI Mode rolling out to all US Google Search users.
Gemini Live is one of the best AI tools on the market, competing with ChatGPT Advanced Voice Mode. Where Gemini Live thrives is in its ability to interact with what you see on screen and in real life.
Before today, you needed an Android device to access Live's camera, but now that has all changed, and iPhone users can experience the best that Gemini has to offer.
Google says the rollout will begin today, with all iOS users being able to access Gemini Live and screen sharing over the following weeks.
More Gemini Live integration in your daily life
Free access and the iOS rollout weren't the only Gemini Live features announced at Google I/O. In fact, the new functionality for the voice assistant could be the headline addition.
Over the coming weeks, Google says Gemini Live will "integrate more deeply into your daily life." Whether that's by adding events to your Google Calendar, accessing Google Maps, or interacting with more of the Google ecosystem, Gemini Live is going to become an essential part of how AI interacts with your device.
While Google didn't say if this functionality will be available on iOS, it's safe to assume that, for now, increased system integration will be limited to Android.
Gemini Live's free rollout, along with its upgrades, is one of, if not the, best announcements of Google I/O, and I can't wait to see how it improves over the next few months.
How to use Gemini Live
Accessing Gemini Live is simple: you just need the Gemini app on iOS or Android.
AI video generation tools such as Sora and Pika can create alarmingly realistic bits of video, and with enough effort, you can tie those clips together to create a short film. One thing they can't do, though, is simultaneously generate audio. Google's new Veo 3 model can, and that could be a game changer.
Announced on Tuesday at Google I/O 2025, Veo 3 is the third generation of the powerful Gemini video generation model. With the right prompt, it can produce videos that include sound effects, background noises, and, yes, dialogue.
Google briefly demonstrated this capability for the video model. The clip was a CGI-grade animation of some animals talking in a forest. The sound and video were in perfect sync.
If the demo can be converted into real-world use, this represents a remarkable tipping point in the AI content generation space.
"We’re emerging from the silent era of video generation," said Google DeepMind CEO Demis Hassabis in a press call.
Lights, camera, audio
He isn't wrong. Thus far, no other AI video generation model can simultaneously deliver synchronized audio, or audio of any kind, to accompany video output.
It's still not clear whether Veo 3 – which, like its predecessor Veo 2, should be able to output 4K video – surpasses current video generation leader OpenAI's Sora in the video quality department. Google has, in the past, claimed that Veo 2 is adept at producing realistic and consistent movement.
Regardless, outputting what appears to be fully produced video clips (video and audio) may instantly make Veo a more attractive platform.
It's not just that Veo 3 can handle dialogue. In the world of film and TV, background noises and sound effects are often the work of Foley artists. Now, imagine if all you need to do is describe to Veo the sounds you want behind and attached to the action, and it outputs it all, including the video and dialogue. This is work that takes animators weeks or months to do.
In a release on the new model, Google suggests you tell the AI "a short story in your prompt, and the model gives you back a clip that brings it to life."
If Veo 3 can follow prompts and output minutes or, ultimately, hours of consistent video and audio, it won't be long before we're viewing the first animated feature generated entirely through Veo.
Veo is live today and available in the US as part of the new Ultra tier ($249.99 a month) in the Gemini App and also as part of the new Flow tool.
Google also announced a few updates to its Veo 2 video generation model, including the ability to generate video based on reference objects you provide, camera controls, outpainting to convert from portrait to landscape, and object add and erase.