
TechRadar News

All the latest content from the TechRadar team

Skullcandy's won me over with the Bose-tuned Method 360 ANC, but one design choice baffles me

Tue, 05/20/2025 - 20:21

Headphone maker Skullcandy holds a soft spot in my heart. It was my go-to brand for wired earbuds when I was a teenager (this was long before the best wireless earbuds dominated the audio market) because they were affordable, available in a range of funky colors and, at the time at least, I thought they sounded great.

When I entered the world of tech journalism in my early 20s, I was exposed to a plethora of new brands and I started earning adult money. This meant I found myself drifting away from the company that I’d long considered ‘budget’ and ‘the headphones to get if you don’t mind them getting damaged’ – instead investing in more premium offerings from the likes of Sennheiser and Bose.

So when Skullcandy came back onto my radar with the announcement of the Method 360 ANC earbuds, proudly stating they’d been designed and tuned in collaboration with none other than Bose, I was hit with a wave of nostalgia – a 20-year gap can count as nostalgic, right?

Not only that, but I also wondered if the maker of my first go-to headphones could reclaim its title and knock my current favorite LG ToneFree T90S – which themselves replaced the Apple AirPods Pro 2 – out of my ears.

The quick answer? They’ve come awfully close, but their large charging case has made my final decision much trickier than I expected.

Sounding sweet

From a sound quality and ANC perspective, Skullcandy’s collaboration with Bose is an absolute hit. Bose has long been a front runner when it comes to audio performance, but is arguably best known for making some of the best noise-cancelling headphones in recent memory.

I’m inclined to believe that Bose has had free rein with the audio and ANC smarts for the Method 360 ANC, because from the moment I put the buds into my ears in the office, everything around me was silenced. Colleagues talking to each other near my desk, the office speaker blaring out questionable songs – it all disappeared.

(Image credit: Future)

My trusted LG earbuds perform similarly, but they require the volume to be increased a little further for similar noise-cancelling effects. And when nothing is playing, the Skullcandy earbuds do a better job of keeping external sounds to a minimum.

It took me a little longer to formulate a definitive opinion on the sound quality, partly because of the design (more on that later) and partly because I’d become so accustomed to the sound of the LG ToneFree T90S.

After inserting and removing both pairs from my ears more times than I can remember, I settled on the notion that Skullcandy’s latest effort sounds more engaging, a little clearer on the vocals and just simply fun.

The increasing intensity of the violins at the start of Massive Attack’s Unfinished Sympathy reveals the earbuds to be dynamically adept, and they show off their rhythmic talents when playing the deliciously upbeat Bread by Sofi Tukker.

Plus, despite not supporting spatial audio (I have to agree with my colleague Matt Bolton when he says the company “blew the perfectly good name it could've given the next-gen version where it did add this feature”), the earbuds do give songs some sense of space. I was able to confidently place the hi-hat sounds, hums and drum beats in the opening of Hayley Williams’ Simmer around my head, for example.

Overall, I’m very impressed with the sound performance of the Skullcandy buds, especially considering their $129.99 / £99 / AU$189.99 price tag, which places them firmly in affordable territory.

It’s not unreasonable to expect limitations where sound or features are concerned at certain price points, but I think the Method 360 ANC delivers a sound that belies their price tag.

You'll be able to read our full thoughts on the Skullcandy Method 360 ANC in our full review, which is on its way.

The peculiar case of the peculiar case

“If you like how they sound, how come the Skullcandy Method 360 ANC aren’t your new daily pair of earbuds?” I hear you ask. Well, it’s predominantly because of their case, but also a little to do with a design choice inherited from Bose.

When I first saw images of the case following their announcement, I was a little perplexed. All of the wireless earbuds I’m aware of come with a case that can easily be slipped into a pocket – apart from the Beats Powerbeats Pro 2 that is – yet the one supplied by Skullcandy looked enormous.

(Image credit: Future)

Now I’ve unboxed my own pair, I can confirm the case is pretty damn big. Not heavy, just big.

It’s an interesting choice, especially since the earbuds themselves don’t take up that much space. I’m also not sold on the fact you have to slide the internal section of the case out to access the buds.

What’s more, I feel the most logical way to hold the case is horizontally when sliding the earbuds out, with the carabiner clip on the right as it feels more weighted and natural in my right hand.

Doing so reveals the earbud for the right ear, with the left earbud on the other side. That means I have to pick out the right earbud with my left hand, then flip the case over and do the opposite for the left bud.

And what’s even more confusing is the earbuds appear to fit into their charging spots upside down. So not only do I have to pass each bud from the ‘wrong’ hand to the right one, but I also have to flip them around the right way. There are just too many steps involved for what has always been a seamless and convenient process with other earbuds.

What’s also interesting is that, since the Method’s launch, I’ve noticed a second, more affordable pair of earbuds appear on Skullcandy’s website called the Dime Evo. They employ a similar sliding-case design, but both earbuds are on the same side, which I can only assume will make the removal process that little bit easier.

(Image credit: Future)

Based on Skullcandy’s imagery for the Method 360 ANC, it’s targeting a young, cool demographic who walk around with sling bags over their shoulder, upon which they can attach the earbuds via the integrated carabiner clip.

As much as I would love to say I fall into that group, the fact is I don’t – well, not anymore. And because I’m not someone who wants to clip their headphones onto even the belt loop of their pants, the case design is completely lost on me.

I would further argue that the target audience is a little niche, too, which is a shame considering how good I think the earbuds sound. I’m saddened for Skullcandy that not enough pairs of ears are going to get to hear them.

Hey, I did say it was predominantly the case I had issues with.

(Image credit: Future)

As for the aforementioned inherited design trait – that would be the Stability Bands found on the Bose QuietComfort Ultra Earbuds and QuietComfort Earbuds II.

Despite being designed to provide a more stable and secure fit, they initially didn’t inspire much confidence. I often found myself readjusting them in my ears to make sure they were locked in, which also meant I pressed the on-ear controls at the same time and paused my music.

It’s not just me who’s had an issue with them – my colleague and self-confessed Bose fangirl, Sharmishta Sarkar, has previously written about the design’s shortcomings too.

I eventually settled on the largest size of Stability Band (which I could only determine by sight, as there’s no indication of which size is which on the included book of spares) and so far, so good. They definitely feel more secure in my ears compared to when I tried other sizes, and passive noise cancellation has also improved.

However, the design choice has confirmed I get along best with earbud designs that insert further into my ear canal.

Awarding cool points

(Image credit: Future)

I like the Skullcandy Method 360 ANC earbuds. I can’t say I like the design of the case, nor do I like their mouthful of a name (Skullcandy Method would have been just fine in my opinion), but considering the biggest selling point for a pair of earbuds is how they sound, I can find little to fault.

I will most likely use them whenever I’m in the office, as I can leave the case on the desk with the skull logo facing me directly. While I might not feel cool enough to clip the case to my person, that logo alone takes me back to my teenage years. For me, that’s cool enough.

Categories: Technology

Acer partner unveils Ryzen 9 laptop with a 5070 Ti GPU which will get creators excited, but I just hope it is affordable

Tue, 05/20/2025 - 18:04
  • Chuwi GameBook promises elite performance at a price lower than premium gaming laptops
  • Ryzen 9 9955HX with 32 threads powers this gaming and creator machine
  • RTX 5070 Ti with 12GB GDDR7 makes this a serious contender for 4K gaming

Chuwi, a company better known for budget devices than flagship powerhouses, has unveiled its latest effort to break into the high-performance segment: the GameBook 9955HX.

Promoted as a laptop for coders, gamers, and professional creators, this new model is powered by the AMD Ryzen 9 9955HX processor, a Zen 5-based chip featuring 16 cores and 32 threads, with a boost frequency of up to 5.4GHz. It also includes a large 64MB L3 cache and a configurable TDP that can peak around 55W.

At the time of writing, the device's price remains undisclosed - however, given Chuwi’s history of undercutting bigger brands, it’s reasonable to expect this model to be priced lower than similar offerings from MSI or Asus.

Chuwi GameBook 9955HX

For graphics, the GameBook 9955HX integrates the Nvidia GeForce RTX 5070 Ti Laptop GPU, based on the latest Blackwell RTX architecture, making it well-suited for video editing and graphics-intensive tasks.

The GPU offers 12GB of GDDR7 VRAM, a 140W TGP, and supports features such as full ray tracing, DLSS 4, and Multi Frame Generation.

Chuwi says this setup can deliver up to 191 FPS in 1440p gaming with ray tracing enabled, and 149 FPS at 4K, placing it firmly in the performance laptop category.

For creators working with AI-accelerated tools, advanced 3D rendering, or video post-production, this could prove to be a top contender, provided its cooling system and thermal management are up to the task.

The display is a 16-inch 2.5K IPS panel with a 300Hz refresh rate, 100% sRGB color coverage, and a 16:10 aspect ratio. Peak brightness reaches 500 nits, though claims regarding color accuracy have yet to be verified through independent calibration tests.

Internally, the GameBook comes equipped with 32GB of DDR5 RAM at 5600MHz, upgradeable to 64GB, and a 1TB PCIe 4.0 SSD. Storage expansion is supported via two M.2 slots, one of which supports PCIe 5.0, offering a level of future-proofing not typically seen in Chuwi’s lineup.

Connectivity includes Wi-Fi 6E, Bluetooth 5.2, a 2.5Gb Ethernet port, two USB-C ports (supporting 100W and 140W power delivery), three USB-A 3.2 Gen 1 ports, HDMI 2.1, and Mini DisplayPort 2.1a. There's also a 3.5mm audio jack, DC-in, and a Kensington lock slot.

Other features include a full-sized RGB-backlit keyboard, a 2MP IR webcam with a privacy shutter, a 77.77Wh battery, and stereo speakers. The laptop measures just over 21mm thick and weighs 2.3kg.


Dell CEO tells us how AI can make us “more effective as a species” - “there are a lot of things we don’t do, that we used to do, because we now have the tools”

Tue, 05/20/2025 - 16:44
  • Dell CEO lays out view of the future of AI
  • AI will never replace human workers, but will aid them instead, Michael Dell says
  • Companies should see AI as a great way to reinvent themselves too

The CEO of Dell Technologies has told TechRadar Pro that AI offers a great opportunity for organizations to re-evaluate themselves to positive effect.

Speaking at a media Q&A session at Dell Technologies World 2025, Michael Dell looked to reassure us that AI will never fully replace human workers, and in fact may offer them a whole new outlook.

In a wide-ranging discussion, Dell also laid out his views on political instability affecting the technology industry, and some of his key leadership principles.

"Always some change"

“The way I think about this is that if you look at every progress, that’s for any technology, you always have some change that goes on,” Dell said in response to our question about AI affecting or even replacing human workers.

“My way of thinking is there’s probably a 10 percent effect for that - but I think 90 percent of that is actually growth and expansion and opportunity, and ultimately what I think you’re going to see is more opportunities, more economic growth.”

“There are a lot of things that we don’t do, that we used to do, because we have the tools, and we’re more effective as a species because of that - (using AI) is just another example of that.”

“One of the keys beyond productivity and efficiency I think for organizations, is to reimagine themselves, and say, alright, what is the trajectory of these capabilities, where is it going, and what should our activity look like in three years, five years time, given this capability.”

“You know, a lot of roles today just didn’t exist 10, 20, 30 years ago - and no-one was forecasting that.”

(Image credit: Future / Mike Moore)

Having spoken with Nvidia CEO Jensen Huang in his opening keynote, Dell was also asked if the two shared any overarching leadership principles.

“I think anytime there’s a new technology, you have to leap ahead (and think), what is the likely impact of this, and how do we need to change? And if we don’t have a passion around that, or there isn’t a crisis in your organization - make one! We think it can make us a better company.”

Dell was also asked about how changing global economic and political situations might affect the company’s future outlook.

“We agree that those are issues and challenges,” he said, “in my general view, the importance of this technology is greater than all those problems - and I heard somebody say recently, tokens are bigger than tariffs - and that would sort of summarize our view of it.”

“Are all those things helpful to our business? No, they’re not - but there’s a limit of what we can do about that, right? We can certainly do the things we’re supposed to do, and focus on the things we can control - we’re seeing plenty of companies that are dealing with all those challenges just as we are, and powering ahead in any case.”


‘It will last you from the moment you wake up to the moment you go to bed’: Samsung doesn’t want you to worry about the Galaxy S25 Edge’s battery life

Tue, 05/20/2025 - 15:01

The Samsung Galaxy S25 Edge is earning plaudits aplenty for its stunning titanium design and improbable thinness (not least from us here at TechRadar), but the phone’s smaller-than-hoped-for battery continues to raise eyebrows.

At 3,900mAh, the cell in the Galaxy S25 Edge is only a smidgen smaller than the one in the standard Galaxy S25, but in Samsung’s new phone, that same battery has to power a much larger 6.7-inch display.

Understandably, that’s led to question marks over the Edge’s endurance, but Samsung is confident that its new handset will provide more than enough battery life for all but the most hardcore users.

In an exclusive interview with TechRadar, Kadesh Beckford, Smartphone Product Specialist at Samsung MX, played down the idea that the Galaxy S25 Edge compromises on battery life to deliver a more aesthetically pleasing design.

“Even though we’ve made this device incredibly thin, we’ve tried to ensure that customers have [suitable] battery life available to them based upon their needs,” Beckford explained. “With 24 hours of video playback time and an all-day battery, [the Edge] is going to last you literally from the moment you wake up to the moment you go to bed. So, we're actually giving [consumers] what they need [in terms of battery life].

“And also,” Beckford continued, “with the lithium graphite technology and the thermal interface material in there, it keeps the device cool, so pretty much no matter what you're doing on your phone, [it’ll] last all day long and then some.”

(Image credit: Future)

‘All day long and then some’ is a bold claim for a 6.7-inch handset with a 3,900mAh cell – and it’s one we’re currently putting to the test for our full Samsung Galaxy S25 Edge review – but Beckford says his enthusiasm for the phone’s endurance is based on his own real-world experience with the device ahead of its official launch.

“I’ve played Genshin Impact on the Galaxy S25 Edge, I’ve played PUBG,” he explained. “It moves so smoothly, it’s unbelievable, and the phone has lasted me all day – sometimes into the following day as well. I’ve seen those elements [in action].

“Do also remember that, traditionally, phones at this level of thinness don’t support wireless charging. With the custom Snapdragon 8 Elite chipset, we’ve still been able to include a thermal interface and wireless charging, with wireless power share as well, so I can even charge up my Galaxy Buds [using the Galaxy S25 Edge]. That there is real innovation.”


Beckford concluded: “You’ve also got the ability to add a Qi2 case [to the Galaxy S25 Edge] for convenience at home or in your car. So, you can easily connect it up, and your device will last all day.”

Of course, being compatible with convenient charging methods isn’t the same as offering good battery life outright, but Beckford’s point around real-world practicality stands. For the majority of users, the Galaxy S25 Edge will deliver all-day battery life, and the phone’s wireless charging and Qi2 compatibility should ensure that it can be charged anytime, anywhere if you do find yourself wanting for juice.

The Galaxy S25 Edge supports 25W wired, 15W wireless, and 4.5W reverse wireless charging (Image credit: Future)

Just how quickly the Galaxy S25 Edge can be charged to 100% is another matter entirely. The phone supports 25W wired, 15W wireless, and 4.5W reverse wireless charging – that’s comparable to the standard Galaxy S25 but a way off the Galaxy S25 Plus and S25 Ultra, which both support 45W wired charging.
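For a rough sense of what those wattages mean in practice, a back-of-envelope calculation is easy to sketch. The figures below assume a typical ~3.85V nominal cell voltage and ~85% charging efficiency (neither figure confirmed by Samsung), and they ignore the tapering real chargers apply near full, so treat the results as optimistic lower bounds:

```python
# Back-of-envelope charge-time estimate for the Galaxy S25 Edge's 3,900mAh cell.
# Assumptions (not from Samsung): ~3.85V nominal cell voltage and ~85% charging
# efficiency, typical figures for modern smartphone batteries.

def charge_time_hours(capacity_mah, charge_watts, nominal_v=3.85, efficiency=0.85):
    """Rough hours to charge an empty cell at a constant power draw."""
    energy_wh = (capacity_mah / 1000) * nominal_v   # cell energy in watt-hours
    return energy_wh / (charge_watts * efficiency)  # hours at the given wattage

for label, watts in [("25W wired", 25), ("15W wireless", 15)]:
    print(f"{label}: ~{charge_time_hours(3900, watts):.1f} h")
```

Under those assumptions, 25W wired works out to roughly 40-45 minutes and 15W wireless to a little over an hour; real-world times will be longer once charge tapering kicks in near 100%.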

Whichever way you look at it, then, you will be sacrificing some endurance by choosing the Galaxy S25 Edge over one of the best Samsung phones. But, as Samsung suggests, that downgrade isn’t likely to feel dramatic for those who already charge their smartphone on a daily basis. Check out our soon-to-be-published Galaxy S25 Edge review for our own verdict on the matter.


Google’s Android XR glasses look like its most exciting gadget in years – but the headset leaves me wanting more

Tue, 05/20/2025 - 15:01
  • Android XR has been showcased by Google at I/O 2025
  • It relies heavily on AI to deliver many of its features, like live translation
  • The headset looks interesting, but the glasses are the true star

Google has finally taken the lid off Android XR at Google I/O 2025 to show us what the operating system will be capable of when it launches on Android XR headsets and glasses later this year.

We didn’t see quite everything I was hoping we would, but we did learn what Google’s silver bullet in XR will be: Google Gemini. Its AI-centric approach was demoed across both hardware types – AR glasses and mixed-reality headsets.

Starting with the latter, Google gave a public version of the Project Moohan headset demonstrations it has been running privately for media and tech experts. The demo highlights some standard headset advantages, like the benefits of an immersive screen for multi-tasking – we were shown a user accessing YouTube, Google Maps, and a travel blog as they research a location they’re planning to visit.

Then, in one impressive moment, that user asks Gemini if it can “take me to Florence.” Gemini obliges by opening Google’s immersive 3D map and gives them a bird’s-eye view of the city and some landmarks (including the iconic Cathedral of Santa Maria del Fiore, which Assassin’s Creed 2 players will be very familiar with).

(Image credit: Lance Ulanoff / Future)

Then there’s the glasses. Across a few different scenarios Google highlighted how Android XR specs can make your life easier with hands-free controls and a head-up display.

You can draft and send messages to your contacts, access live translation with on-screen subtitles, search for Google Maps recommendations and then get directions to a location, and take pictures using the glasses’ camera and see a preview of the shot right away.

It’s reminiscent of what Meta’s Ray-Ban glasses are capable of, and it’s exactly what we've been expecting Meta’s rumored smart glasses with a display will be capable of if (read: when) they’re showcased later this year.

(Image credit: Google)

I was expecting more from Google in the headset department, frankly.

Android XR certainly seems neat, but I’ve yet to see a reason why it’s better than – or, to an extent, on a par with – the competition (*cough* Meta Quest 3 *cough*). However, I’m at least a little hopeful that by the time Project Moohan is ready to launch for consumers (with it again only being teased for release “later this year”) some of Android XR’s letdowns will be addressed.

For now, the headset certainly seems like it’s playing second fiddle to the true Android XR star: the glasses.

(Image credit: Google)

Not only are most of the showcased Android XR features (which look very useful) made for a device you’d wear all the time, but I’m also surprised by how much choice Google is already offering us in terms of hardware.

Samsung is working on Android XR tech, but so are Xreal with its Project Aura, and stylish eyewear brands Gentle Monster and Warby Parker. And Google’s promise of it “starting with” these brands suggests more partners are on the way.

This abundance of choice is fantastic for two key reasons.

First up, with more choice, prices will have to remain competitive. Meta’s display-equipped smart glasses are reportedly set to cost over $1,000, with insiders expecting a cost in the $1,300-$1,400 range (which would be around £1,000-£1,100 or AU$2,050-AU$2,200).

Meta's glasses are cool, but don't offer Android XR's variety yet (Image credit: Meta)

With more glasses options to choose from we may see prices drop to more affordable levels more quickly than if there were just one or two players in the game.

Second, glasses, like other fashion accessories, need to prioritize style to some degree. Utility is important, but if we’re expected to wear smart glasses all day, every day, then just like any other accessory they need to suit our identity.

By partnering with a range of different brands out of the gate – the aesthetics of Gentle Monster and Warby Parker are almost polar opposites of one another – Android XR tech should appeal to a wider audience than Meta’s Ray-Ban-only approach, because it will boast glasses designed to suit a wider range of fashion niches.

(Image credit: Google)

It’s still early days for Android XR, and there are crucial details we’re still missing, but Google has certainly come out swinging with its latest operating system.

I’ll be paying close attention to Google’s Android XR demos, and looking for more concrete information on the upcoming hardware. For now, though, Google certainly has me on the hook.


Huawei's new foldable laptop looks like it was ripped straight out of a Mission Impossible movie – this is the future

Tue, 05/20/2025 - 15:00
  • Huawei unveiled its new MateBook Fold Ultimate Design laptop during Computex 2025
  • It features an 18-inch OLED display, and a new foldable mechanism to rival competitors
  • It's only available in China now, but could come to global markets soon

Computex 2025 is here, and it was only a matter of time until one of the big tech companies at the expo revealed a new device that takes laptop design to the next level – and this one may be worth keeping tabs on.

On its website (translated from Chinese), Huawei announced the new MateBook Fold Ultimate Design laptop, featuring an 18-inch (when expanded) 3K OLED display running on Harmony OS 5. Notably, unlike other foldable laptops like the Asus Zenbook Duo, the MateBook Fold Ultimate Design unfolds into an entire single screen.

Essentially, this means that instead of what would be two screens connected via regular laptop hinges, it's a 'water-drop hinge' that allows it to open and close smoothly, and lie completely flat for an 18-inch screen experience. This mechanism is arguably a step above Microsoft's new Surface Pro, which acts as a tablet but is also a 2-in-1 when you use its keyboard (which is sold separately).

When in its laptop form, you'll have a 13-inch OLED screen at your disposal using its touchscreen keyboard (or the keyboard that's included in the package). However, the MateBook Fold Ultimate Design looks like more than just a 2-in-1 laptop; you'll be able to transform it from a regular laptop into a portable 18-inch display for casual viewing on the device in a matter of seconds.

Closing the laptop entirely gives it a classy and thin notebook or diary-style design, as if it's built as a disguise, further setting it apart in terms of its design from competitors. To add a cherry on top, it has 1,600 nits of peak brightness, and a 74.69WHr battery – and both features could easily stand alone as major selling points.

If I didn't know what it really was, you could easily tell me it's just a notepad... (Image credit: Huawei)

According to reliable tech analyst Ming-Chi Kuo, Huawei is planning a production target for the MateBook Fold of between 180,000 and 200,000 units, and its life cycle will primarily depend on user feedback regarding the software functionality.

It's an important factor to note, since the MateBook Fold is by no means an inexpensive laptop. It's currently only available in China, starting at ¥23,999, which converts to around $3,330 / £2,490 / AU$5,200, but its new features will be hard to turn down if you can afford it.

Analysis: If this is the future of laptop design, I'm here for it

(Image credit: Huawei)

I've played plenty of futuristic games like Cyberpunk 2077 and seen enough movies like Mission Impossible to suggest that Huawei's new laptop could be a game-changer. It isn't like other companies, such as Asus, haven't introduced similar devices – the difference is, none of them utilize the 'water-drop hinge' mechanism Huawei has introduced.

It simply makes the Asus Zenbook Duo and the Lenovo Yoga Book 9i look like clumsy setups that require a stand to stay upright and hinges for both screens. The MateBook Fold is the first foldable laptop I've seen that has caught my eye – if only I could afford it.

The only drawback here is that it will likely set you back thousands of dollars if it eventually launches globally. However, if it sells well enough and gains the traction that I anticipate, we could easily see Huawei's competitors and others follow suit soon – and that's exactly what I'm hoping for.


The 13 biggest announcements from Google I/O 2025

Tue, 05/20/2025 - 14:59

Want proof that Google really has gone all-in on AI? Then look no further than today's Google I/O 2025 keynote.

Forget Android, Pixel devices, Google Photos, Maps and all the other Google staples – none were anywhere to be seen. Instead, the full two-hour keynote spent its entire time taking us through Gemini, Veo, Flow, Beam, Astra, Imagen and a bunch of other tools to help you navigate the new AI landscape.

There was a lot to take in, but don't worry – we're here to give you the essential round-up of everything that got announced at Google's big party. Read on for the highlights.

1. Google Search got its biggest AI upgrade yet 

(Image credit: Google)

‘Googling’ is no longer the default in the ChatGPT era, so Google has responded. It’s launched its AI Mode for Search (previously just an experiment) to everyone in the US, and that’s just the start of its plans.

Within that new AI Mode tab, Google has built several new Labs tools that it hopes will stop us from jumping ship to ChatGPT and others.

A ‘Deep Search’ mode lets you set it working on longer research projects, while a new ticket-buying assistant (powered by Project Mariner) will help you score entry to your favourite events.

Unfortunately, the less popular AI Overviews is also getting a wider rollout, but one thing’s for sure: Google Search is going to look and feel very different from now on.

2. Google just made shopping more fun

Shopping online can go from easy to chaotic in moments, given the huge amount of brands, retailers, sellers and more – but Google is aiming to use AI to streamline the process.

That's because the aforementioned AI Mode for Search now offers a mode that will react to shopping-based prompts, such as ‘I’m looking for a cute purse’, and serve up products and images for inspiration, allowing users to narrow down large ranges of products – that is, if you live in the US, as the mode is rolling out there first.

The key new feature in the AI-powered shopping experience is a try-on mode that lets you upload a single image of yourself, from which Google’s combination of its Shopping Graph and Gemini AI models will then enable you to virtually try on clothes.

The only caveat here is the try-on feature is still in the experimental stage and you need to opt-in to the ‘Search Labs’ program to give it a go.

Once you have the product or outfit in mind, Google’s agentic checkout feature will basically buy the product on your behalf, using the payment and delivery details stored in Google Pay; that is, if the price meets your approval – as you can set the AI tech to track the cost of a particular product and only have it buy it if the price is right. Neat.

3. Beam could reinvent video calls

(Image credit: Google)

Video calls are the bane of many people's lives, particularly if you work in an office and spend 60% of your time in such calls. But Google's new Beam could make them a lot more interesting.

The idea here is to present calls in 3D, as if you're in the same room as someone when you're on a call with them; a bit like with VR. However, there's no need for a VR headset or glasses here, with Beam instead using cameras, mics, and – of course – AI to work its magic.

If that all sounds rather familiar, it's because Google has teased this before, under the name Project Starline. But this is no longer a faraway concept: it's here, and almost ready for people to use.

The caveat is that both callers will need to sit in a custom-made booth that can generate the 3D renders that are needed. But it's all pretty impressive nonetheless, and the first business customers will be able to get the kit from HP later in 2025.

4. Veo 3 just changed the game for AI video

AI video generation tools are already incredibly impressive, given they didn't even exist a year or two ago, but Google's new Veo 3 model looks set to take things to the next level.

As with the likes of Sora and Pika, the tool's third-generation version can create video clips and then tie them together to make longer films. But unlike those other tools, it can also generate audio at the same time – and expertly sync sound and vision together.

Nor is this capability limited to sound effects and background noises, because it can even handle dialogue – as demonstrated in the clip above, which Google demoed in its I/O 2025 keynote.

"We’re emerging from the silent era of video generation," said Google DeepMind CEO Demis Hassabis – and we're not going to argue with that.

5. Gemini Live is here – and it's free

(Image credit: Future)

Google Gemini Live, the search giant’s AI-powered voice assistant, is now available for free on both Android and iOS. Previously a paid-for option, this move opens up the AI to a wealth of users.

With Gemini Live, you can talk to the generative AI assistant using natural language, as well as use your phone camera to show it things from which it’ll extract information to serve up related data. Plus, the ability to share one’s phone screen and camera with other Android users via Gemini Live has now been extended to compatible iPhones.

Google will start rolling out Gemini Live for free from today, with iOS users being able to access the AI and its screen sharing features in the coming weeks.

6. Flow is an awesome new AI filmmaking tool

(Image credit: Google)

Here's one for all the budding movie directors out there: at I/O 2025, Google took the covers off Flow, an AI-powered tool for filmmakers that can create scenes, characters and other movie assets from a natural language text prompt.

Let’s say you want to see doctors perform an operation in the back of a 1970s taxi; well, pop that into Flow and it’ll generate the scene for you, using the Veo 3 model, with surprising realism.

Effectively an extension of the experimental Google Labs VideoFX tool launched last year, Flow will be available for subscribers to Google AI Pro and Google AI Ultra plans in the US, with more countries to come.

And it could be a tool that’ll let budding directors and cinematic video makers more effectively test scenes and storytelling, without needing to shoot a lot of clips.

Whether this will enhance filmmaking planning or yield a whole new era of cinema, where most scenes are created using generative AI rather than making use of sets and traditional CGI, has yet to be seen. But it looks like Flow could open up movie making to more than just keen amateurs and Hollywood directors.

7. Gemini's artistic abilities are now even more impressive

(Image credit: Google)

Gemini is already a pretty good choice for AI image generation; depending on who you ask, it's either slightly better or slightly worse than ChatGPT, but essentially in the same ballpark.

Well, now it might have moved ahead of its rival, thanks to a big upgrade to its Imagen model.

For starters, Imagen 4 brings with it a resolution boost, to 2K – meaning you'll be better able to zoom into and crop its images, or even print them out.

What's more, it'll also have "remarkable clarity in fine details like intricate fabrics, water droplets and animal fur, and excels in both photorealistic and abstract styles", Google says – and judging by the image above, that looks pretty spot on.

Finally, Imagen 4 will give Gemini improved abilities at spelling and typography, which has bizarrely remained one of the hardest puzzles for AI image generators to solve so far. It's available from today, so expect even more AI-generated memes in the very near future.

8. Gemini 2.5 Pro just got a groundbreaking new ‘Deep Think’ upgrade

(Image credit: Shutterstock/JLStock)

Enhanced image capabilities aren't the only upgrades coming to Gemini, either – it's also got a dose of extra brainpower with the addition of a new Deep Think Mode.

This basically augments Gemini 2.5 Pro with a function that means it’ll effectively think harder about queries put to it, rather than trying to kick out an answer as quickly as possible.

This means the latest pro version of Gemini will run multiple possible lines of reasoning in parallel, before deciding on how to respond to a query. You could think of it as the AI looking deeper into an encyclopaedia, rather than winging it when coming up with information.

There is a catch here, in that Google is only rolling out Deep Think Mode to trusted testers for now – but we wouldn't be surprised if it got a much wider release soon.

9. Gemini AI Ultra is Google’s new ‘VIP’ plan for AI obsessives

(Image credit: Shutterstock/Sadi Santos)

Would you spend $3,000 a year on a Gemini subscription? Google thinks some people will, because it's rolled out a new Gemini AI Ultra plan in the US that costs a whopping $250 a month.

The plan isn't aimed at casual AI users, obviously; Google says it offers "the highest usage limits and access to our most capable models and premium features" and that it'll be a must if "you're a filmmaker, developer, creative professional or simply demand the absolute best of Google AI with the highest level of access."

On the plus side, there's a 50% discount for the first three months, while the previously available Premium plan also sticks around for $19.99 a month, now renamed AI Pro. If you like the sound of AI Ultra, it will be available in more countries soon.

10. Google just showed us the future of smart glasses

(Image credit: Google)

Google finally gave us the Android XR showcase it has been teasing for years.

At its core is Google Gemini – on-glasses Gemini can find and direct you towards cafes based on your food preferences, perform live translation, and find answers to questions about things you can see. On a headset, it can use Google Maps to transport you all over the world.

Android XR is coming to devices from Samsung, Xreal, Warby Parker, and Gentle Monster, though there’s no word yet on when they’ll be in our hands.

11. Project Astra also got an upgrade

(Image credit: Future)

Project Astra is Google’s powerful mobile AI assistant that can react and respond to the user’s visual surroundings, and this year’s Google I/O has given it some serious upgrades.

We watched as Astra gave a user real-time advice to help him fix his bike, speaking in natural language. We also saw Astra argue against incorrect information as a user walked down the street mislabeling the things around her.

Project Astra is coming to both Android and iOS today, and its visual recognition function is also making its way to AI Mode in Google Search.

12. …As did Chrome

(Image credit: Future)

Is there anything that hasn’t been given an injection of Gemini’s AI smarts? Google’s Chrome browser was one of the few tools that hadn’t, it seems – but that’s now changed.

Gemini is now rolling out in Chrome for desktop from tomorrow to Google AI Pro and AI Ultra subscribers in the US.

What does that mean? You’ll apparently now be able to ask Gemini to clarify any complex information that you’re researching, or get it to summarize web pages. If that doesn’t sound too exciting, Google also promised that Gemini will eventually work across multiple tabs and also navigate websites “on your behalf”.

That gives us slight HAL vibes (“I’m sorry, Dave, I’m afraid I can’t do that”), but for now it seems Chrome will remain dumb enough for us to be considered worthy of operating it.

13. …And so did Gemini Canvas

As part of Gemini 2.5, Canvas – the so-called ‘creative space’ inside the Gemini app – has been given a boost via the upgraded AI models in this new version of Gemini.

This means Canvas is more capable and intuitive, with the tool able to take data and prompts and turn them into infographics, games, quizzes, web pages and more within minutes.

But the real kicker here is that Canvas can now take complex ideas and turn them into working code at speed and without the user needing to know specific coding languages; all they need to do is describe what they want in the text prompt.

Such capabilities open up the world of ‘vibe coding’, where one can create software without needing to know any programming languages – and it also makes it possible to prototype new app ideas at speed, purely through prompts.

You might also like
Categories: Technology

New Nintendo Switch 2 Cyberpunk 2077 gameplay has been revealed – say what you like, but I'm still not impressed

Tue, 05/20/2025 - 14:44
  • CD Projekt Red's Cyberpunk 2077 has been showcased on the Nintendo Switch 2
  • Comparisons between it and other consoles like the Steam Deck and PS5 have been made
  • DLSS appears to be at the forefront of the handheld's performance and visual capabilities

The launch of the Nintendo Switch 2 is almost here, and it'll come with a handful of titles ready for gamers to dive into from day one. Luckily, we now have an early look at one in particular, from game developer CD Projekt Red.

In a YouTube video by Nintendo Life, CD Projekt Red's Cyberpunk 2077 is revealed running on the Nintendo Switch 2, with visual quality that rivals other PC handhelds like the Steam Deck. This is thanks to Nvidia's custom T239 chip, which allows the new handheld to take advantage of DLSS upscaling, for better-than-native performance while upscaling from a lower internal resolution.

Considering earlier expectations that were based on the hardware rumors (which turned out to be legitimate), Cyberpunk 2077 has impressed many gamers with its lighting and environment details. However, it's worth noting that Nvidia's DLSS upscaling is rumored to be used quite aggressively, which is clear to see in some of the blurry sequences in the gameplay showcase.

This is to be expected, as the Switch 2 is already punching above its weight in running a game like Cyberpunk 2077. But there are still very evident performance dips, particularly during vehicle traversal, which highlights the potential issue – if DLSS is indeed used aggressively and performance is still not up to par, with dips into what looks like the upper end of 20fps, then is it really impressive after all?

PlayStation 5 on the left, Nintendo Switch 2 on the right... (Image credit: Nintendo Life)

It's worth noting that it isn't exactly clear which segments of the gameplay are docked or handheld (it's likely the former, considering the 4K video quality), and there will be a choice between quality and performance modes.

This is also still a work in progress and will likely be drastically different from the launch version, but it will be interesting to see how this fares against the MSI Claw 8 AI+ – which delivers great visuals and performance playing Cyberpunk 2077 – along with other upcoming handhelds like the recently-announced MSI Claw A8 using AMD's Ryzen Z2 Extreme.

Analysis: It's better than I expected, but doesn't warrant the Switch 2's cost

Now, before you say I have a Nintendo Switch 2 agenda, I do think games like Cyberpunk 2077 have the potential to further exceed their performance and visual expectations on the device. Despite that, the handheld's $449.99 / £395.99 / AU$699.95 price has me asking a basic question – wouldn't it be better to buy a PS5 or Xbox Series X at around the same price, for a better experience?

I could go into the handheld PC comparisons, and the Claw 8 AI+'s processing power, but I'd hate to sound like a broken record. Spoiler alert: it's purported to be the better and more powerful device.

However, the simple fact here is that the Switch 2's Cyberpunk 2077 isn't in the same ballpark visually and performance-wise as either of Sony's or Microsoft's consoles. In that sense, the Switch 2's value as a gaming console rival is lost if it costs nearly the same and yet provides a worse experience.

Before you point out that the MSI Claw 8 AI+ costs more than the PS5 and Xbox Series X, it's not in the category of game console (it also doesn't come with a dock for extra performance), and its price compared to the Switch 2 is still warranted considering the power packed in such a compact device. If the Switch 2's price were much lower, I'd be far more impressed with Cyberpunk 2077's performance, but tariffs or not, that's not the case.

DLSS seems to be the one factor that will do the heavy lifting with the Switch 2, and I'd argue it's the one reason why its version of Cyberpunk 2077 can be compared to other handhelds using either XeSS or FSR (neither of which are on the level of Nvidia's DLSS). Even then, without tools like Frame Generation, it still leaves me unimpressed with the Switch 2, but I'll happily eat my words if I'm proven wrong with its capabilities.

You might also like...
Categories: Technology

Netflix's next big true crime drama is channeling Gone Girl in its first trailer

Tue, 05/20/2025 - 13:47
  • Netflix has released a trailer for the new crime movie, A Widow's Game
  • The plot is based on a case called "the black widow of Patraix"
  • It's released on the streamer on May 30

A Widow's Game is a gripping new Netflix movie which is giving serious Gone Girl vibes, and I'm so excited to watch when it arrives on one of the best streaming services.

Arriving on May 30, it definitely has the potential to be one of the best Netflix movies, as it's inspired by a very interesting case I hadn't heard of before known as "the black widow of Patraix."

The Spanish-language movie is directed by Carlos Sedes and written by the team that brought us the Netflix drama series The Asunta Case. That series follows a couple who reported their daughter missing, which unravels the truth about a seemingly picture-perfect family.

Check out the new trailer below.

What is the plot of A Widow's Game?

(Image credit: Netflix)

Set in 2017, the story begins when the body of a man is found in a parking lot. He's been stabbed seven times, and the authorities believe all signs point to a crime of passion. With a veteran inspector heading up the case, she's soon led to a suspect no one expected: Maje, the young widow who had been married to the victim for less than a year.

The cast is led by Pan's Labyrinth star Ivana Baquero, who plays Maje, and Criminal's Carmen Machi, who is Eva, the case inspector. The cast also includes Tristán Ulloa, Joel Sánchez, Álex Gadea, Pablo Molinero, Pepe Ocio, Ramón Ródenas, Amparo Fernández and Miquel Mars.

I love a good crime drama and I'm very excited to see this one unfold and how the titular widow is brought to justice. If she is, of course!

You might also like
Categories: Technology

These three stalkerware apps have just gone dark, and a data breach could be to blame

Tue, 05/20/2025 - 13:00
  • Three spouseware apps have disappeared, journalists have found
  • All three were leaking sensitive data
  • It's not uncommon for spouseware apps to disappear and rebrand after a security mishap

Three spouseware apps – Cocospy, Spyic, and Spyzie – have gone dark. The apps, which are all basically clones of one another, are no longer working. Their websites are gone, and their cloud storage, hosted on Amazon, has been deleted.

The news was broken by TechCrunch earlier this week, which said that the reason behind the disappearance is not entirely clear, but it could be linked to data breaches that happened earlier this year.

“Consumer-grade phone surveillance operations are known to shut down (or rebrand entirely) following a hack or data breach, typically in an effort to escape legal and reputational fallout,” the publication wrote.

60% off for Techradar readers

With Aura's parental control software, you can filter, block, and monitor websites and apps, and set screen time limits. Parents will also receive breach alerts, Dark Web monitoring, VPN protection, and antivirus.


The grey zone

“LetMeSpy, a spyware developed out of Poland, confirmed its “permanent shutdown” in August 2023 after a data breach wiped out the developer’s servers. U.S.-based spyware maker pcTattletale went out of business and shut down in May 2024 following a hack and website defacement.”

Spouseware, or spyware, is a type of application that operates in a grey zone. It is advertised as legitimate software for keeping track of minors, people with special needs, and the like. However, most of the time it is just a cover for illegal activities, such as spying on other members of the household, love interests, and so on.

Given its nature, the development team and key people are usually hidden, which makes it difficult for members of the media to get a comment or a statement.

In late February this year, two of the apps – Cocospy and Spyic – were found exposing sensitive user data: email addresses, text messages, call logs, photographs, and other sensitive information. Furthermore, researchers were able to exfiltrate 1.81 million email addresses used to register with Cocospy, and roughly 880,000 addresses used for Spyic. Besides email addresses, the researchers managed to access most of the data harvested by the apps, including pictures, messages, and call logs.

Just a week later, similar news broke for Spyzie. The app was found leaking email addresses, text messages, call logs, photographs, and other sensitive data belonging to millions of people who, without their knowledge or consent, have had these apps installed on their devices. The people who installed those apps – in most cases partners, parents, or significant others – have also had their email addresses exposed in the same manner.

Via TechCrunch

You might also like
Categories: Technology

Google Search just got its biggest-ever upgrade to lure you away from ChatGPT – here are 7 new things to try

Tue, 05/20/2025 - 12:47

Google Search is under pressure – not only are many of us replacing it with the likes of ChatGPT Search, Google's attempts to stave off the competition with features like AI Overviews have also backfired due to some worrying inaccuracies.

That's why Google has just given Search its biggest overhaul in over 25 years at Google I/O 2025. The era of the 'ten blue links' is coming to a close, with Google now giving its AI Mode (previously stashed away in its Labs experiments) a wider rollout in the US.

AI Mode was far from the only Search news at this year's I/O – so if you've been wondering what the next 25 years of 'Googling' will look like, here are all of the new Search features Google's just announced.

A word of warning: beyond AI Mode, many of the features will only be available to Labs testers in the US – so if you want to be among the first to try them "in the coming weeks", turn on the AI Mode experiment in Labs.

1. AI Mode in Search is rolling out to everyone in the US

(Image credit: Google)

Yes, Google has just taken the stabilizers off its AI Mode for Search – which was previously only available in Labs to early testers – and rolled it out to everyone in the US. There's no word yet on when it's coming to other regions.

Google says that "over the coming weeks" (which sounds worryingly vague) you'll see AI Mode appear as a new tab in Google Search on the web (and in the search bar in the Google app).

We've already tried out AI Mode and concluded that "it might be the end of Search as we know it", and Google says it's been refining it since then – the new version is apparently powered by a custom version of Gemini 2.5.

2. Google also has a new 'Deep Search' AI Mode

(Image credit: Google)

A lot of AI chatbots – including ChatGPT and Perplexity – now offer a Deep Research mode for longer research projects that require a bit more than a quick Google. Well, now Google has its own equivalent for Search called, yes, 'Deep Search'.

Available in Labs "in the coming months" (always the vaguest of release windows), Deep Search is a feature within AI Mode that's based on the same "query fan-out" technique as that broader mode, but according to Google takes it to the "next level".

In reality, that should mean an "expert-level, fully-cited report" (Google says) in only a few minutes, which sounds like a big time-saver – as long as the accuracy is a bit better than Google's AI Overviews.

3. Search Live lets you quiz Google with your camera

(Image credit: Google)

Google already lets you ask questions about the world with Google Lens, and demoed its Project Astra universal assistant at Google I/O 2024. Well, now it's folding Astra into Google Search so you can ask questions in real-time using your smartphone's camera.

'Search Live' is another Labs feature and will be marked by a 'Live' icon in Google's AI Mode or in Google Lens. Tap it and you'll be able to point your camera and have a back-and-forth chat with Google about what's in front of you, while getting links sent to you with more info.

The idea sounds good in theory, but we're still yet to try it out beyond its prototype incarnation last year and the multimodal AI project is cloud-based, so your mileage may vary depending on where you're using it. But we're excited to see how far it's come in the last year or so with this new Labs version in Search.

4. AI Overviews are going global

(Image credit: Future)

We're not exactly wild about AI Overviews, which are the little AI-generated paragraphs you often see at the top of your search results. They're sometimes inaccurate and have resulted in some infamous clangers, like recommending that people add glue to their pizzas. But Google is ploughing ahead and has announced that AI Overviews are getting a wider rollout.

The new expansion means the feature will be available in more than 200 countries and territories and more than 40 languages worldwide. In other words, this is the new normal for Google Search, so we'd better get used to it.

Google's Liz Reid (VP, Head of Search) acknowledged in a press briefing before Google I/O 2025 that AI Overviews have been a learning experience, but claims they've improved since those early incidents.

"Many of you may have seen that a set of issues came up last year, although they were very much education and quite rare, we also still took them very, very seriously and made a lot of improvements since then", she said.

5. Google Search will soon be your ticket-buying agent

(Image credit: Google)

Finding and buying tickets is still something of a painful experience in Google Search. Fortunately, Google is promising a new mode that's powered by Project Mariner, an AI agent that can surf the web just like a human and complete tasks.

Rather than a separate feature, this will apparently live within AI Mode and kick in when you ask questions like "Find two affordable tickets for this Saturday's Reds game in the lower level".

This will see it scurry off and analyze hundreds of ticket options with real-time pricing. It can also fill in forms, leaving you with the simple task of hitting the 'purchase' button (in theory, at least).

The only downside is that this is another of Google's Lab projects that will launch "in the coming months", so who knows when we'll actually see it in action.

6. Google Shopping is getting an AI makeover

(Image credit: Google)

Google gave its Shopping tab within Google Search a big refresh back in October 2024, and now many of those features are getting another boost thanks to some new integration with AI Mode.

The 'virtual try-on' feature (which now lets you upload a photo of yourself to see how new clothing might look on you, rather than on models) is back again, but the biggest new addition is an AI-powered checkout feature that tracks prices for you, then buys things on your behalf using Google Pay when the price is right (with your confirmation, of course).

We're not sure this is going to help cure our gear-acquisition syndrome, but it does also have some time-saving (and savings-wrecking) potential.

7. Google Search is getting even more personalized (if you want it to)

Like traditional Search, Google's new AI Mode will offer suggestions based on your previous searches, but you can also make it a lot more personalized. Google says you'll be able to connect it to some of its other services, most notably Gmail, to help it answer your queries with a more tailored, personal touch.

One example Google gave was asking AI Mode for "things to do in Nashville this weekend with friends". If you've plugged it into other Google services, it could use your previous restaurant bookings and searches to lean the results towards restaurants with outdoor seating.

There are obvious issues here – for many, this may be a privacy invasion too far, so they'll likely not opt into connecting it to other services. Also, these 'personal context' powers sound like they have the 'echo chamber' problem of assuming you always want to repeat your previous preferences.

Still, it could be another handy evolution of Search for some, and Google says you can always manage your personalization settings at any time.

You might also like
Categories: Technology

Want to be the next Spielberg? Google’s AI-powered Flow could bring your movie ideas to life

Tue, 05/20/2025 - 12:46
  • Google Flow is a new tool for filmmakers to tap into the power of generative AI
  • Flow uses multiple Google AI models to create cinematic scenes and characters from text prompts
  • This could open up more creative movie-making for people without Hollywood budgets

Google clearly wants to inject artificial intelligence into more creative tools, as evidenced by the introduction of Flow at today’s Google I/O 2025.

Flow is the search giant’s new ‘AI filmmaking tool’ that uses Google’s AI models, such as Veo, Imagen, and Gemini, to help creative types explore storytelling ideas in movies and videos without needing to go out and film clips and cinematic scenes, or sketch out a lot of storyboards by hand.

Effectively an extension of the experimental Google Labs VideoFX tool launched last year, Flow lets users add in text prompts in natural, everyday language to create scenes, such as "astronauts walk out of the museum on a bridge,” and the AI tech behind Flow will create such a scene.

Flow lets filmmakers bring their own assets into it, from which characters and other images can be created. Once a subject or scene is created, it can be integrated into clips and scenes in a fashion that’s consistent with the video or film as a whole.

There are other controls beyond the creation of assets and scenes: Flow offers direct manipulation of camera angles, perspectives and motion; easy editing of scenes to home in on details or widen a shot to include more action (this appears to work as easily as a cropping tool); and the ability to manage all the ‘ingredients’ and prompts for Flow.

Flow will be available for subscribers of Google AI Pro and Google AI Ultra plans in the US, with more countries slated to get access to the filmmaking AI soon.

AI-made movies?

Google Flow in action (Image credit: Google Flow)

From seeing videos of Flow in action, it appears to be a powerful tool that brings an idea into visual form with surprising realism. Because it's powered by natural language prompts, budding filmmakers can create shots and scenes that in the past would have required dedicated sets, or at least some deft CGI work.

In effect, Flow could be one of those AI tools that opens up the world of cinema to a wider range of creatives, or at least gives amateurs more powerful creative tools to bring their ideas to life.

However, this does raise the question of whether Flow will be used to create ideas for storytelling that are then brought to life on the silver screen via physical sets, actors, and dedicated cinema CGI – or whether Flow will be used to create whole movies with AI, effectively letting directors be the sole producers of films and bypassing the need for actors, camera operators, and the wealth of crew that are integral to traditional movie making.

As such, AI-powered tools like Flow could breathe new life into the world of cinema that one might argue has got a little stale, at least on the big production commercial side, and at the same time disrupt the roles and work required in the movie-making industry.

You might also like
Categories: Technology

Gemini Live is now free for everyone on Android and iOS, and you can finally share your screen and camera on iPhone - here's how to try it

Tue, 05/20/2025 - 12:45
  • Google's Gemini Live is now free for all Android and iOS users
  • iOS users can now share their screen and camera with Gemini Live (previously only available on Android)
  • Expect more integration with Google apps in the coming weeks

Google just announced that its AI voice assistant, Gemini Live, is now available for free on iOS and Android.

Gemini Live has been available to paid subscribers for a while now, but you can now chat with AI, use your smartphone's camera to show it things, and even screen share without spending any money.

The major announcement happened at Google I/O, the company's flagship software event. This year, Google I/O has focused heavily on Gemini and the announcement of AI Mode rolling out to all US Google Search users.


Gemini Live is one of the best AI tools on the market, competing with ChatGPT Advanced Voice Mode. Where Gemini Live thrives is in its ability to interact with what you see on screen and in real life.

Before today, you needed an Android device to access Live's camera, but now that has all changed, and iPhone users can experience the best that Gemini has to offer.

Google says the rollout will begin today, with all iOS users being able to access Gemini Live and screen sharing over the following weeks.

More Gemini Live integration in your daily life

Free access and iOS rollout weren't the only Gemini Live features announced at Google I/O. In fact, new functionality for the voice assistant could be a headline new addition.

Over the coming weeks, Google says Gemini Live will "integrate more deeply into your daily life." Whether that's by adding events to your Google Calendar, accessing Google Maps, or interacting with more of the Google ecosystem, Gemini Live is going to become an essential part of how AI interacts with your device.

While Google didn't say if this functionality will be available on iOS, it's safe to assume that, for now, increased system integration will be limited to Android.

Gemini Live's free rollout, along with its upgrades, is one of the best announcements of Google I/O – if not the best – and I can't wait to see how it improves over the next few months.

How to use Gemini Live

(Image credit: Google)

Accessing Gemini Live is simple: you just need the Gemini app on iOS or Android.

  • Open the Gemini app
  • Tap the Gemini Live icon (found at the far right of the text input box)
  • Start chatting with Gemini Live
You might also like
Categories: Technology

Google's Veo 3 marks the end of AI video's 'silent era'

Tue, 05/20/2025 - 12:45
  • Google's video generation model got a major upgrade
  • Announced at Google I/O, Veo 3 can combine audio and video in its output
  • It's an Ultra and US-only feature for now

AI video generation tools such as Sora and Pika can create alarmingly realistic bits of video, and with enough effort, you can tie those clips together to create a short film. One thing they can't do, though, is simultaneously generate audio. Google's new Veo 3 model can, and that could be a game changer.

Announced on Tuesday at Google I/O 2025, Veo 3 is the third generation of the powerful Gemini video generation model. With the right prompt, it can produce videos that include sound effects, background noises, and, yes, dialogue.

Google briefly demonstrated this capability for the video model. The clip was a CGI-grade animation of some animals talking in a forest. The sound and video were in perfect sync.

If the demo can be converted into real-world use, this represents a remarkable tipping point in the AI content generation space.

"We’re emerging from the silent era of video generation," said Google DeepMind CEO Demis Hassabis in a press call.

Lights, camera, audio

He isn't wrong. Thus far, no other AI video generation model can deliver synchronized audio – or audio of any kind – alongside its video output.

It's still not clear whether Veo 3 – which, like its predecessor Veo 2, should be able to output 4K video – surpasses current video generation leader OpenAI's Sora in the video quality department. Google has, in the past, claimed that Veo 2 is adept at producing realistic and consistent movement.

Regardless, outputting what appears to be fully produced video clips (video and audio) may instantly make Veo a more attractive platform.

It's not just that Veo 3 can handle dialogue. In the world of film and TV, background noises and sound effects are often the work of Foley artists. Now, imagine if all you need to do is describe to Veo the sounds you want behind and attached to the action, and it outputs it all, including the video and dialogue. This is work that takes animators weeks or months to do.

In a release on the new model, Google suggests you tell the AI "a short story in your prompt, and the model gives you back a clip that brings it to life."

If Veo 3 can follow prompts and output minutes or, ultimately, hours of consistent video and audio, it won't be long before we're viewing the first animated feature generated entirely through Veo.

Veo 3 is live today and available in the US as part of the new Ultra tier ($249.99 a month) in the Gemini app, and also as part of the new Flow tool.

Google also announced a few updates to its Veo 2 video generation model, including the ability to generate video based on reference objects you provide, camera controls, outpainting to convert from portrait to landscape, and object add and erase.

You might also like
Categories: Technology

Google finally gave us a closer look at Android XR – here are 4 new things we've learned

Tue, 05/20/2025 - 12:45
  • Android XR has been showcased by Google at I/O 2025
  • It relies heavily on AI to deliver many of its features like live translation
  • Several different glasses brands will deliver Android XR features

At Google I/O 2025, Google finally gave us what we’ve all been waiting for (well, what I’ve been waiting for): a proper Android XR showcase.

The new Google operating system made for Android headsets and Android glasses has been teased as the next big rival to Meta’s Horizon OS – the software that powers the Meta Quest 3 and Quest 3S – and we finally have a better picture of how it stacks up.

Admittedly, the showcase was a little short, but it did reveal several new details about Android XR – here are four you need to know.

1. Android XR has Gemini at its core

(Image credit: Future)

While I’d argue Google’s Android XR showcase wasn’t as in-depth as I wanted, it did show us what the operating system has running at its core: Google Gemini.

Google’s advanced AI is the OS’ defining feature (at least, that’s how Google is positioning it).

On-glasses Gemini can recommend a place to eat ramen and then offer on-screen directions to find it; it can perform live translation; and on a headset it can use Google Maps' immersive view to virtually transport you to any destination you request.

Particularly on the glasses, this completely hands-free approach – combined with cameras and a head-up display – looks to be Google Gemini in its most useful form. You can get the assistant’s help as quickly as you can ask for it, no fumbling to get your phone out required.

I want to see more but this certainly looks like a solid upgrade on the similar Meta AI feature the Ray-Ban Meta smart glasses offer.

2. Android XR is for more than Samsung

(Image credit: Xreal)

Ahead of Google I/O we knew Samsung was going to be a key Android XR partner – alongside Qualcomm, which is providing all the necessary Snapdragon chipsets to power the Android XR hardware.

But we now know several other companies are collaborating with Google.

Xreal has showcased Project Aura, which will bring Android XR to an upgraded version of the tethered glasses we’re familiar with (like the Xreal One), complete with a camera and a Snapdragon processor.

Then Google also teased glasses from Gentle Monster and Warby Parker, implying it is taking Meta’s approach of partnering with fashion brands, rather than just traditional tech brands.

Plus, given that Gentle Monster and Warby Parker offer very different design aesthetics, this will be good news for people who want varied fashion choices for their new smart glasses accessories.

3. Project Moohan is still coming ‘later this year’

(Image credit: Google)

The Android XR headset Project Moohan is still set to launch in 2025, but Google and Samsung have yet to confirm a specific release date.

I was hoping we’d get something more concrete, but continued confirmation that Moohan will be landing in 2025 is better than it being delayed.

In fact, Google and its partners weren’t keen to give us any firm dates. Xreal calling Project Aura the second official Android XR glasses suggests it’ll land sometime after Moohan, but before anything else – however, we’ll have to wait and see how things play out.

4. Meta should be worried, but not terrified

(Image credit: Google)

Google certainly dealt XR’s biggest player – Meta, with its hugely popular Quest headset hardware – a few blows and gave its rival something to be worried about.

However, this showcase is far from a finisher, especially not in the headset department.

Meta’s Connect 2025 showcase in September is expected to show us similar glasses tech and features, and depending on release dates Meta might beat Android XR to the punch.

That said, competition is only going to be a good thing for us consumers, as these rivals battle over price and features to entice us to one side or the other. Unlike previous battles in the XR space this certainly seems like a balanced fight and I’m excited to see what happens next.

You might also like
Categories: Technology

Xreal is making Android XR glasses and this could be the biggest XR announcement since the Meta Quest 2 – here’s why

Tue, 05/20/2025 - 12:45
  • Xreal is working on Android XR glasses
  • They're codenamed Project Aura (the same name Google's rumored Glass 2 apparently had)
  • The glasses are a collab between Xreal, Google and Qualcomm

Google and Samsung's Android XR collab has been a major focus, but at Google I/O 2025 a new (yet familiar) partner emerged to showcase the second official Android XR device: Xreal with Project Aura.

Xreal and its Xreal One glasses currently top our list for the best smart glasses thanks to their impressive audio and visual quality.

However, while they include AR elements – they make your connected device (a phone, laptop, or console, among other options) float in front of you like you’re in a private movie theatre, which is fantastic by the way – they aren’t yet as versatile as other smart glasses propositions we’re being promised by Google, Meta, Snap and others.

Xreal Project Aura – a pair of XR glasses officially referred to as an optical see-through (OST) XR device – should shift Xreal’s range towards that of its rivals thanks to its advanced Qualcomm chipset, Xreal’s visual system expertise, and Google’s Android XR software, a combination that should (hopefully) form a more fully realized spatial computing device than we’ve seen from Xreal before.

Samsung's aren't the only Android XR glasses (Image credit: Google)

As exciting as this announcement is – and I’ll explain why in a moment – we should keep our emotions in check until further details on Project Aura are revealed at the Augmented World Expo (AWE) in June, and in other announcements set to be made “later this year” (according to Xreal).

That's simply because, beyond its existence and its general design, we know very little about Aura.

We can see it has built-in cameras, we've been promised Qualcomm processors, and it appears to use the same dual-eye display technology as Xreal's other glasses. Plus, it'll be tethered rather than fully wireless, though it should still offer all of the Android XR abilities Google has showcased.

But important questions like its cost and release date haven’t yet been detailed.

I’m hoping it’ll offer us a more cost-effective entry point to this new era of XR glasses, but we’ll have to wait and see before we know for certain whether this is “a breakthrough moment for real-world XR”, as Chi Xu, co-founder and CEO of Xreal, promises.

Still, even before knowing its specs and other key factors I’m leaning towards agreeing with Xreal’s CEO.

I love my Xreal One glasses (Image credit: Future / Hamish Hector) Meta should be worried

So why is this Xreal Android XR reveal potentially so important in my eyes?

Because while Meta has promised its Horizon OS will appear on non-Meta headsets – from Asus, Lenovo, and Xbox – we’ve seen nothing of these other headsets in the year-plus since that announcement. That is, beyond a whisper on the wind (read: a small leak) about Asus’ Project Tarius.

Android XR, on the other hand, has seen Google not only confirm collaborations with other companies (Xreal and Samsung) before launch, but show those devices in action.

They aren’t just promises, they’re real.

A threat to the Meta Quest 3? (Image credit: Meta)

Now the key deciding factor will be if Android XR can prove itself as an operating system that rivals Horizon OS in terms of the breadth and quality of its XR apps. With Google, Samsung, Xreal, and more behind it, I’m feeling confident that it will.

If it lives up to my expectations, Android XR could seriously shake up Meta’s XR dominance thanks to the varied XR hardware options under its umbrella out of the gate – competition that should lead to better devices and prices for us consumers.

We’ll have to continue to watch how Android XR develops, but it looks like Google is off to a very strong start. For the first time in a while Meta might finally be on the back foot in the XR space, and the ball is in its court to respond.

You might also like
Categories: Technology

Google Beam could change your video calls forever with glasses-free 3D and near real-time translation

Tue, 05/20/2025 - 12:45
  • Project Starline launched in 2021 and is now Google Beam
  • It's rolling out this year, adding 3D to video calls
  • There's also a real-time translation component to the tech

You may have already seen Google's Project Starline tech, which reimagines video calls in full 3D. It was first teased over four years ago, and at Google I/O 2025 we got the news that it's rolling out in full with a new name: Google Beam.

Since its inception, the idea of Google Beam has been to make it feel like you're in the same room as someone when you're on a video call with them. Rather than using headsets or glasses, though, it relies on cameras, mics, and AI technology.

"The combination of our AI video model and our light field display creates a profound sense of dimensionality and depth," says Google. "This is what allows you to make eye contact, read subtle cues, and build understanding and trust as if you were face-to-face."

Beam participants need to sit in a custom-made booth, with a large, curved screen that's able to generate a partly three-dimensional rendering of the person they're speaking to. The first business customers will get the equipment from HP later this year.

Real-time translation

Google Beam in action (Image credit: Google)

There's another element of Google Beam that's been announced today, and that's real-time translation. As you might expect, this is driven by AI technology, and makes it easier to converse with someone else in a different language.

As per the demo that Google has shown off, the translation is just a second or two behind the speech, and it works in the same way that a translation might be added afterwards on top of someone speaking in a video recording.

It's another impressive part of the Google Beam experience, and offers another benefit for organizations with teams and clients all across the world. According to Google, it can preserve voice, tone, and expression, while changing the language the audio is spoken in.

This part of the experience won't only be available in Google Beam, though: it's rolling out inside Google Meet now for consumers, though you are going to need either the Google AI Pro or the Google AI Ultra plan to access it.

You might also like
Categories: Technology

Google CEO: AI is not a 'zero-sum moment' for search

Tue, 05/20/2025 - 12:45

Google became a verb for search long before AI chatbots arrived to answer their first prompt, but now those two trends are merging as Google solidified AI's position in search with the full rollout of AI Mode for all US Google Search users. Google made the announcement as part of Google I/O, which is underway in California.

Getting results from a generative model that often gives you everything you need on the Google results page is a fundamental shift away from the traditional web search paradigm.

For over 25 years now, we've traditionally searched on a term, phrase, or even a complete thought and found pages and pages of links. The first page holds the links that matter most, in that they most closely align with your query. It's no secret that companies, including the one I work at, fight tooth and nail to create content that lives on the first page of those results.

Things began to change in the realm of Google Search when Google introduced AI Overviews in 2024. As of this week, they're used by 1.5 billion monthly users, according to Google.

Where AI Overview was a light-touch approach to introducing generative AI to search, AI Mode goes deeper and further. The latest version of AI Mode, introduced at Google I/O 2025, adds more advanced reasoning and can handle even longer and more complex queries.

Suffice it to say, your Google Search experience may never be the same.

View from the top

Google CEO Sundar Pichai (Image credit: Bloomberg/Getty Images)

Google CEO Sundar Pichai, though, has a different view. In a conversation with reporters before Google I/O and in answer to a question about the rise of AI chatbots like Gemini and the role of search, Pichai said, "It's been a pretty exciting moment for search."

He said that AI Overviews, and even the limited AI Mode tests, have shown increased engagement, with people spending more time in search and inputting longer and longer queries.

No one asked if the rise of chatbots could mean the end of search as we know it, but perhaps Pichai inferred the subtext, adding, "It's very far from a zero-sum moment."

If anything, Pichai noted, "People, I think, are just getting more curious; people are using a lot of this a lot more."

While AI Overview is often accused of having some factual issues, Google is promising that AI Mode, which uses more powerful models than AI Overview, will be more accurate. "AI Mode uses more powerful models and uses more reasoning across – sort of doing more work – ...and so it reaches an even higher bar," said Google Search head Liz Reid.

As for where search is going, Pichai sees features like AI Mode "expanding use cases". He also thinks that agentic AI is "giving people glimpses of a proactive world."

I think, by this, Pichai means that AI-powered search will eventually learn your needs and do your bidding, even if your query or prompt doesn't fully describe your need.

What that means in practice is still up for debate, but for Google and Pichai, the advent of AI in search is all upside.

"I do think it's made the Web itself more exciting. People are engaging a lot more across the board, and so it's a very positive moment for us."

You might also like
Categories: Technology

Gemini’s AI images have been updated to Imagen 4 with a ‘huge step forward in quality’ – here’s what you need to know

Tue, 05/20/2025 - 12:45
  • Imagen 4 is now available to all Gemini users
  • 2K images with more detail and better typography
  • Google claims you can now use it to create greetings cards

Google's AI image generation just levelled up, with a new version of Imagen 4 bringing with it a bunch of big upgrades including a higher resolution and better text handling.

The upgrade was announced at Google I/O 2025 today, and should noticeably improve Gemini’s image capabilities, which were already rivalling those of ChatGPT.

Taking over from the previous version 3, Imagen 4 has "remarkable clarity in fine details like intricate fabrics, water droplets and animal fur, and excels in both photorealistic and abstract styles”, according to Google. You can see the new level of detail in the preview images above and below.

Imagen 4 is also the first version of Google’s AI image generator that can go up to 2K resolution, meaning you’ll be able to make larger images for presentations and pictures that will look even better when printed out.

The detail on the water droplets in this image generated by Imagen 4 is quite impressive. (Image credit: Google)

A real challenge for AI image generators in the past (apart from creating realistic fingers) has been representing text in a way that makes sense and is readable.

While Imagen 3 did make significant inroads into presenting typography in a better way, Imagen 4 promises to take text to the next level.

Google claims Imagen 4 will be “significantly better at spelling and typography, making it easier to create your own greeting cards, posters and even comics”.

Usage limits

When it comes to the usage limits on Imagen 4, we don’t expect the situation to be radically different from that of Imagen 3, but we will update this post if we hear anything different.

Currently, if you are using Imagen 3 through the Gemini chatbot, daily limits vary depending on whether you’re a free Gemini user or a Gemini Advanced subscriber.

Free users can expect around 10-20 image generations per day, depending on how heavily the service is being used. Gemini Advanced subscribers can expect higher limits of up to 100-150 daily image generations.

As with Imagen 3, there are content restrictions on Imagen 4, especially around generating images of real individuals. However, Imagen 4 has no problems generating images of generic people.

Available today across Google apps

Imagen 4 isn’t only available in Gemini, either; from today you’ll be able to use it across Whisk, Vertex AI, Slides, Vids, Docs and more in Workspace.

And there’s more to come, too. Google says that it will “soon” be launching a super-fast variant of Imagen 4 that’s up to 10x faster than Imagen 3 at generating images.

You may also like
Categories: Technology

Google Gemini 2.5 just got a new 'Deep Think' mode – and 6 other upgrades

Tue, 05/20/2025 - 12:45
  • Google Gemini 2.5 Pro is getting a new Deep Think model
  • Deep Think allows Gemini to consider multiple reasoning paths before responding
  • Deep Think will improve Gemini's accuracy on complex math and code

Google is adding some extra brainpower to Gemini with a new Deep Think Mode. The company unveiled the latest option for Google Gemini 2.5 Pro at Google I/O 2025, showing off just what its AI can do with extra depth.

Deep Think basically augments Gemini's AI 'mind' with additional brains. Gemini in Deep Think mode won't just spit out an answer to a query as fast as possible. Instead, it runs multiple possible lines of reasoning in parallel before deciding how to respond. It’s like the AI equivalent of looking both ways, or rereading the instructions before building a piece of furniture.

And if Google's tests are anything to go by, Deep Think's brainpower is working. It’s performing at a top-tier level on the 2025 U.S. math olympiad, coming out on top in the LiveCodeBench competitive programming test, and scoring an amazingly high 84% on the popular MMMU, a sort of decathlon of multimodal reasoning tasks.

Deep Think isn’t widely available just yet. Google is rolling it out to trusted testers only for now. But, presumably, once all the kinks are ironed out, everyone will have access to the deepest of Gemini's thoughts.

Gemini shines on

Deep Think fits right into the rest of Gemini 2.5’s growing lineup and the new features arriving for its various models in the API used by developers to embed Gemini in their software.

For instance, Gemini 2.5 Pro now supports native audio output. That means it can talk back to you. The speech has an “affective dialogue” feature, which detects emotional shifts in your tone and adjusts accordingly. If you sound stressed, Gemini might stop talking like a patient customer service agent and respond more like an empathetic and thoughtful friend (or at least how the AI interprets such a response). And it will be better at knowing when to talk at all thanks to the new Proactive Audio feature, which filters out background noise so Gemini only chimes in when it’s sure you’re talking to it.

Paired with new security safeguards and the upcoming Project Mariner computer-use features, Gemini 2.5 is trying very hard to be the AI you trust not just with your calendar or code, but with your book narration or entire operating system.

Another element expanding across Gemini 2.5 is what Google calls a 'thinking budget.' Previously unique to Gemini 2.5 Flash, the thinking budget lets developers decide just how deeply the model should think before responding. It's a good way to ensure you get a full answer without spending too much. Otherwise, Deep Think could give you just a taste of its reasoning, or give you the whole thing and make it too expensive for any follow-ups.

In case it's not clear what those thoughts involve, Gemini 2.5 Pro and Flash will offer 'thought summaries' for developers: a document detailing exactly how the model applied information through its reasoning process, so you can actually look inside the AI's brain.

All of this signals a pivot from models that just talk fast to ones that can reason more deeply, if more slowly. Deep Think is part of that shift toward deliberate, layered reasoning. It’s not just trying to predict the next word anymore; it's applying that logic to ideas and the very process of coming up with answers to your questions. Google seems keen to make Gemini not only able to fetch answers, but to understand the shape of the question itself.

Of course, AI reasoning still exists in a space where a perfectly logical answer might come with a random side of nonsense, no matter how impressive the benchmark scores. But you can start to see the shape of what’s coming, where the promise of an actual 'co-pilot' AI comes to fruition.

You might also like
Categories: Technology
