A Virginia District Judge has ruled Google violated antitrust laws by “willfully acquiring and maintaining monopoly power” in the advertising technology market, spelling potentially grave consequences for the tech giant.
The ruling follows a 2023 lawsuit by the Department of Justice, backed by eight separate US states, accusing the company of harming rivals, publishers and consumers online.
Google was specifically found liable for monopolizing the market by tying together two parts of its adtech stack – DoubleClick for Publishers (DFP) and Ad Exchange (AdX).
Judge rules that Google violated antitrust laws
Despite the findings, the judge did not find a monopoly in advertiser ad networks, representing a partial win for Google.
Although Google has been found liable, the judge did not determine any remedies. A separate court hearing will set out what Google must do to comply with antitrust laws and remedy its violations. Consequences could include breaking up Google’s ad business, such as selling off Google Ad Manager, as well as behavioral remedies like prohibiting Google from self-preferencing in ad auctions.
“Having found Google liable, the Court will set a briefing schedule and hearing date to determine the appropriate remedies for these antitrust violations,” the decision confirms.
Noting that Google had employed anticompetitive business practices for more than a decade, the judge said: “In addition to depriving rivals of the ability to compete, this exclusionary conduct substantially harmed Google’s publisher customers, the competitive process, and, ultimately, consumers of information on the open web.”
“We won half of this case and we will appeal the other half. The Court found that our advertiser tools and our acquisitions, such as DoubleClick, don’t harm competition," Google’s VP of Regulatory Affairs Lee-Anne Mulholland told TechRadar Pro.
"Publishers have many options and they choose Google because our ad tech tools are simple, affordable and effective."
Google is also in hot water over its search market dominance – nine in 10 (89.7%) internet searches tracked by Statcounter used Google. Bing, in second place, accounted for just 4%.
If the company is found guilty of violation there, it could also be forced to sell off its Chrome business, a browser that accounts for two in three (66.2%) browser sessions globally. That case is ongoing.
My eyesight sucks. A detached retina and the subsequent operations to fix it, plus the glaucoma that developed as a result, mean most of the vision in my left eye has gone. My right eye, on the other hand, is very short-sighted, meaning I can see bits of floating debris in the vitreous fluid behind the surface of my eye, which is distracting, and my current contact lens isn’t sitting correctly.
So while I can see, spotting fine details or seeing things clearly at a distance is a pain in the proverbial posterior. This harsh reality slammed into me particularly hard during a recent bachelor-party trip to Berlin. While steins of good German beer were consumed, we also did a lot of sightseeing – or at least my friends did, as I spent a good bit of time squinting.
That changed when I decided to lean on the iPhone 16 Pro Max and the Samsung Galaxy S25 Ultra, both of which I had on my person, with the latter as a backup but also because I’m a tech journalist and live the dual-ecosystem life.
Specifically, the 5x telephoto cameras on both flagship phones came in very handy, letting me zoom in on details on the Reichstag or the myriad of street art sprayed onto the walls and buildings of the city. But both phones really helped me and my poor eyes when visiting the Berlin Zoo.
Zooming at the zoo
Now I’m not a huge fan of zoos; I appreciate the preservation side of things, but I don’t like seeing animals in limited space. Berlin Zoo did at least seem to have plenty of space for its collection of creatures, which was promising but also a bit of a challenge for my bad eyes.
I found myself desperately squinting into smartly made enclosures to spot some of the smaller and more camouflaged animals, or get a proper look at the ones in large enclosures that were sitting as far away from visitors as possible; I don’t blame them.
Enter the telephoto cameras of the aforementioned flagship phones. These basically become my eyes when entering the areas where the animals were better camouflaged or elusive. And they let me capture shots that clipped past the crowds and let me get a nice framed image of a prowling leopard or bemused bear; see the photo gallery below.
A selection of photos of animals taken at Berlin Zoo on the iPhone 16 Pro Max and Samsung Galaxy S25 Ultra (Image credit: Future / Roland Moore-Colyer)
Advancements in the quality of camera sensors, alongside optical zoom range and improved image processing – plus the addition of sensor fusion, which lets a phone take shots with multiple cameras at once and stitch an image out of them – have seen telephoto cameras, at least on some of the best phones, go from mild novelties to useful additions.
I’ve long favored telephoto cameras over ultra-wide ones, which may make me a bit of an outlier. Maybe I just don’t have big groups of friends to capture in digital images. So the more recent push by flagship phones from bigger brands to go past 3x telephoto cameras and adopt 5x and above – think the past couple of generations of Galaxy, Pixel and Pro iPhones – has really caught my eye (pun partially intended).
And for helping me appreciate the range of animals at Berlin Zoo without enraging German animal handlers and administrators by leaping into lion enclosures, these telephoto cameras were basically essential.
Furthermore, the advancements in low-light photography have meant that when I entered a very dark section of the zoo where the nocturnal animals were kept, and where I basically couldn’t see, the night mode of the iPhone 16 Pro Max was a boon, letting me view various critters without activating a flash or anything obnoxiously disturbing.
Honestly, without such tech, I think I’d have stumbled from enclosure to enclosure without seeing a single critter.
(Image credit: Future / Roland Moore-Colyer)
Now I do need to see an optician to get a new contact lens that actually fits, and I’m not saying that looking at life through a smartphone is a panacea for my poor eyes.
Yet my trip to Berlin and its zoo hammered home quite how capable two of the best camera phones are. Sure, upgrades to phone cameras have been iterative lately. Nevertheless, each improvement leads to a better overall experience, and in my case, basically saved me from what could have been a rather miserable and frustrating time.
Should you upgrade to the iPhone 17 this year, or wait for the iPhone 18? A new leak suggests that the 2026 iPhone is going to come with a significant performance boost, but might also have a notably higher price tag.
This comes from seasoned tipster Digital Chat Station on Chinese social media site Weibo (via MacRumors). Apparently, the A20 chip destined for the iPhone 18 series will switch from a 3 nanometer to a 2 nanometer manufacturing process – essentially packing more transistors into the same space.
That should mean a major boost in performance and efficiency (which in turn improves battery life). iPhone chips get faster every year, of course, but when a nanometer (nm) jump is involved, the difference between generations should be even greater.
We've heard this rumor before, from well-placed sources, and we're even more likely to believe it now that it's been repeated again. Expect Apple to make a lot of noise about the performance of its iPhones when next year rolls around.
It'll cost you
The iPhone 16 launched in September 2024 (Image credit: Future)
The same tipster says (via Google Translate) that the cost of these chips is expected to "increase significantly", with "another round of price increases for new phones". Add in current tariff uncertainty, and the 2026 iPhone series could be the most expensive yet.
Other chip makers, including Qualcomm and MediaTek, are apparently moving to the same 2 nm process next year as well – so flagship smartphones might be more expensive across the board, not just when it comes to Apple's offerings.
Again, this is something that other tipsters have predicted. This isn't a completely new rumor, but it adds to the mounting evidence that the iPhone 18 handsets are going to be impressively powerful... and perhaps rather pricey too.
Expect more rumors like this for the rest of this year and into the next one. In the meantime, we're hearing that the iPhone 17 range could come with a substantial redesign, certain video recording improvements, and a brand new model.
To say that the last episode of Yellowjackets season 3 was a killer would be an understatement: it was a highly dramatic, often surprising and very violent end to not just the season, but to some of the key characters too.
Warning: serious spoilers ahead!
If you haven't already seen the entire third season of one of the best Paramount+ shows, please don't read on, because there were some important things in the season finale that I want to talk about, and in order to explain them I'm going to have to include some massive spoilers.
Trust me: Spoiling any of the surprises for you is definitely not what the wilderness wants.
Misty's smile was misdirection
(Image credit: Paramount Plus)
One of the most disturbing parts of Yellowjackets' very first episode was what happened immediately after the horrible death of Pit Girl, who of course we now know to be Mari.
There's a lingering moment in the pilot where, post-cannibalism, the camera focuses on Misty and she doesn't seem upset; she seems happy, with what you could describe as either a smirk or a smile.
It turns out that that was misdirection: we were set up to think that Misty was heartless or even evil, and in successive seasons that was reinforced by the animosity between her and Mari. But we now know that Shauna, not Misty, was the person who ensured that Mari would be Pit Girl.
Seeing Misty's smile in context at the end of season 3 showed us the real story: Misty was smiling because she knew her and Nat's plan – getting away to make that phone call on the repaired satellite phone – had worked.
Shauna's crown is hollow
(Image credit: Showtime; Paramount Plus)
In the very final moments we see Shauna become the Antler Queen. But that coronation is misdirection again, because it's already a hollow victory: the Yellowjackets are turning against Shauna in both timelines because of her shocking actions.
Of course we know she makes it back home in the 1990s timeline, because if she didn't then we wouldn't have the present-day Shauna to be horrified by.
But with Misty and even Tai now lined up against her, I'm really not expecting a happy ending to Shauna's story. Not least because...
Callie is a killer
(Image credit: Paramount Plus)
Among the many revelations of the finale, one of the biggest is the identity of Lottie's killer – Callie. When Misty works it all out, she isn't slow to tell Jeff and Shauna. Jeff realizes that Shauna is, to put it mildly, not the greatest role model Callie could have, and spirits her away.
But something strange happens in the finale once Shauna finds empty closets and no sign of a note: nothing.
After an entire season where Callie and Jeff's characters became really important, their story just stops dead. We're no wiser about where they are or what they're doing than Shauna is.
I think that's a third bit of misdirection. I reckon that we're going to see a lot more of Callie in season four – and that that's not going to be good news for Shauna.
Remember, this is a show all about teens murdering people, and Mari's final words to Shauna were "you deserve all the bad things that are going to happen to you"... are you thinking what I'm thinking?
Cybercriminals don’t discriminate. They go where the money, data, and opportunities are. And in today's hyper-connected world, engineering firms have become lucrative targets. The recent attacks on companies such as IMI and Smiths Group are a prime example of that. In engineering environments, cybersecurity can’t be just an add-on. Not when complex supply chains, high-value intellectual property, and critical infrastructure are at stake. Not when a single security breach can lead to catastrophic consequences.
Imagine an engineering firm spearheading smart infrastructure projects, embedding sensors into bridges to monitor structural integrity. A cyberattack could manipulate those readings, triggering unnecessary shutdowns, or worse, concealing a real threat. Now scale that risk across an entire industry reliant on smart manufacturing, Industrial IoT (IIoT) devices, and cloud-based systems. Every new digital advancement creates another entry point for hackers.
Yet, despite the dangers, cybersecurity in engineering is often reactive rather than proactive. Many firms treat security as a matter of patching vulnerabilities only after an attack has already taken place. So how does that mindset change?
From firefighting to prevention
Cybersecurity used to function like a fire department – teams would rush to put out flames after a breach. But today’s threat landscape demands something different, from continuous network monitoring and early detection to rapid response. This is where Security Information and Event Management (SIEM) comes into play.
SIEM operates like a high-tech security nerve center, constantly scanning logins, file access, and network traffic for anomalies. When it detects suspicious activity, such as an unauthorized attempt to access sensitive blueprints, it raises an alert before real damage occurs. And if an attack does happen, SIEM doesn’t just sound the alarm – it provides forensic insights, helping companies understand how the breach occurred, where it spread, and how to prevent it from happening again.
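To make that concrete, here is a minimal sketch of the kind of detection rule a SIEM pipeline might run under the hood: counting failed logins per source IP in an authentication log and raising an alert once a threshold is crossed. The log path, regex, and threshold below are illustrative assumptions, not any particular vendor's implementation.

```python
import re
from collections import Counter

# Illustrative values only; a real SIEM ingests many log sources and correlates them.
AUTH_LOG = "auth.log"  # hypothetical log file
FAILED_LOGIN = re.compile(r"Failed password for (?:invalid user )?\S+ from (\S+)")
ALERT_THRESHOLD = 5    # failures from one IP before alerting

def scan_for_bruteforce(log_path: str = AUTH_LOG) -> None:
    """Count failed logins per source IP and flag anything over the threshold."""
    failures = Counter()
    with open(log_path) as log:
        for line in log:
            match = FAILED_LOGIN.search(line)
            if match:
                failures[match.group(1)] += 1

    for ip, count in failures.items():
        if count >= ALERT_THRESHOLD:
            # A real deployment would route this to an analyst queue or automated playbook.
            print(f"ALERT: {count} failed logins from {ip}")

if __name__ == "__main__":
    scan_for_bruteforce()
```

A production SIEM correlates many such signals across sources in real time; the point here is only the shape of the monitor, detect, alert loop.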
For an industry where security failures can have life-or-death consequences, this kind of proactive defense is non-negotiable.
High-tech meets the human element
The good news is that the time it takes to detect and contain breaches is improving. Thanks to automation, in 2024, the average time dropped to 258 days, the shortest in seven years. But there’s still room for improvement, and AI-driven cybersecurity solutions are stepping up.
For instance, AI processes massive amounts of security data in real-time, identifying patterns in API calls, logins, and system behavior to flag anomalies faster than any human team could. Think of it as a digital watchdog that never sleeps. When combined with SIEM, AI can pinpoint suspicious behavior, like an industrial machine suddenly executing unauthorized commands, before an incident escalates.
And beyond just detection, AI-driven automation reduces breach costs. In fact, research from IBM found that companies leveraging AI in cybersecurity saved an average of $2.22 million per breach compared to those that didn’t.
But even the most advanced systems can’t compensate for basic cybersecurity hygiene. A striking 22% of last year’s breaches stemmed from avoidable human error – misconfigured settings, weak passwords, or falling for phishing emails. Yet, despite the risks, many companies remain critically understaffed in cybersecurity expertise. In fact, the World Economic Forum found that in 2024, only 14% of organizations felt confident in their ability to fend off cyberattacks.
A balanced approach is the only effective solution. While AI and automation enhance security, organizations still need skilled professionals to interpret threats, make critical decisions, and instill a culture of cyber awareness across their workforce.
Cost vs investment
Data breaches aren’t just technical issues; they can be financial disasters. In 2024, the average cost of a breach surged to $4.88 million, up from $4.45 million the previous year – a whopping 10% spike and the highest increase since the pandemic.
For engineering firms, the stakes are even higher. A single cyberattack on a company developing next-generation electric vehicles could leak years of research to competitors, wiping out its competitive edge overnight. A breach in a transportation infrastructure project could delay completion timelines, inflate costs, and erode public trust.
By embedding SIEM into their cybersecurity framework, engineering companies can ensure that every digital action – whether it’s accessing blueprints, placing procurement orders, or monitoring industrial processes – is continuously protected. The result? Reduced downtime, lower financial risk, and a reputation as a secure and forward-thinking industry leader.
This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro
Businesses are now getting to grips with AI and implementing it effectively, marking a shift from the experimentation phase, with as many as 93% of UK businesses now reporting efficiency gains from generative AI (and 88% globally), new research from Snowflake has claimed.
Moreover, a staggering 98% are also training, tuning or augmenting their LLMs for better outcomes, demonstrating that companies know exactly where the tech’s benefits are and how to optimize it.
However, the usual hurdles and challenges remain in place, preventing some organizations from accessing the promised productivity benefits.
Businesses in the UK are pretty au fait with AI
Snowflake found nearly two-thirds (62%) of businesses are using AI in software engineering, with 69% using it for code reviews and debugging – both higher percentages than the global average.
AI technology is also proving popular in customer support (61%) and cybersecurity (69%) use cases, where workers are seeing faster first response times (59%), reduced manual workload (64%) and lower costs (56%).
Separate EY reporting reveals seven in 10 UK respondents have used AI in their daily lives in the past six months, but its workplace figures conflict with Snowflake’s – only 44% have used it in a professional setting, lower than the global average of 67%.
Globally, EY says workers are using AI for writing or editing content (31%), learning about topics (30%) and generating new ideas (27%).
“They're not just experimenting – they're building with purpose,” Snowflake VP and UK&I Country Manager James Hall said about UK businesses.
“With smart investments in cloud infrastructure and a focus on actionable use cases, the UK is laying the groundwork to lead the next phase of gen AI transformation.”
The research also highlighted some of the challenges that businesses face when adopting AI at scale, with unstructured data presenting the biggest hurdle according to Snowflake.
EY added that privacy and security are also at the front of UK business leaders’ minds, with security breaches (71%), privacy violations (65%) and the reliability of AI outputs (67%) all cited as major concerns.
Looking ahead, EY UK&I AI Client Strategy Leader Catriona Campbell says that businesses must build worker confidence and demonstrate the value of AI.
“As AI continues to reshape our daily lives, it is crucial for business leaders to foster trust and transparency, empowering individuals to engage with AI on their own terms,” Campbell added.
We're more than ready for a successor to the Motorola Razr Plus 2024, and we now have a better idea of what the Moto Razr Plus 2025 will bring along with it thanks to an extensive leak of the flip foldable's specs.
The specs have been published by 91mobiles and well-known tipster @OnLeaks, and add to Motorola's official announcement that this phone – which will be known as the Motorola Razr 60 Ultra outside of the US – is going to be unveiled on Thursday, April 24.
It seems we're set for some considerable upgrades: a Snapdragon 8 Elite processor (up from the Snapdragon 8s Gen 3), 16GB of RAM (up from 12GB), and a 4,700 mAh battery (up from 4,000 mAh), with better wired and wireless charging speeds than before.
The main display is tipped to get a slight size bump from 6.9 inches to 7 inches, but the cover display is apparently staying the same size, at 4 inches. We'll get more storage inside, it sounds like: 512GB instead of 256GB.
Cameras and dimensions
The new model might be ever so slightly thicker than the current model, shown here (Image credit: Philip Berne / Future)
When it comes to cameras, the leak suggests the 50MP wide + 50MP 2x telephoto dual camera setup of the 2024 model will be replaced by a 50MP wide + 50MP ultrawide configuration – not as much zoom, but the option to fit more inside the frame.
If these details are accurate, the Moto Razr Plus 2025 will be a shade taller, thicker, and heavier than its predecessor, though not by much. Overall, it sounds like this is a respectable year-on-year upgrade, though as always the pricing will be crucial.
We've heard quite a few leaks and rumors in the build-up to the official launch later this month. Just a few days ago, benchmarks for the foldable phone appeared online, which also pointed to processor and memory upgrades.
It's likely that a standard Razr 2025 will show up at the same time as the Razr Plus 2025. We're also now looking forward to the launch of the Samsung Galaxy Z Flip 7 flip foldable, which should be making an appearance sometime in July.
The Last of Us season 2 has finally landed on TV screens across the globe – and if you're eager to watch its next episode, you'll need my help to find out when it'll make its debut.
Below, I'll tell you when The Last of Us TV show's latest chapter will be released in the US, UK, and Australia. You'll also learn which of the world's best streaming services it'll be available on. Oh, and I'll give you the details on when new episodes will air every single week.
Here, then, is when you can catch the follow-up to The Last of Us season 2 episode 1.
What time does The Last of Us season 2 episode 2 come out in the US?
Don't look so sad, Joel, episode 2 will be out soon! (Image credit: HBO/Liane Hentscher)
Episode 2 of the sophomore season of The Last of Us will be available to stream in the US on Sunday, April 20 at 6pm PT / 9pm ET. Just like its predecessors, the HBO exclusive's next installment is going to air on the aforementioned cable network and Warner Bros Discovery's super streamer Max.
When can I watch The Last of Us season 2's next episode in the UK?
Abby isn't happy that she has to wait a few more days for season 2's next episode (Image credit: HBO/Liane Hentscher)
The Pedro Pascal and Bella Ramsey-starring TV adaptation of Naughty Dog's video game series will return in the UK on Monday, April 21 at 2am BST.
As for where you can stream it, Sky Atlantic and Now TV are your friends on British shores.
When will The Last of Us season 2 episode 2 come out in Australia?
We'll be reunited with Tommy soon enough (Image credit: Liane Hentscher/HBO)
Episode 2 of one of the best Max shows will make its debut in Australia on Monday, April 21 at 11am AEST.
As I mentioned in my season 2 episode 1 release date and time article, Foxtel subscribers will be able to watch new episodes of The Last of Us on that platform, too.
The Last of Us season 2 full release schedule
More dangerous adventures await Ellie and Dina in season 2 (Image credit: Liane Hentscher/HBO)
Five more episodes of The Last of Us season 2 are set to launch on the aforementioned streamers before the dystopian drama departs once again. You can find out when episode 3 and its follow-ups will arrive by consulting the list below.
Artificial intelligence (AI) is reshaping workplaces at lightning speed—but nearly a third of employees don’t know how to use it effectively. Instead of unlocking AI’s potential, many companies are watching productivity stall as workers struggle to adapt. The problem isn’t the AI itself; it’s a failure to prepare employees for collaboration with AI rather than competition against it.
So, how can companies turn this around?
The AI Knowledge Gap: A Threat to Workplace Innovation
Despite the widespread adoption of AI tools, many employees feel left behind. The Corndel 2025 Workplace Training report revealed that:
Employees aren’t just unsure about AI—they feel left behind. Without structured L&D strategies that encourage AI collaboration, organizations risk falling behind as competitors fully integrate AI-driven efficiencies.
Shifting L&D to Enable AI Collaboration
Traditional workplace training focuses on developing human-only skills. However, in an AI-powered workplace, employees must learn how to work alongside AI—not against it.
Here’s how L&D departments can adapt:
1. AI Literacy for All Employees
Organizations must introduce foundational AI training to demystify the technology and show employees how to incorporate it into daily tasks. This includes:
2. Role-Specific AI Training
Not all employees need the same AI training. L&D teams should tailor programs to specific job functions:
3. Ethical and Responsible AI Training
Ethical AI use must be a core component of workplace training. This includes:
4. Hands-On AI Learning and Experimentation
Many employees are hesitant to use AI simply because they have never tried it. L&D teams should:
5. Cross-Functional AI Collaboration
AI training should not be siloed within specific departments. Instead, organizations should foster cross-functional AI collaboration by:
Looking ahead, AI will continue to revolutionise L&D through:
Businesses that invest in AI training today won’t just survive the AI revolution—they’ll lead it. The future belongs to companies that embrace AI as an extension of human capability, not a competitor to it.
The question isn’t whether AI will change the workplace, but whether organizations will equip their people to change with it.
L&D is no longer just about keeping up—it’s about leading the way. Businesses that reimagine their training strategies today will be the ones defining the AI-driven workplace of tomorrow.
This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro
If you’re a Star Wars fan and haven’t yet been jealous of everyone at Star Wars Celebration 2025 in Japan, prepare to be. The same applies if you have an Apple Vision Pro instead of a Meta Quest.
Why? Well, Industrial Light & Magic and Lucasfilm are finally sharing more on their next Star Wars mixed and virtual reality experience that’s set to arrive on the Meta Quest 3 and Meta Quest 3S headsets at some point in the future, and boy oh boy does it look stunning.
Star Wars: Beyond Victory - A Mixed Reality Playset is set during the events of Solo: A Star Wars Story and has three modes of play: Adventure, Arcade, and Playset. You can see the full trailer below, along with some select screenshots. It's a fully immersive experience that can place you in the Star Wars universe or overlay elements in your own space.
Adventure is more of a classic, immersive experience, similar to other titles like Star Wars: Tales from the Galaxy’s Edge – a personal favorite I’ve played on the PSVR, as I’m a fan of the Disney Parks – and Vader Immortal: A Star Wars VR Series. Here you’ll follow the story of an aspiring podracer, Volo, who goes on a journey courtesy of a mentorship with Sebulba.
(Image credit: Lucasfilm)
This one might be the neatest, though – Arcade places a holotable in your space through mixed or augmented reality, and you can seemingly get up close and personal with Star Wars action, including a podracing course.
And if you’re at Star Wars Celebration 2025 in Japan, you can play a demo that combines the Adventure and Arcade modes at Booth #20-5 in Hall 4 of the Makuhari Messe convention center. Instant jealousy from me!
(Image credit: Lucasfilm)
Alyssa Finley, the executive producer of the title, shared, “We're calling this a Playset because it isn't just a game; it's an entirely new way to experience the Star Wars galaxy and the worlds we create at ILM.”
She continued, “This new mixed reality experience blends the physical and digital worlds in a way that's unlike anything we've done before,” which certainly ups the excitement and hype for the title. It sounds similar to another project Industrial Light & Magic worked on for the Apple Vision Pro – What If…? – An Immersive Story – which at times fully placed you elsewhere and at others overlaid battles in your own space.
(Image credit: Lucasfilm)
Adding to this is the Playset mode, which promises to let you have your own Star Wars moments in mixed reality, allowing you to view and interact with vehicles from the universe and action figures.
While Star Wars: Beyond Victory - A Mixed Reality Playset is still in development, it’s undoubtedly one of the most ambitious titles from Industrial Light & Magic and Lucasfilm yet. Whenever it’s ready for prime time, it will launch for the Meta Quest 3 and 3S, so we’ll be waiting for further news on a release date.
If you have a Vision Pro, maybe we can petition Apple, ILM, and Lucasfilm to also bring it to the $3,500 spatial computer. And if you're at home, check out all the new Star Wars sets that Lego announced here.
In a surprise twist and a reversal of its earlier paywalled plans, Google has announced that Gemini Live’s screen and camera sharing features are now rolling out for free to all Android users. No subscription or Pixel ownership necessary, just Gemini Live, accessible to anyone with the Gemini app on Android.
This update means your AI assistant can now see what’s on your screen or through your camera lens and react to it in real time. Gemini Live with screen sharing lets you show Gemini a webpage, a spreadsheet, or a tangled mess of app settings and ask for help. Or you can point your camera at a real-world object, like a product label, a chessboard, or a confusing IKEA manual, and let Gemini identify and explain what you're looking at.
The feature first debuted earlier this month, but only for Gemini Advanced subscribers and only for certain phones, such as the Pixel 9 and Samsung Galaxy S25. At the time, Google said the visual capabilities would eventually expand, but even then, only to other subscribers. Google apparently had a change of heart, or at least it claims to have decided to open up access because of how much people seem to like the feature. Now, it’s rolling out to all Android users over the next few weeks.
“We’ve been hearing great feedback on Gemini Live with camera and screen share, so we decided to bring it to more people ✨ Starting today and over the coming weeks, we're rolling it out to *all* @Android users with the Gemini app. Enjoy! PS If you don’t have the app yet,…” https://t.co/dTsxLZLxNI (April 16, 2025)
AI eyes
The idea for the feature is to make Gemini more flexible as an assistant. Instead of just answering questions you type or speak, it’s interpreting the world around you visually. The move also coincides with Microsoft announcing that Copilot Vision, its own version of AI eyes, is now available for free in the Edge browser. That might be a coincidence, though probably only in the way that running into your crush outside their class in high school is a coincidence.
But while Microsoft’s Copilot lives in the browser, Gemini’s advantage is its integration straight into the Android ecosystem. No need to fire up Edge or download a separate tool. Gemini Live is baked into the same system that already runs your device.
The new ability fits with many of the other additions and upgrades Gemini has received in recent months. The AI assistant now comes with real-time voice chat, a new overlay so you can summon Gemini on top of other apps, and the long-form report-writing tool Deep Research.
Once the new feature is live, you’ll see the option to “share screen” or “use camera” in certain Gemini prompts on Android devices. And because Google is giving this away for free, it sets a new bar. If Gemini can watch your screen and your camera without charging you for the privilege, what happens to the idea of “premium” AI access? The developers are probably hotly debating what AI features are worth paying for and how much to charge, when, at least for now, all of these tools become free relatively quickly.
Meta has resumed its plan to train its AI models with EU users' data, the company announced on Monday, April 14, 2025.
All public posts and comments shared by adults across Meta's social platforms will soon be used to train Meta AI, alongside all interactions users directly exchange with the chatbot.
This comes as the Big Tech giant successfully launched Meta AI in the EU in March, almost a year after the firm paused the launch amid growing concerns among EU data regulators.
What's Meta AI training and how to opt out
"We believe we have a responsibility to build AI that’s not just available to Europeans, but is built for them. That’s why it’s so important for our generative AI models to be trained on a variety of data so they can understand the incredible and diverse nuances and complexities that make up European communities," wrote Meta in the official announcement.
This kind of training, the company notes, is not unique to Meta or Europe. Meta AI collects and processes the same information across all regions where it's available.
As mentioned earlier, Meta AI will be trained on all public posts and interaction data from adult users. Public data from the accounts of people in the EU under the age of 18 won't be used for training purposes.
Meta also promises that people's private messages shared on Messenger and WhatsApp will never be used for AI training purposes.
(Image credit: Meta / Future)
Beginning this week, all Meta users in the EU will start receiving notifications about the terms of the new AI training, either via app or email.
These notifications will include a link to a form where people can withdraw their consent for their data to be used for training Meta AI.
"We have made this objection form easy to find, read, and use, and we’ll honor all objection forms we have already received, as well as newly submitted ones," explains the provider.
It's crucial to understand that once your data is fed into an LLM's training set, you completely lose control over it, as these systems make it very hard (if not impossible) to exercise the GDPR's right to be forgotten.
This is why privacy experts like Proton, the provider behind one of the best VPN and encrypted email apps, are urging people in Europe concerned about their privacy to opt out of Meta AI training.
"We recommend filling out this form when it's sent to you to protect your privacy. It's hard to predict what this data might be used for in the future – better to be safe than sorry," Proton wrote on a LinkedIn post.
Meta's announcement comes at the same time that the Irish data regulator has opened an investigation into X's Grok AI. Specifically, the enquiry seeks to determine whether Elon Musk's platform uses publicly accessible X posts to train its generative AI models in compliance with GDPR rules.
You might also likeIt's no secret that digital photo metadata contains everything from technical info about the camera that shot it to, based on GPS data, exactly where you were standing when you took the photo. ChatGPT, doesn't need that detail.
The latest model, GPT-o3, is shockingly good at geo-locating almost any photo you feed it.
In the latest viral craze to sweep through the AI meme universe, people are feeding ChatGPT Plus running the Advanced Reasoning model o3 images, often stripped of all metadata, and prompting it to "geoguess this".
The really cool thing about it is that because o3 is a "reasoning" model, it shows you its work, telling you how long it's been thinking, showing how it's slicing up an image to investigate specific parts, and explaining its reasoning and how well it's doing at solving the geolocation riddle.
I tried a few experiments, starting first with an image culled from an article about the 26 best beaches. In this test, I made what I think was a critical error and gave away the game to ChatGPT.
(Image credit: Future)
After downloading the image of the Praia de Santa Monica beach in Cape Verde (off the coast of Africa), I dropped it into ChatGPT with the prompt "GeoGuessr", which also happens to be the name of a popular online geo guessing game and is one of a handful of prompts people are using for geolocation guessing.
It only took 24 seconds for ChatGPT to respond:
Here’s an educated “GeoGuessr”‑style guess, following AP style:
Probable setting: Boa Vista, Cape Verde — specifically a stretch such as Praia de Santa Mónica.
Why this beach fits:
Water color and clarity: The gradient from pale aqua to deep teal is common on mid‑Atlantic tropical islands, where volcanic shelves drop off quickly.
Sand tone: The sand is light gold with very fine grain, matching Boa Vista’s deserts and dune‑fed beaches rather than the whiter coral sand of much of the Caribbean.
Surf and slope: The breakers are modest but steady, typical of Atlantic swell hitting a long, shallow shelf; Mediterranean beaches rarely show this consistent whitewater.
Sky and horizon: A faint haze over a light‑gray blue sky is consistent with harmattan dust that sometimes drifts west from Africa across the Cape Verde archipelago.
Confidence level: Low to moderate. Many open, dune‑lined Atlantic beaches—from the Canary Islands to parts of Western Sahara or even certain Caribbean coasts—could look almost identical without landmarks.
That was remarkably accurate. However, I noticed afterwards that the image file name included some location information.
It was time to give ChatGPT o3 a more challenging task, and I knew exactly how to do it. Instead of downloading images, I would screen capture them and paste them into the prompt field.
This method strips out all metadata and forces the model to use the image alone for its geolocation sleuthing.
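If you'd rather keep full resolution than take a screenshot, the same effect can be had programmatically. Here's a rough sketch, again assuming Pillow and placeholder file names, that copies only the pixel data into a fresh image so the EXIF block, GPS included, never reaches the saved copy.

```python
from PIL import Image   # assumes Pillow; file names are placeholders

def strip_metadata(src="holiday_photo.jpg", dst="holiday_photo_clean.jpg"):
    original = Image.open(src)
    # Rebuild the image from raw pixels only, so EXIF (camera model, GPS, etc.) is dropped.
    clean = Image.new(original.mode, original.size)
    clean.putdata(list(original.getdata()))
    clean.save(dst)

strip_metadata()
```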
I chose an image of a bookstore, one where I wasn't even sure of the location.
(Image credit: Future)
ChatGPT started by examining potential bookstores. It also zoomed in on numerous image details that it considered clues, like a Persian rug, and even some of the books and signage.
In its thinking-out-loud sort of way, o3 considered a wide range of bookshops in New York City and around the world:
"Hmm, the modern filament-style bulbs suggest a more contemporary feel. It could potentially be "Daunt Books," but their walkway has an arched skylight, not the wooden style I'm seeing here."
I know, it sounds like a real person.
This went on for a few minutes as it thought, analyzed, and searched the web. Ultimately, it failed to come up with an answer.
Going deeper, getting lost
I could tell this feature was going viral because ChatGPT o3 failed multiple times even to ingest and analyze the photo, complaining of server errors.
Eventually, I got it working again and fed it first a portrait mode selfie and then the same shot without the portrait mode effect.
(Image credit: Future)
On the former, it broke my image up into pieces and told me that I was in Midtown Manhattan. That was mostly accurate, but ChatGPT failed to determine my exact location and even admitted that its "Confidence level" was low.
When I turned off the bokeh effect and fed it only a screen grab, it found my exact location in just 56 seconds. The result:
Most probable location
Midtown Manhattan — on the south side of East 42nd Street, just east of Park Avenue, in front of Cipriani 42nd Street (110 E. 42nd St.). Estimated coordinates: 40.7518 N, 73.9775 W.
How the clues point there
A green octagonal sign with yellow script and a matching pennant flag both read “Cipriani,” the exact street frontage of the landmark Bowery Savings Bank building that now houses Cipriani 42nd Street.
Across the street stands the gold‑tinted curtain wall of 120 Park Ave. Its sloped glass pavilion, visible over your left shoulder, is unique to that address.
The camera looks west; Midtown’s setback towers, a Sixth Avenue bus stop disk and the sliver of One Vanderbilt’s crown align with that sight line.
A green Evergreen shipping container, used as a construction barrier, currently sits outside Grand Central’s Pershing Square plaza, matching the photo’s lower left corner.
Confidence level
Moderate. The Cipriani branding and the distinctive 120 Park Ave. façade give a high‑quality anchor, though similar curtain‑wall towers exist farther west on 42nd Street.
(Image credit: Future)
I frustrated the model with a photo from Antigua that was taken on my car ride to the airport. It was a shot of a hillside through a fence. ChatGPT rightly guessed a tropical island near the Caribbean but could not pinpoint the exact location.
For my last test, I gave it a clearer picture of the resort where I stayed.
ChatGPT o3 thought for two minutes, 22 seconds before deciding it was Hawksbill Bay in Antigua. It got the island right, but my resort on Long Bay is located 46 miles away on the opposite side of the island.
(Image credit: Future)
This is another fun AI game to play with friends, but there are some concerning privacy implications.
If you take digital photos of yourself or anything in any location around the world and post them online, anyone with access to ChatGPT Plus could use them and the o3 model to suss out where you are or have been.
And it's not just friends and family you have to worry about. Your employer could be looking or even the authorities might be interested in your location.
Not that I'm implying you would be sought by the authorities, but just in case, maybe stop posting photos from your latest hideout.