Diego Luna and Adria Arjona have revealed how long they've known about Andor season 2's incredibly bittersweet final scene.
Speaking to TechRadar ahead of season 2's launch in late April, the pair admitted that it's been a long time since they were first told about the Star Wars series' last scene.
In fact, it sounds like Luna and Arjona have been aware of its ending since development began on Andor's first season, so it's a secret they've had to keep for many years. And, as Arjona told me, holding onto that information for so long hasn't been easy.
Full spoilers immediately follow for Andor season 2 episode 12 and Rogue One: A Star Wars Story.
Hang on... is that Cassian's child?! (Image credit: Lucasfilm/Disney+)
The final scene of Andor season 2 – and, with the Disney+ show ending after two seasons, the last one we'll ever see – reveals that Cassian and Bix, the characters that Luna and Arjona portray, have become parents.
After we watch Cassian and his android bestie K-2SO fly off to meet Galactic Empire informant Tivik on the Ring of Kafrene – a scene that directly leads into the events of 2016's Rogue One film – we're reunited with Bix on Mina-Rau. Viewers will remember this Outer Rim world as the place that Bix, Brasso, Wilmon, and B-2EMO fled to after Andor season 1's finale.
Meanwhile, in season 2 episode 9 we learn that Bix made the heart-breaking decision to leave Cassian so he could focus on helping the burgeoning Rebel Alliance topple the Empire. It's pleasing, then, that we're reunited with Arjona's fan-favorite character one final time before one of the best Disney+ shows ends.
Bix's difficult decision to leave Cassian in season 2 episode 9 makes complete sense now (Image credit: Lucasfilm/Disney+)
What many fans might not have anticipated, though, is that Bix was pregnant when she left Cassian. Season 2's final scene not only confirms this because, well, Bix is cradling their child when we're reunited with her, but the child also appears old enough for months to have passed since the birth. Each three-episode arc in season 2 takes place a year after its predecessor, which is why Bix is a mom when we meet her again.
It's a big reveal, the bittersweetness of which is three-fold. For one, it's clear that Bix made the incredibly tough decision to leave Cassian to keep their baby safe. Then there's the fact that, if Cassian had learned that Bix was pregnant, he would have given up his role in the embryonic rebellion to enjoy a peaceful family life with Bix and his newborn.
The most tear-jerking part of all of this, though, is that Cassian never learns he's become a father. After helping the Rebel Alliance to acquire the Death Star plans on Scarif in Rogue One, Cassian dies when the Empire's now-operational, planet-killing superweapon destroys Scarif – the planet he and numerous other rebel fighters are unable to flee after they successfully steal the Death Star's blueprints.
#andor #AndorSeason2 #andorspoilers cassian died for a sunrise he never got to see. pic.twitter.com/QxTpUZX5CR (May 14, 2025)
Armed with the knowledge that Cassian is survived by Bix and a child he didn't know he had, fans won't view Rogue One in the same light ever again. That's something Luna teased at length in the lead-up to season 2's arrival and, following episode 12's release, now we know why he was so insistent on Rogue One being a completely different viewing experience once everyone had finished watching Andor on one of the world's best streaming services.
"I knew [how season 2 would end]," Luna, who also serves as an executive producer on Andor, told me.
"From the beginning, we knew what the ending was going to be and we were always aiming for it. We all knew where we were going from the outset and, with this being a prequel to Rogue One, we understood the assignment. I think that's what makes this show different – we had an ending in mind, we stuck to it, and I hope it makes people re-watch Rogue One in with an entirely new perspective."
We knew what the ending was going to be and we were always aiming for it
Andor actor Diego Luna
As for Arjona's worries about slipping up and revealing Andor's official ending ahead of time, she said: "He [showrunner Tony Gilroy] told me pretty early on during season one's development. It was a big secret to keep and I'm not very good at keeping them – I get a little overexcited! But, I kept this one because it's so important to the story we wanted to tell.
"Tony gave me a very good idea of what season two's story was going to be when I initially signed onto the project," Arjona added. "Then, when he was writing season two, he told me it was going to end with that shot.
"Most showrunners or just people in the industry, they'll tell you something to get you sign up to something and then not follow through on that promise. But, everything Tony told me is exactly what happened and I'm very grateful for his honesty."
Andor seasons 1 and 2 are out now in their entirety on Disney+.
As a family tech expert, I've seen social media and tech companies do some pretty incredible things. But Google's plan to roll out Gemini, its AI chatbot, to users under 13, is wild. They gave notice to parents in an email, but it felt much more like a warning than a warm invitation.
So, why are they doing this? No one's really sure, although Google is simply joining Instagram, Snapchat, and a whole host of other platforms in the race to bring AI to nearly every facet of our lives.
Children, though, are much more vulnerable than adults — especially when it comes to online interactions. Here are my top 5 concerns about Google's recent and reckless decision to open up Gemini to kids under 13.
1. It's teaching kids to outsource thinking and creativity from a very early age
Young kids need to be practicing writing, drawing, and thinking critically with their own minds — not using scraped words and images dredged up from the depths of the internet.
ChatGPT is already proving to be a breeding ground for cheating and shortcuts in schools. Giving younger kids instant access to Gemini will only accelerate this habit of cutting corners when it comes to learning and being creative.
2. Misinformation is rife on the platform
When an AI platform like Gemini provides completely wrong information, it's called a "hallucination." That's a quaint way of saying "making things up that are total nonsense."
Google even says in its FAQs about Gemini in the Family Link app that “[Hallucinations are] a strange term, but it basically just means that Gemini sometimes gets things wrong or says things that aren’t true. [They] can sound real, and Gemini might even say them confidently.”
For adults, these types of errors may be easy to recognize and ignore, like saying that the capital of France is Cairo. But for kids, they may not know when to double-check a simple answer — let alone something complicated or nuanced. This sort of defeats the purpose of having Gemini help with homework for children.
3. Inappropriate content can present dangers to kids
Gemini can also act as a chatbot "friend" for kids, which presents multiple dangers. Other similar chatbots have been blamed for exposure to sexual content and even one child's suicide.
Of course, Google has stated that Gemini for kids will have safeguards, but there's never a guarantee that inappropriate things won't slip through the cracks — especially when these AI platforms regularly hallucinate.
Fortunately, apps like Bark can monitor your child’s saved photos, videos, and even text messages for inappropriate content they may save or share from Gemini.
4. Personal info that's shared can be hacked
Sharing personal information — from sensitive emotional states to home addresses to personal photos — with an AI platform is risky because everything can be hacked. If someone were to gain access to your child's Gemini chats, it could be stressful and even dangerous.
5. Hate speech and bias can be conveyed in Gemini responses
The way AI platforms like Gemini work is by scraping the totality of the internet for information — information that was written by other humans.
That means human biases and viewpoints can be presented by Gemini as fact, even when they aren't.
Because AI platforms provide answers based on information that humans created, they can mirror prejudices that exist in the data they're fed. This can include harmful positions about marginalized groups.
Final word
At the end of the day, technology is just another tool that can make our lives easier, but it's just that — a tool, not a necessity.
Even though calculators are used every day in advanced math, kids still learn how to count, add, subtract, multiply, and divide the old-school way in elementary school.
The same should go with AI platforms like Gemini when it comes to writing, thinking, and being creative.
Check out our comprehensive list of the best AI tools.
This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro
The modern IT landscape is growing more complex every day. It’s predicted that more than $5.61 trillion will be spent on IT this year as companies continually expand their estates.
This perpetual growth means that keeping track of everything within the IT infrastructure is becoming increasingly challenging and many organizations operate with significant blind spots in their networks.
This gives rise to the ‘unknown unknowns’ – devices that are unmonitored and unmanaged but can still access critical corporate assets. These are the most dangerous kinds of security gaps, creating vulnerabilities that cannot be closed because they are not even on the radar.
It’s time to get past any assumption that “what you can’t see won’t hurt you” – cyber attackers are specifically hunting for the hidden vulnerabilities that organizations overlook.
The problem with traditional IT asset management
These security gaps aren't typically the result of a lack of effort or investment, but a natural byproduct of IT and security teams either not having the right tools or not using their tools effectively. Some teams discover 15-30% more devices that were totally off their radar, even though they have been conducting manual audits regularly.
Much of this false sense of security is the result of traditional tools that aren’t capable of seeing the big picture. Many agent-based scanners and on-premises security tools only give a narrow view and fail to detect all assets on the network. A device might appear to be secure through the metrics of one tool but actually lack critical controls when linked with other data across the system.
This is exacerbated by highly fragmented IT landscapes. Siloed teams and disconnected tools make it impossible to achieve a unified approach to security. Each team might believe they have control of what they can see, but their data doesn’t align. Without an easy way to correlate and compare data and processes, the dots won’t be connected.
Inefficient, manual-heavy processes also limit teams to conducting periodic audits. With IT environments evolving on a daily basis, these audits are outdated the moment they’re completed.
Why these gaps are the biggest security risks
The cracks in security visibility can appear in multiple forms. One of the most common issues is employees accessing corporate systems via unmanaged devices. This is particularly prevalent when Bring Your Own Device (BYOD) policies are combined with flexible working but without the controls to back them up. Many people are still accessing corporate data using home laptops that are completely outside of the IT department's control, which amounts to ignoring a threat sitting right on your network.
We also often find networks containing dormant or misconfigured assets that appear to be safe and compliant on the surface. Our data finds around 10% of devices lack essential cybersecurity controls, and 20% aren’t properly configured. In the worst case scenario, controls aren’t functioning at all.
Audit reports may also indicate that a system is offline, but it is actually still communicating with corporate networks and, therefore, still an active security risk.
These unseen and unsecured devices are highly vulnerable to cyberattacks, providing an opportunity for threat actors to gain a foothold in the network without triggering any security alerts. Compromising an unmonitored personal machine offers a cybercriminal an easy path in, enabling them to access sensitive information on the network and exploit channels like email for Account Takeover (ATO) attacks.
How organizations can close the visibility gap
If an organization doesn't know an asset exists, it has no chance of securing it. So how do teams start finding and accounting for these dangerous unknown unknowns?
The first step is to equip IT and security teams with the right tools, along with the expertise and processes to use them. We often find companies have invested heavily in a full suite of solutions, but many of them aren’t being used effectively or may be unnecessary for the company’s needs.
This means that, even with these investments, they may not have a clear picture of the security health of their estate. It’s not about frequency, it’s about approach. To reliably find and close these gaps, security teams need both a complete view of their entire network and everything accessing it, and the assurance that this picture is completely accurate and up to date.
A Cyber Asset Attack Surface Management (CAASM) strategy is central to achieving this visibility and control. This takes a highly automated approach to asset discovery, building a list based on what is actually connected to the network and accessing systems, rather than an outdated inventory.
Once a clear and accurate picture of all assets has been established, it’s possible to start delving into how secure each device is. This means establishing if the right security controls are installed, whether they are actually functional, and if they have been properly configured. Proper validation is essential – it’s never enough to just assume controls are working.
From here, it's crucial to keep up continuous, real-time monitoring for all assets. Again, automation is key, as manually correlating IT asset data is impossible at scale. Automated tools can compare access logs with IT inventories in real time and flag inconsistencies.
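To make that concrete, here is a minimal sketch in Python of the kind of correlation such a tool performs. The sample data, field names, and checks are hypothetical stand-ins; a real CAASM platform would pull this information from network sensors, endpoint agents, and identity providers rather than hard-coded lists.

```python
# Minimal sketch: flag devices seen in access logs that are missing from the
# managed-asset inventory, or present but lacking working security controls.
# All device IDs, fields, and sample data below are hypothetical.

access_logs = [
    {"device_id": "LAPTOP-0042", "user": "alice", "resource": "crm.internal"},
    {"device_id": "HOME-PC-91", "user": "bob", "resource": "finance.internal"},
    {"device_id": "PRINTER-07", "user": "svc-print", "resource": "fileshare"},
]

asset_inventory = {
    "LAPTOP-0042": {"managed": True, "edr_installed": True, "configured_ok": True},
    "PRINTER-07": {"managed": True, "edr_installed": False, "configured_ok": False},
}

def audit(access_logs, asset_inventory):
    findings = []
    for entry in access_logs:
        device = entry["device_id"]
        asset = asset_inventory.get(device)
        if asset is None:
            # An "unknown unknown": it touches corporate resources but isn't inventoried.
            findings.append((device, "not in inventory", entry["resource"]))
        elif not asset["edr_installed"] or not asset["configured_ok"]:
            # Known device, but its controls are missing or misconfigured.
            findings.append((device, "controls missing or misconfigured", entry["resource"]))
    return findings

for device, issue, resource in audit(access_logs, asset_inventory):
    print(f"{device}: {issue} (accessed {resource})")
```

Even at this toy scale, the home PC touching the finance system surfaces as an unknown unknown, while the printer shows up as a known device whose controls aren't doing their job.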
It’s also important to move away from device discovery alone and account for application access patterns. Security teams should have a clear view of what devices are accessing key applications and data so that they can spot anomalies such as access attempts from devices outside the managed asset list.
Eliminating the blind spots for good
Security frameworks like Cyber Essentials, ISO 27001 and NIST CSF can provide a good starting point for prioritizing security needs and improving visibility. However, organizations also need to foster a culture where unknown assets are proactively hunted down and secured. Even a single unmanaged device can open the door to a major breach, so detecting them must be embedded into daily operations, not treated as an annual or quarterly audit task.
The reality is that many organizations are unaware of the extent of their IT blind spots and have little chance of closing the gaps with their current capabilities. If you don't have full visibility, you're making security decisions based on incomplete data. It's like locking your front door while leaving the windows wide open – and then pulling the blinds down so you can't see the issue.
Check out the best IT asset management software.
This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro
Sustainability isn’t a word most people associate with SaaS. Unlike sectors that rely heavily on physical infrastructure or global logistics, the software industry often escapes scrutiny when it comes to environmental impact.
With the carbon cost of digital activity rising, especially in the era of AI, that’s starting to change.
Since launching just over two years ago, EcoSend, an email marketing platform, has run entirely on renewable energy and serverless infrastructure. I spoke with James Gill, the company's founder and CEO, to understand what's holding SaaS back from being truly sustainable, and what it will take for the industry to lead, not lag, in building a greener digital future.
What does being sustainable mean to an online business like yours? How do you gauge/monitor your sustainability?
At EcoSend, sustainability isn't just a buzzword or an afterthought. It is baked into our very DNA as a company.
Since we launched EcoSend just over two years ago, we have run our platform on fully renewable energy and the latest ‘serverless’ technology. Being ‘sustainable’ means that our planet has to be at the heart of every decision we make, and every corner of our product. Historically, SaaS businesses have avoided the level of scrutiny around sustainability which other industries have been exposed to.
Companies operating in hardware, transportation, supply chains etc have started adapting to more sustainable practices as a result of this scrutiny. We strongly believe that SaaS companies should adapt in the same manner, to actively lead us all towards a better future, rather than continuing to lag behind.
To help us monitor our commitment to sustainability, we don’t just rely on internal processes, but external accreditation and reporting. We are certified by ‘Green Small Business’, as well as members of the Good Business Charter, Pledge 1%, and Terra Carta. In addition to this, we are currently in the process of certifying as a B Corp.
As a SaaS provider, what are the biggest obstacles you've encountered in your journey towards being more eco-friendly?
One of the greatest challenges we face is common to many businesses working in the sustainability space, and that is a lack of awareness. Few people are aware of the carbon cost of our digital activities, and yet if 'The Internet' were a country, it would be the fourth most polluting nation in the world.
Taking email specifically, over 360 billion emails are sent per day according to research by Statista. Each email can have a carbon cost of up to 26g. Yet few marketers are aware of the environmental impact of their campaigns. While many of us can understand the impact of boiling a kettle, the impact of digital activities like sending an email seems harder to wrap one’s head around.
So the first challenge is raising awareness. Secondary to that challenge is motivating change.
It is one thing for a company to be aware of the environmental impact of their online activity, but to motivate them to migrate their email service provider, data and campaigns is another.
We dedicate a lot of effort to making migration as easy as possible for companies, to help remove the inertia associated with changing software. This includes custom import tools and dedicated onboarding sessions with our support team to facilitate the process for companies who are keen to make the switch.
In an ideal world, what would it take to get publishers like us to report more on the ecological credentials of SaaS providers?
I believe a starting place would be better awareness of the scale of the challenge. Digital emissions often sit behind reporting on other high-emission activities, such as air travel. But the 'invisibility' of digital emissions should not mask their scale nor their impact.
With the recent, meteoric rise in AI, the impact of digital emissions will only continue to grow and be more keenly felt.
The 'carrot' motivating SaaS companies to improve their sustainability credentials will only get us so far. But increased coverage by publishers of the digital sustainability sector will lead to increased regulation of the ecological credentials of SaaS companies. This will act as an additional 'stick' in motivating companies to change to better practices and reporting. Both are required in order to facilitate change.
EcoSend has sustainability built into its core as its USP. How different is the make-up of your audience compared to your peers?
We are fortunate to work with hundreds of dedicated B Corps, charities, and purpose-driven organizations, for whom the concepts of sustainability and holistic capitalism are ingrained in their core ethos.
Our clients are ambitious and determined to grow profitably, but the key difference is that this growth never comes at the expense of people or the planet.
Whether they are entrepreneurs, SMEs, or large corporations, we learn so much from our clients' efforts to drive forward digital sustainability, and we're delighted to support them in their mission.
AI has been hailed as both the solution to the sustainability conundrum and the destroyer of forests. What's your take on this?
The rise of AI presents both threat and opportunity. The deciding factor will be how AI is deployed. ChatGPT is currently estimated to use around 39.98 million kWh per day. That's more than the annual energy consumption of over 100 nations.
Frivolous usage of AI will only hurtle us further down a path we shouldn't be on. That said, if we can consciously harness AI, we can use it to create a better future. Whether it's automating complex ESG reports, evaluating the sustainability credentials of a large supply chain, or increasing the efficiency of corporations' admin tasks, there is scope for AI to serve us, rather than contribute to our demise.
If the use of AI is both limited and conscious, then I believe it can be a solution to the sustainability challenge. Unfortunately, current trends seem to show the opposite is the case.
Where do you go from here? How do you get even more sustainable?
We are currently in the process of certifying as a B Corp. The process of applying for and maintaining B Corp certification will ensure our company is held to B Corp's high standards for ethical and sustainable business practices. Through this application, we will hold ourselves publicly accountable to ensure we 'walk the walk' of sustainability, ethics, and community across our company, software platform, and supply chain.
Beyond our B Corp application, we regularly review how to improve our sustainability credentials. These reviews have prompted us to take a more active role in our local community, through volunteering at food banks and conservation sites across London.
One example of how we have evolved: we have always sent a gift to our Enterprise clients every Christmas. We have switched from sending hampers to sending donations to Beam – an organization that supports people experiencing homelessness in the UK in finding stable work and accommodation. We have so far sponsored over 50 clients via our partnership.
In addition, we recently added a new tree-planting partner to our ecosystem to improve the monitoring and reporting of where our clients’ trees are planted.
While we are proud of what we’re doing, we know the world needs more - more from us, and more from all businesses. We are excited to push ourselves to do ever more to protect our home planet and inspire others to follow.
Talking about Daredevil: Born Again season two alongside Charlie Cox at the Disney Upfront presentation, Krysten Ritter has officially announced her return as Jessica Jones.
According to Variety, Ritter said, “It’s so great to be back… I’m so excited to bring back this iconic character, and without giving too much away, there is much more in store for Jessica Jones. This is going to be an incredible season!”
After Netflix canceled its Marvel TV series and Disney Plus began resurrecting some of the characters, it was probably just a matter of time before the Defenders found a new home.
Charlie Cox's Daredevil was the first, returning as "a really good lawyer" in 2021's Spider-Man: No Way Home; Vincent D'Onofrio's Kingpin appeared in Hawkeye; and everyone was delighted to see the reintroduction of Jon Bernthal's The Punisher in season one of Daredevil: Born Again.
And now, it’s exciting to know that Ritter’s Jessica Jones will continue the fight alongside Daredevil after The Defenders ended on Netflix.
That said, we don’t know exactly what her role is going to be in season two. It could well be a small one, but Ritter hinting that there’s “much more in store” for her character lends gravity to this announcement. And, as we saw in The Defenders, the two characters align well, so it makes sense for the two to pair up again.
The nine-episode first season wrapped up in April on Disney Plus and we’re yet to find out when the second season will release – but here's everything we know about Daredevil: Born Again season two.
Why Jessica Jones' return is a smart move for Disney (Image credit: Netflix)
While I'm excited to see the return of Jessica Jones, it also makes sense from a financial standpoint for Disney because it might well be cheaper. A series centered on a grounded investigator with super strength – but not Hulk-level strength – should be much cheaper to make than the stunt-heavy Daredevil.
Sure, Disney doesn't necessarily need to pinch pennies; however, Andor – which will complete its second and final season this week – was originally slated for four or five seasons, and I find it hard to believe its reported $645 million cost had nothing to do with that being cut down to two.
I am, of course, speculating and we really don't know what the future holds for Jessica Jones, but Ritter is a great actor, and I personally rank Jessica Jones just behind The Punisher. Even my parents, who wouldn’t touch a Marvel movie with a 10-foot pole, enjoyed the latter when it was on Netflix.
If Disney can take lesser-known characters like Jessica Jones, get good actors to play them, and create grittier, realistic stories with some superpowers sprinkled in, it could end up with a successful show that might cost closer to $100 million – unlike The Acolyte, which, despite a promising premise, was held back by weak writing and performances.
We don’t yet know if Mike Colter (Luke Cage) or Finn Jones (Iron Fist) will join the MCU in the future. I'd be interested to see how they'd play in these new-look series – but, for now, I’m excited to see my favorite alcohol-loving, super-strong detective back in action. Teaming up with Daredevil is just a bonus.
I've spent a lot of time experimenting with ChatGPT's Deep Research feature, and I've produced all kinds of strange (though comprehensive) reports. There's always been a notable gap in its functionality, though, until now. OpenAI has augmented the Deep Research feature with the ability to export your reports as fully formatted PDFs. No more ChatGPT links or screenshots necessary to share what I've learned about the Lake George monster.
It's a small interface upgrade, but one that feels like it should have been built into Deep Research from the beginning. Here’s how it works. You make your Deep Research report or pull up one from a while ago, then click on the share icon at the top of the page. You'll see that the usual 'share link' button now has a companion 'download as PDF' button. One click and your report will be a fully formed, citation-rich PDF in your downloads folder.
This export option isn't universally available at the moment. You'll need a subscription to ChatGPT Plus, Team, or Pro. Enterprise and Education users don’t have it yet, but OpenAI said it’s coming soon. That's good, as students and professionals are among those I would bet would use Deep Research the most.
Deep PDF
"You can now export your deep research reports as well-formatted PDFs — complete with tables, images, linked citations, and sources. Just click the share icon and select 'Download as PDF.' It works for both new and past reports." pic.twitter.com/kecIR4tEne (May 12, 2025)
With downloadable PDFs, you can finally do all the things you’d expect to do with your research. That might mean putting it with other research projects, sharing it with teammates, or just attaching it to an email as part of a bet you're going to win.
So yes, this is just a PDF button. But it’s a PDF button that fixes what used to be one of ChatGPT’s more frustrating aspects. Now, with downloadable PDFs, you can finally do all the things you’d expect to do with your research: archive it, share it with teammates, attach it to an email, or even – this is my new favorite – upload it to another AI.
Yes, really. With the PDF in hand, I popped it into NotebookLM, Google's own experimental research assistant. Suddenly, the AI was summarizing my Deep Research report, making flashcards, and suggesting related reading. Then I tried uploading the same PDF into a podcast tool and got an AI-generated episode script out of it. Which means, in a roundabout way, ChatGPT just became a content pipeline. One that exports research and lets other tools remix it into whatever format you need.
And that’s a huge deal.
Because the more AI tools we use, the more we’re going to need bridges between them. OpenAI doesn’t need to be the everything app, but it does need to be interoperable. Giving users a PDF option is low-hanging fruit, sure, but it’s also the kind of fruit that lets you bake an entirely new pie. It makes Deep Research portable. It gives it legs. It means I don’t have to keep 14 tabs open just to reference a well-organized write-up on the history of Japanese vending machines.
Of course, OpenAI’s implementation still has quirks. It’s a little confusing that the “Download as PDF” option isn’t in the main chat share menu. Most people will assume it’s not there unless they know where to click. And for a company whose whole pitch is about reducing friction and increasing clarity, burying this behind a second share icon feels oddly off-brand. Still, I’ll take “slightly hidden but fully functional” over “completely missing” any day.
More importantly, this change signals something else: OpenAI is listening. Maybe not always quickly. Maybe not always intuitively. But enough people have clearly asked for this (or screamed about it on Reddit) that it finally happened. And in a product landscape where most updates feel like AI models arguing over who’s better at summarizing Aristotle, it’s refreshing to get a feature that solves a real-world problem.
The new Samsung Galaxy S25 Edge combines some of Samsung's best technology from its leading flagship phones and adds some new wrinkles to deliver an exciting new Android smartphone that is already widely known as one of the slimmest handsets on the market.
It doesn't have a new chip, but the one it has, the Qualcomm Snapdragon 8 Elite for Galaxy, is the best you can possibly get for Android. It lacks a telephoto lens, but includes the equivalent of Samsung's best 200MP camera. It's a mostly uncompromising smartphone that promises virtually every feature you could find on the more expensive Galaxy S25 Ultra (save the S Pen).
But what you're paying $1,099.99 / £1,099 / AU$1,849 for is an incredibly svelte titanium frame, one that feels thin and, at just 163 grams, exceptionally light. Even though I know the specs (5.8mm thick), I struggled to find a way to put that measurement in perspective.
That's why I brought a small collection of quarters to my first hands-on experience with the Samsung Galaxy S25 Edge. I think a stack of quarters next to Samsung's newest Galaxy S phone helps contextualize these thickness claims.
Is it as thin as a stack of six quarters? What about five? And how does that compare to other phones, even classic thin handsets of a bygone era?
That's the thing about thinness. It seems to go in and out of fashion. Sometimes we prefer zaftig for, say, the battery benefits it offers, and other times, we grow weary of carrying all that extra weight and dream of paper-thin phones.
Thin phones, though, have their risks.
The arc of time bends (Image credit: Getty Images)
When we think back to the icon of handset thinness, the iPhone 6 Plus, many of us also less than fondly remember bendgate. That was when Apple built a 6.9mm thick iPhone that could not stand up to a butt, or at least withstand being squeezed between a butt and a hard surface like a chair or concrete ledge.
A year later, Apple followed that up with the 7.1mm iPhone 6s and, after that, the iPhone 7, which featured a reinforced, higher-grade aluminum. That ended Bendgate and, for a time, our desire for ultra-thin phones.
Now, though, they're back. Apple is reportedly prepping an iPhone 17 Air that might surpass even the S25 Edge in thinness. Samsung, though, has the edge, working on thin phones even before the S25 Edge release.
Company execs told me that Samsung's work on its folding line has well prepared it for building this slim handset. Unfolded, the Galaxy Z Fold 6 is just 5.6mm thick. Yes, even thinner than the S25 Edge.
If you're wondering why the S25 Edge is slightly thicker than the unfolded Z Fold, remember that Samsung can split that device's 4,200 mAh battery between the two halves. Samsung had to fit the S25 Edge's 3,900 mAh battery into a single 5.8mm panel.
Could this little slice of technology heaven lead to another "bendgate"? I doubt it.
The flat titanium band surrounding the S25 Edge's body is far more rigid and less bendable than the curved aluminum body of the iPhone 6 Plus. Also, Samsung is using the brand-new Gorilla Glass Ceramic 2 on the display to further strengthen the phone.
It was with all this in mind that I pulled out my stack of quarters and got to work measuring true thinness based on a coin scale.
(Image credit: Lance Ulanoff / Future)
While this isn't an exact science, the modern US quarter has reliably remained at 1.75mm thickness since at least the 1930s. Still, my results make sense when you compare the actual thickness of each phone in millimeters. The S25 Edge is the clear leader.
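If you'd rather skip the coin-stacking, here's the back-of-the-envelope arithmetic in Python, using only the figures quoted in this article (a 1.75mm quarter, the 5.8mm S25 Edge, and the 5.6mm unfolded Galaxy Z Fold 6); a quick sketch, nothing more.

```python
# Back-of-the-envelope coin-scale math using only figures quoted in this article.
QUARTER_MM = 1.75  # thickness of a modern US quarter

phones_mm = {
    "Samsung Galaxy S25 Edge": 5.8,
    "Samsung Galaxy Z Fold 6 (unfolded)": 5.6,
}

for name, thickness in phones_mm.items():
    quarters = thickness / QUARTER_MM
    print(f"{name}: {thickness}mm is roughly {quarters:.1f} quarters tall")
```

By that math, the S25 Edge stands a little over three quarters tall – comfortably under the five- and six-quarter stacks I was wondering about.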
My tiny experiment proves that in the most physical sense, the Samsung Galaxy S25 Edge is one of the thinnest phones on the market. Sure, it might not seem like much of a test. But, hey, what else was I going to do with this stack of quarters?
For now, I suggest you head to a Samsung retailer to at least touch the super-thin Galaxy S25 Edge for yourself. Quarters are optional.
A new survey from Liquid Web suggests switching away from WordPress is paying off for the majority of users.
While much of the conversation around CMS migration focuses on risk, the new data shows that many businesses are seeing clear benefits after making the move.
Of the former WordPress users surveyed, 7 in 8 said they don’t regret switching to a different CMS. Nearly 70% reported no increase in costs after the transition, and 72% said they’re not considering a return to WordPress. Shopify was the most common destination among switchers at 42%, followed by Wix at 38% and Squarespace at 6%.
Plugin fatigue
Plugin fatigue is a common pain point on WordPress, and 78% saw improvements in this department after the switch. While 22% said fatigue worsened, the majority experienced relief from the updates, compatibility issues, and maintenance that often come with large plugin stacks.
The process of switching itself wasn’t as difficult as some were expecting either. Only 23% ran into trouble with content migration, meaning 77% were able to move their sites without major disruption.
The same percentage said they didn’t find the new CMS harder to learn, and only a relatively small group (21%) said they missed features they had before.
Tiffany Bridge, Product Manager, eCommerce Applications at Liquid Web, said CMS platforms can deliver better experiences when setup is done right. “Many users leave WordPress looking for simplicity and come back realizing it wasn’t the CMS, it was the setup. Hosting makes all the difference between fatigue and flow.”
While WordPress still appeals to many for its flexibility, the survey shows that switching isn’t always a downgrade. In fact, for many users, it’s a step toward a simpler, more manageable experience.
The numbers point to a growing group of businesses that have made the move, and aren’t looking back.
Nvidia has signed a strategic agreement with HUMAIN, a new AI-focused subsidiary of Saudi Arabia's Public Investment Fund, as part of an ambitious plan to establish the kingdom as a global leader in artificial intelligence by 2030.
The partnership includes large-scale infrastructure development, workforce training, and a massive hardware rollout featuring hundreds of thousands of Nvidia’s latest GB300 chips.
HUMAIN plans to deploy up to 500 megawatts of AI computing capacity, beginning with 18,000 GB300 Grace Blackwell superchips powered by Nvidia’s InfiniBand networking. These chips will be used in hyperscale data centers across Saudi Arabia, designed to train and operate sovereign AI models at scale.
Broader digital transformation goals
The move is intended to support the country's broader digital transformation goals and economic diversification outlined in Vision 2030.
The partnership also includes the adoption of Nvidia’s Omniverse platform. According to HUMAIN, this will enable the development of physical AI and robotics applications across industries such as manufacturing, logistics, and energy.
By using digital twins and simulation tools, companies in the kingdom will be able to optimize physical environments for greater efficiency, safety, and sustainability.
Workforce development is a key component of the collaboration. HUMAIN and Nvidia plan to upskill thousands of Saudi citizens and developers in areas such as robotics, simulation, and digital twin technologies.
“AI, like electricity and internet, is essential infrastructure for every nation,” said Jensen Huang, founder and CEO of Nvidia. “Together with HUMAIN, we are building AI infrastructure for the people and companies of Saudi Arabia to realize the bold vision of the Kingdom.”
His Excellency Eng. Abdullah Alswaha, Minister of Communications and Information Technology, added: “This lays the groundwork for a new industrial revolution, anchored in advanced infrastructure, talent and global ambition. This is how Saudi Arabia continues to lead as a partner of choice in shaping the future of AI.”
"Our partnership with Nvidia is a bold step forward in realizing the Kingdom's ambitions to lead in AI and advanced digital infrastructure," said Tareq Amin, CEO of HUMAIN. "Together, we are building the capacity, capability and a new globally enabled community to shape a future powered by intelligent technology and empowered people."
If you've ever stared at a pile of Lego bricks and despaired at making them match the vision in your head, you may be in luck thanks to a new, free AI tool that turns text prompts into real, buildable Lego designs. Describe what you want to build and the aptly named LegoGPT will produce a step-by-step plan using a limited palette of real Lego bricks, with a handy list of which bricks to use and how many you'll need.
To function in the real world, LegoGPT is notably cautious in its approach. While many AI image generators can comfortably spit out wild 3D shapes with zero regard for the laws of physics, LegoGPT runs every design through a literal physics simulator. It checks for weak points. It identifies problem bricks. And if it finds something unstable, it starts all over, reworks the layout, and tries again. It's like how most AI chatbots are a kind of auto-complete for words, hunting for the right one to add to a sentence. Except LegoGPT is predicting the next brick to auto-build a (digital) Lego model.
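For the programmatically curious, that loop – propose a design, test it, rework it if it fails – can be sketched in a few lines of Python. Everything below (the brick format, the toy stability rule, the function names) is a hypothetical illustration of the general generate-then-validate pattern, not LegoGPT's actual code, which lives on the researchers' GitHub.

```python
import random

# Hypothetical sketch of a generate-then-validate loop in the spirit of what
# the article describes: propose a brick layout, run a (toy) stability check,
# and rework it until it passes. None of this is LegoGPT's real code.

def propose_layout(rng, num_bricks=8):
    """Stand-in for the model predicting one brick at a time."""
    x, y = 10, 10
    layout = []
    for z in range(num_bricks):
        # Drift each new brick a little relative to the one below it.
        x += rng.randint(-3, 3)
        y += rng.randint(-3, 3)
        layout.append((x, y, z, 2, 4))  # (x, y, height, width, depth)
    return layout

def is_stable(layout):
    """Toy physics check: each brick must overlap the brick below it."""
    for (x, y, _, w, d), (nx, ny, _, _, _) in zip(layout, layout[1:]):
        if abs(nx - x) > w or abs(ny - y) > d:
            return False  # the upper brick has nothing to rest on
    return True

def build(max_attempts=50, seed=0):
    rng = random.Random(seed)
    for attempt in range(1, max_attempts + 1):
        layout = propose_layout(rng)
        if is_stable(layout):
            return layout, attempt
    return None, max_attempts

layout, attempts = build()
print(f"Stable layout found after {attempts} attempt(s)" if layout
      else "No stable layout found")
```

The real system swaps the random proposer for a trained model and the toy overlap rule for a proper physics simulation, but the control flow the article describes is the same: generate, verify, retry.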
With LegoGPT's answers, you can learn how to turn that colorful plastic pile into brick art. You don’t need a PhD in structural engineering or a childhood spent mastering Technic sets, or even the Lego-building robot shown off in a video made by the Carnegie Mellon University researchers behind the new tool.
Brick AI
The magic behind LegoGPT comes from a very large dataset called StableText2Lego. The researchers made the dataset by building more than 47,000 stable Lego structures and pairing them with text captions describing their appearance. Rather than spend months or years on that tedious chore, the researchers roped in OpenAI's GPT-4o AI model to analyze rendered images of the Lego structures from 24 different angles and come up with a detailed description they could use.
LegoGPT’s code, data, and demos are all publicly available on the researchers’ website and GitHub. There are some caveats. LegoGPT currently only builds with eight standard brick types, all rectangular, and operates inside a 20-brick cubed space. So you’re not getting intricate curved architecture or sprawling castles just yet. Think more early-70s Lego catalog than 4,000-piece Millennium Falcon. Still, the results are fun and very sturdy.
(Image credit: LegoGPT)
The broader implication, generating real-world objects with AI from casual language, is what makes LegoGPT exciting beyond the novelty of making toy blueprints from text descriptions. It promises designs that aren't just possible, but verified to be physically buildable. This could become a cornerstone of prototyping, architectural modeling, and, of course, a weekend activity for Lego hobbyists. But don't dwell too much on the details. You don't need to understand the underlying math to enjoy it.
The limitations in size, scope, and brick variety ensure LegoGPT will not replace Lego’s in-house designers anytime soon, but it is a leap toward making design more accessible, playful, and connected to the real world. Also, right now, the system doesn’t care about color, unless you ask it to. The default focus is purely structural. However, the researchers have already added an optional appearance prompt feature that lets you layer on color schemes. So if you want your electric guitar built in metallic purple, go for it.
Google put on an Android Show today to offer a glimpse at its upcoming interface changes with Android 16, in addition to a slew of Gemini news. It didn't show off any new devices running the new look; instead, Google offered advice to developers and an explanation of its overall design philosophy. That philosophy seems very… purple.
The new Material 3 Expressive guidelines call for extensive use of color (especially shades of purple and pink), new shapes in a variety of sizes, new motion effects when you take action, and new visual cues that group and contain elements on screen.
A screengrab of examples from Google's Material 3 Expressive blog post (Image credit: Google)
Google says it has done more research on this design overhaul than any other design work it's done since it brought its Material Design philosophy to Android in 2014. It claims to have conducted 46 studies with more than 18,000 participants, but frankly, I'm not a UX designer, so I don't know if that's a lot.
Google's Material 3 Expressive is the new look of Android 16
After all of that work, Google has landed on this: Material 3 Expressive. The most notable features, once you get past the bright and – ahem – youthful colors, are the new motion effects.
For instance, when you swipe to dismiss a notification, the object you are swiping will be clear while other objects will blur slightly, making it easier to see. The other notifications nearby will move slightly as you swipe their neighbor. Basically, there will be a lot more organic-looking motion in the interface, especially on swipes and the control levers.
New shapes are coming to Android 16 with Material 3 Expressive (Image credit: Google)
There will also be new type styles built into Android 16, with the ability to create variable or static fonts. Google is adding 35 more shapes to its interface library for developers to build with, along with an expanded range of default colors.
Google didn't say that its new Material 3 Expressive design language was targeting iPhone fans, but the hints are there. The next version of Android isn't just chasing a cleaner, more organized look; instead, Google wants to connect with users on an 'emotional' level. According to Google's own research, the group that loves this new look the most is 18-24 year olds, i.e., the iPhone's most stalwart fan base.
Will this look win over the iPhone's biggest fans? We'll see in the months ahead (Image credit: Google)
In its official blog post, Google says, "It's time to move beyond 'clean' and 'boring' designs to create interfaces that connect with people on an emotional level." That connection seems to be much stronger among young people. Google says that every age group preferred the new Material 3 Expressive look, but 18-24 year olds were 87% in favor of it.
Apple’s iPhone fanbase is strongest in this age group, if not the generation that’s even younger. It makes sense that Google is making big changes to Android. In fact, this refresh may be overdue. We haven’t seen many inspiring new features in smartphones since they started to fold, and foldable phones haven’t exactly caught on. I’m surprised Google waited this long to improve the software, since there wasn’t any huge hardware innovation in the pipeline (temperature sensors, anybody?).
Material 3 Expressive is coming to more than just Android phones
The new Material 3 Expressive look won't be limited to Android 16. Google says Wear OS 6 will get a similar design refresh, with more colors, motion, and adaptable buttons that change shape depending on your watch display.
Wear OS watches will also be able to use dynamic color themes, just like Android phones. Start with an image or photo and Wear OS will create a matching color theme for your watch to complement what it sees.
Google demonstrated new buttons that grow as they fill more of the Wear OS display (Image credit: Google)
Even Google's apps will start to look more Expressive. Google says apps like Google Photos and Maps will get an update in the months ahead that will make them look more like Android 16.
Google borrows a few iPhone features for Android 16, including a Lockdown Mode
Google also demonstrated Live Updates, a new feature that borrows from the iPhone to show you the progress of events like an Uber Eats delivery. The iPhone does this in the Dynamic Island, and Google is adding this feature to the top of the Android 16 display.
Security was a big focus of the Android Show, starting with new protections against calling and text message scams. Google is securing its phones against some common scammer tactics. For instance, scammers might call pretending to be from your bank and might ask you to sideload an app.
With Android 16, you won’t be able to disable Google’s Play Protect app-scanner or sideload any apps while you are on a phone call. You also won’t be able to grant permission to the Accessibility features, a common workaround to get backdoor access to a phone.
Google’s Messages app will also get smarter about text message scams. It will filter out scam messages that ask you to pay overdue toll road fees or try to sell you crypto.
The iPhone already has an extreme protection mode called Lockdown (Image credit: Future / Philip Berne)
Google is also enabling Advanced Protection, its own version of Apple's Lockdown Mode, on Android 16. Advanced Protection is a super high-security mode that offers the highest level of protection against attacks, whether over wireless networks or physically through the USB port.
Basically, if you’re a journalist, an elected official, or some other public figure and you think a government is trying to hack your phone, Google’s Advanced Protection should completely lock your phone against outside threats.
If you don't need that much security but you still want more peace of mind, Google is improving its old Find My Device feature. Android 16 will introduce the Find Hub, which will be a much more robust location to track all of your devices, including Android phones, wearables, and accessories that use ultra-wideband (UWB), similar to Apple AirTags.
Google is introducing new UWB capabilities to help find objects nearby, and those will roll out to Motorola’s Moto Tag first in the months ahead. The new Find Hub will also be able to use satellite connectivity to help locate devices and keep users informed. Plus, if you lose your luggage, Google is working directly with certain airlines like British Airways to let you share your tag information so they can go look for the bag they lost.
Gemini is coming to your car... and your TV... and your watch, and...
Today's Android Show wasn't all about Android. Google also made some big announcements about Google Gemini. Gemini is coming to a lot more devices. Gemini is coming to Wear OS watches. Gemini is coming to Android Auto and cars that run Google natively.
Gemini is coming to Google TV. Gemini is even coming to Google’s Android XR, a platform for XR glasses that don’t even exist yet (or at least you can’t buy them). For a brief moment in the Android Show, we caught a glimpse of Google's possible upcoming glasses.
Could these be Google's new XR glasses? Hopefully we'll find out at Google I/O (Image credit: Google)
You'll be able to talk to Gemini Live and have a conversation in your car on the way to work. 'Hey Gemini, I need advice on asking my boss for a promotion!' or 'Hey Gemini, why is my life so empty that I'm talking to a machine in my car when I could be listening to music or a true crime podcast?'
I may sound like an AI skeptic, but Google’s own suggestions are equally dystopian. Google says on the way to your Book Club, you might ask Gemini to summarize that book you read ages ago (and mostly forgot) and suggest discussion topics. That does not sound like a book club I want to join.
Google did not offer any specific timing for any of the features mentioned in the Android Show, and only said these concepts would appear in the months ahead. It’s unusual for Google to share so much news ahead of Google I/O, which takes place May 20-21 near its HQ in Mountain View, CA. I’ll be on the scene at Google I/O with our News Chief Jake Krol to gather up anything new.
With the Pixel 9a launch already passed, and now team Android spilling all the beans, I suspect Google I/O is going to be mostly about AI. Google is getting these tidbits out of the way so that I don’t waste time asking about new phones when it wants to talk more about Gemini and all the new AI developments. Or perhaps, even better, the Android XR news today was just a hint of what’s to come. Stay tuned, we’ll know more next week!