Almost 20 years have passed since the last Alien vs Predator (AvP) movie landed in theaters.
But, based on the first trailer for Predator: Badlands, another installment in the much-maligned crossover film franchise – and an entry that could redeem that movie series in many people's eyes to boot – might be here sooner than anyone expected.
That's because Badlands, one of this year's new movies that'll arrive on November 7, drops some not-so-subtle hints that it's an AvP movie in all but name. Oh, and there's also an unexpected reference to another sci-fi film franchise that indicates that it may exist in the same universe as the Alien and Predator movie series.
But enough waffling! Watch Badlands' first teaser below and see if you can spot the aforementioned clues before I explain all.
As you'll have noticed, there's a very clear reference to the Alien movies in Badlands' first-look footage via Elle Fanning's character, who's called Thia.
For those who might have missed it, though: at the 0:25 mark, Thia's eyes roll back into her head to reveal a telling sign that she's not actually human, but an android.
Thia is no ordinary robotic humanoid, either – as *ahem* eagle-eyed viewers noted, a logo imprinted on the back of her eyes indicates that she was created by the Weyland-Yutani Corporation. That's the sinister fictional megacorporation from the Alien franchise, which prioritizes profits and experimentation on dangerous alien lifeforms over the lives of its employees.
Elle Fanning's Thia is a Weyland-Yutani synthetic android? Interesting... (Image credit: 20th Century Studios)
That's not the only reference to Weyland-Yutani in Badlands' teaser. It's hard to make out but, at the 0:50 mark, a smashed-up, orange-colored truck can be seen on the right side of the screen – and it's adorned with the Weyland-Yutani logo too.
Oh, and before I forget: scroll back to the very start of the teaser and, amid the other skulls that are hung on the wall of what's likely the Predator's dwelling, look closely at the skull sitting at the center of the collection. Look familiar? That's because it's the skull of one of the aliens in the Independence Day film franchise.
Neat, eh? Although that now raises the question of whether those movies also exist in the same universe as the Alien and Predator films...
Why Predator: Badlands could secretly be the third entry in the Alien vs Predator movie series
Alien: Romulus was well received by fans and critics alike last August (Image credit: 20th Century Studios)
Of course, those Weyland-Yutani Easter eggs could be nothing more than simple reminders that xenomorphs (the aliens in Alien) and the yautja (the actual name of the Predator species) exist in the same universe. However, wouldn't it be cooler if, at some point, Badlands performs a narrative bait and switch and turns into an AvP film?
Awful though the previous two AvP movies are, I certainly hope so – and that's down to the individuals who have revived the Alien and Predator series, two stalwart film franchises of the '80s and '90s.
Dan Trachtenberg, who directs Badlands, is also the filmmaker behind 2022's Predator prequel movie Prey. That flick, which is available on Hulu (US) and Disney+ (internationally), is the one I labeled "the best Predator movie since the 1987 original" in my Prey review.
With Trachtenberg also on directing duties for Badlands, I'm confident he'll deliver back-to-back brilliant entries as part of his wider Predator franchise reboot. Indeed, in addition to Prey and Badlands, another Predator film – an animated anthology flick called Predator: Killer of Killers – will also make its debut this year. It'll be available to stream at home from June 6.
Did you spot the Weyland-Yutani logo on this damaged vehicle? (Image credit: 20th Century Studios)
As for the Alien movies, filmmaker Fede Alvarez gave us that franchise's best installment since 1986's Aliens with last year's Alien: Romulus. With a sequel to that big-screen offering currently in development, the future looks similarly bright for the xenomorph and facehugger-starring sci-fi horror film series.
Okay, but what's all of this got to do with the possibility that Badlands is actually an AvP movie? The trailer's Weyland-Yutani nods aside, Alvarez has previously outlined what he'd like to see from a new AvP film.
Speaking to Collider in February, the Argentine said: "The way I would do it [a new AvP film], most likely, if it could be done this way… It’s harder to keep secrets online… The best AvP will be the one that you don’t know is AvP until the other guy shows up.
"You think you’re watching a Predator movie, and then they land in some place and there are creatures, and f*****g hell, it’s a Xenomorph. That would get me. 'F**k yeah!' You’d go crazy.
"Or, vice versa," Alvarez continued. "You’re in an Alien movie, and then suddenly a mysterious creature is there, and you can hear that sound, and you see the cloak, and you go, 'Is that a f****g Predator?' And then turns out it is. That would be the way to do it, don’t you think? Once you put it in the title, it’s like, ‘Spoiler alert.'"
When pressed by Collider as to whether he'd ever team up with Trachtenberg to make such a film, Alvarez added: "I can’t speak for Dan. At some point, once there’s another Alien, and I know he’s working on a sequel to Prey, one day if we feel like, 'Yeah, that’s what we cannot wait to see,' I think that’s a movie we could do."
Is all of that Alvarez's coded way of suggesting Badlands could be another AvP film? Probably not, but I live in hope that I'm wrong!
Nintendo has issued an update on the Switch 2's wireless GameCube controller, stating that it will be usable with games outside of the Nintendo Switch Online GameCube library.
However, players should expect to encounter issues or inconsistencies. In a statement to Nintendo Life, a Nintendo spokesperson said:
"The Nintendo GameCube controller is designed for use with the Nintendo GameCube – Nintendo Classics collection of games and is an optional way to play those games.
"Since it doesn’t have all the buttons and features found in other controllers that can be used with the Nintendo Switch 2 system, there may be some issues when playing other games. The Nintendo GameCube controller can only be used on Nintendo Switch 2 and is not compatible with Nintendo Switch."
The Nintendo Switch 2 GameCube controller does have some notable additions to keep it more in line with the Joy-Con 2 and Nintendo Switch 2 Pro Controller. That includes Home, Capture and GameChat buttons, as well as a 'ZL' button at the top presumably to act as a bumper opposite the 'Z' button.
While most buttons do seem present and accounted for compared to other Switch 2 controllers, the GameCube's face button layout is certainly unorthodox, and this may be where those aforementioned issues stem from.
It is a curious thing that the new GameCube controller won't be compatible with the original Nintendo Switch given that console got its own wired GameCube controller that launched alongside Super Smash Bros. Ultimate.
Hopefully, this new variant will eventually have control schemes built in for some Nintendo Switch 2 games, potentially the upcoming Kirby Air Riders or a future Smash Bros. title. Fingers crossed.
Sony has released a new PlayStation 5 software update that sees the return of its classic console designs.
The update rolls out today and, as detailed in a new PlayStation blog post, contains two new enhancements based on player feedback.
The first and most notable feature of the update is the return of PlayStation’s 30th anniversary PS5 UI designs, which honour the PS1, PS2, PS3, and PS4.
These retro console designs were limited time when they were first released, but Sony has decided to bring them back.
Players will be able to find the designs under a new feature called ‘Appearance’ under the Settings menu.
“Due to the overwhelmingly positive response from our community, we’re happy to bring back the look and feel of the four console designs for players to customize the home screen on PS5!” Sony said.
(Image credit: Sony)
Then there’s the new PS5 Audio Focus feature, which is designed for increasing immersion through presets that “amplify soft sounds to meet your hearing preferences, ensuring a clearer audio experience when using headphones or headsets.”
These presets will make it easier for players to distinguish sounds like in-game character dialogue, subtle sound effects, and party voice chat, “for a more immersive gaming experience”.
PS5 users can find the new feature in the sound settings, or the control centre during gameplay, and can choose between four presets, each with three different levels: Weak, Medium, and Strong. You can check out the presets below.
LinkedIn is expanding a feature designed to combat one of the biggest problems in online business - identity theft and authenticity challenges.
The site is taking its “Verified on LinkedIn” feature even further, extending the verification system beyond its platform.
External sites, such as Adobe’s Content Authenticity app and Behance, can now integrate LinkedIn verification as well, allowing creators to display the “Verified on LinkedIn” badge on their profiles, too.
Host of new threats
Identity theft and social engineering are among the most common methods of cybercrime today.
Major criminal organizations, including state-sponsored groups such as North Korean actors Lazarus, often create fake accounts on LinkedIn and use them to target high-profile individuals such as CEOs, software developers, or government employees.
In fact, one of the biggest crypto thefts happened after Lazarus created a fake recruiter profile and a fake job listing on LinkedIn, then invited a blockchain developer for an interview. During the interview process, the developer was served a piece of malware, which enabled the theft of roughly $600 million in different tokens.
This attack campaign has been ongoing for months, with cybersecurity researchers dubbing it Operation DreamJob.
Other groups have followed suit, working both sides of the hiring process. Besides creating fake recruiter profiles and fake jobs, they also create fake software developer personas, landing jobs at major companies and using that privileged access to steal sensitive data.
“Verified on LinkedIn” was originally launched in April 2023 as a free verification system that allows users to confirm their identity, workplace, or education using government-issued IDs, work email addresses, or third-party services such as CLEAR or Microsoft Entra. LinkedIn users who verify their identities get a badge added to their profile page, showing other platform users that the person is authentic.
A year later, the company expanded this feature to include recruiter verification in an attempt to combat job-related scams. Verified recruiters receive a checkmark badge on their profiles, too.
A two-decade-old game has produced a marked demonstration of just how strange the world of bugs can be, after Windows 11 24H2 appeared to break something in Grand Theft Auto: San Andreas – though I should note upfront that this wasn’t Microsoft’s fault in the end.
Neowin picked up on this affair, which was explained at length – in very fine detail, in fact – by a developer called Silent (who’s responsible for SilentPatch, a project dedicated to fixing up old PC games, including GTA outings, so they work on modern systems).
Grand Theft Auto: San Andreas was released way back in 2004, and the game has a seaplane called the Skimmer. What players of this GTA classic found was that after installing the 24H2 update for Windows 11, the Skimmer had suddenly vanished from San Andreas.
The connection between applying 24H2 and the seaplane’s disappearance from its usual spot down at the docks wasn’t immediately made, but the dots were connected eventually.
Then Silent stepped in to investigate and ended up diving down an incredibly deep programming rabbit hole to uncover how this happened.
As mentioned, the developer goes into way too much depth for the average person to care about, but to sum up, they found that even when they force-spawned the Skimmer plane in the game world, it immediately shot up miles into the sky.
The issue was eventually nailed down to the ‘bounding box’ for the vehicle – the invisible box defining the boundaries of the plane model – which had an incorrect calculation for the Z axis (height) in its configuration file.
For various reasons and intricacies that we needn’t go into, this error was not a problem with versions of Windows before the 24H2 spin rolled around, purely by luck I might add. Essentially, the game read the positioning values of the previous vehicle before the Skimmer (a van), and this worked okay (just about – even though it wasn’t quite correct).
But Windows 11 24H2 changed how Grand Theft Auto: San Andreas' code behaved, so it no longer read the values of that van – and with the error now exposed, the plane effectively received a (literally) astronomical Z value. It wasn’t visible in the game any longer because it had shot up into space.
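To make the mechanism a little more concrete, here's a simplified, hypothetical sketch in Python – not Silent's findings verbatim, GTA's real data format, or actual C++ memory behavior – showing how a record with a missing value can be silently "fixed" by stale data left over from the previous record, and how the bug surfaces the moment that reuse behavior changes.

```python
# Hypothetical illustration of the class of bug described above.
# A parser reuses one scratch record between vehicles; an entry with a
# missing field silently inherits whatever the previous entry left behind,
# until the reuse behavior changes and the gap is exposed.

VEHICLE_DATA = [
    "van bbox_z=2.4",   # parsed immediately before the seaplane
    "skimmer",          # bbox_z is missing from this made-up entry
]

def parse_vehicles(reset_scratch_between_records: bool) -> dict:
    scratch = {"bbox_z": 0.0}          # one buffer reused for every record
    results = {}
    for line in VEHICLE_DATA:
        if reset_scratch_between_records:
            scratch = {"bbox_z": 9e9}  # stand-in for 'garbage' contents
        name, _, rest = line.partition(" ")
        for field in rest.split():
            key, _, value = field.partition("=")
            scratch[key] = float(value)
        results[name] = dict(scratch)
    return results

# Old behavior: the stale van value papers over the missing field.
print(parse_vehicles(reset_scratch_between_records=False)["skimmer"])  # bbox_z = 2.4
# Changed behavior: the missing field is exposed as an absurd height.
print(parse_vehicles(reset_scratch_between_records=True)["skimmer"])   # bbox_z = 9e9
```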
And so the mystery of the disappearing seaplane was solved – the Skimmer, in fact, was orbiting a distant galaxy somewhere far, far away from San Andreas. (I feel a spin-off mash-up game coming on).
(Image credit: Rockstar Games)
Analysis: Too quick to pin the blame
This is a rather fascinating little episode that shows how tiny bugs can creep in and, by chance, go completely unnoticed for 21 years until an entirely unrelated operating system update changes something that throws a wrench into the coding works.
It also serves to underline a couple of other points. Firstly, that there’s a complex nest of tweaks and wholesale changes under the hood of the 24H2 update, which comes built on a new underpinning platform. That platform is called Germanium, and it’s a pivotal change that was required for the Arm-based (Snapdragon) CPUs that were the engines of the very first Copilot+ PCs (which was why 24H2 was needed for those AI laptops to launch).
In my opinion, this is why we’ve seen more unexpected behavior and weird bugs with the 24H2 update than any other upgrade for Windows 11, due to all that work below the surface of the OS (which is capable of causing unintended side effects at times).
Furthermore, this affair highlights that some of these problems may not be Microsoft’s doing, and I’ve got to admit, I’m generally quick to pin the blame on the software company in that regard. My first thought when I started reading about this weird GTA bug was – ‘what a surprise, yet more collateral damage from 24H2’ – when in fact this isn’t Microsoft’s fault at all (but rather Rockstar’s coders).
That said, much of the flak being aimed at Microsoft for the bugginess of 24H2 is, of course, perfectly justified, and the sense still remains with me that this update – and the new Germanium platform which is its bedrock – was rather rushed out so that Copilot+ PCs could meet their target launch date of summer 2024. That, too, may be an unfair conclusion, but it’s a feeling I’ve been unable to shake since 24H2 arrived.
It's fair to say there's a sort of uneasiness when it comes to AI, an unknown that makes the general public a little on edge, unsure of what to expect from chatbots like ChatGPT in the future.
Well, one Reddit user got more than they bargained for in a recent conversation with ChatGPT's Advanced Voice Mode when the AI voice assistant started to speak like a demon.
The hilarious clip has gone viral on Reddit, and rightfully so. It's laugh-out-loud funny despite being terrifying.
Does ChatGPT voice turn into a demon for anyone else? (from r/OpenAI)
In the audio clip, Reddit user @freddieghorton asks ChatGPT a question related to download speeds. At first, ChatGPT responds in its "Sol" voice, but as it continues to speak, it becomes increasingly demonic.
The audio has clearly bugged out here, but the result is one of the funniest examples of AI you'll see on the internet today.
The bug happened in ChatGPT version v1.2025.098 (14414233190), and we've been unable to replicate it in our own testing. Last month, I tried ChatGPT's new sarcastic voice called Monday, but now I'm hoping OpenAI releases a special demonic voice for Halloween so I can experience this bug firsthand.
We're laughing now
You know, it's easy to laugh at a clip like this, but I'll put my hands up and say, I would be terrified if my ChatGPT voice mode started to glitch out and sound like something from The Exorcist.
While rationality would have us treat ChatGPT like a computer program, there's an uneasiness created by the unknown of artificial intelligence that puts the wider population on edge.
In Future's AI politeness survey, 12% of respondents said they say "Please" and "Thank You" to ChatGPT in case of a robot uprising. That sounds ludicrous, but there is genuinely a fear, whether the majority of us think it's rational or not.
One thing is for sure, OpenAI needs to fix this bug sooner rather than later before it incites genuine fear of ChatGPT (I wish I were joking).
Welcome to our liveblog for Adobe Max London 2025. The 'creativity conference', as Adobe calls it, is where top designers and photographers show us how they're using the company's latest tools. But it's also where Adobe reveals the new features it's bringing to the likes of Photoshop, Firefly, Lightroom and more – and that's what we've rounded up in this live report direct from the show.
The Adobe Max London 2025 keynote kicked off at 5am ET / 10am BST / 7pm AEST. You can re-watch the livestream on Adobe's website and also see demos from the show floor on the Adobe Live YouTube channel. But we're also at the show in London and will be bringing you all of the news and our first impressions direct from the source.
Given Adobe has been racing to add AI features to its apps to compete with the likes of ChatGPT, Midjourney and others, that was understandably a big theme of the London edition of Adobe Max – which is a forerunner of the main Max show in LA that kicks off on October 28.
Here were all of the biggest announcements from Adobe Max London 2025...
The latest news
Good morning from London, where it's a classic grey April start. We're outside the Adobe Max London 2025 venue in Greenwich, where there'll be a bit more color in the keynote that kicks off in about 15 minutes.
It's going to be fascinating to see how Adobe bakes more AI-powered tools into apps like Photoshop, Lightroom, Premiere Pro and Firefly, without incurring the wrath of traditional fans who feel their skills are being sidelined by some of these new tricks.
So if, like me, you're a longtime Creative Cloud user, it's going to be essential viewing...
We're almost ready for kick-off (Image credit: Future)
We've taken our spot in the Adobe Max London 2025 venue. As predicted, it's looking a bit more colorful in here than the grey London skies outside.
You can watch the keynote live on the Adobe Max London website, but we'll be bringing you all of the news and our early reactions here – starting in just a few minutes...
And we're off (Image credit: Future)
Adobe's David Wadhwani (Senior VP and general manager of Adobe's Digital Media business) is now on stage talking about the first Max event in London last year – and the early days of Photoshop.
Interestingly, he's talking about the early worries that "digital editing would kill creativity", before Photoshop became mainstream. Definite parallels with AI here...
Jumping forward to Firefly (Image credit: Future)
We're now talking Adobe Firefly, which is evolving fast – Adobe is calling it the "all-in-one app for ideation" with generative AI.
Adobe has just announced a new Firefly Image Model 4, which seems to be particularly focused on "greater photo realism".
A demo is showing some impressive, hyper-realistic portrait results, with options to tweak the lighting and more. Some photographers may not be happy with how easy this is becoming, but it looks handy for planning shoots.
Firefly's video powers are evolving (Image credit: Future)
Adobe's Kelly Hurlburt is showing off Firefly's text-to-video powers now – you can start with text or your own sample image.
It's been trained on Adobe Stock, so it's commercially viable in theory. Oh, and Adobe has just mentioned that Firefly is coming to iOS and Android, so keep an eye out for that "in the next few months".
Firefly Boards is a new feature (Image credit: Adobe)
We're now getting our first look at Firefly Boards, which is out now in public beta.
It's basically an AI-powered moodboarding tool, where you add some images for inspiration then hit 'generate' to see some AI images in a film strip.
A remix feature lets you merge images together and then get a suggested prompt, if you're not sure what to type. It's collaborative too, so co-workers can chuck their ideas onto the same board. Very cool.
You can use non-Adobe AI models too (Image credit: Adobe)
Interestingly, in Firefly Boards you can also use non-Adobe models, like Google Imagen. These AI images can then sit alongside the ones you've generated with Firefly.
That will definitely broaden its appeal a lot. On the other hand, it also slightly dilutes Adobe's approach to strictly using generative AI that's been trained on Stock images with a known origin.
Adobe addresses AI concerns (Image credit: Future)
Adobe's David Wadhwani is back on stage now to calm some of the recent concerns that have understandably surfaced about AI tools.
He's reiterating that Firefly models are "commercially safe", though this obviously doesn't include the non-Adobe models you can use in the new Firefly Boards.
Adobe has also again promised that "your content will not be used to train generative AI". That includes images and videos generated by Adobe's models and also third-party ones in Firefly Boards.
That won't calm everyone's concerns about AI tools, but it makes sense for Adobe to repeat it as a point-of-difference from its rivals.
We're talking new Photoshop features now (Image credit: Future)
Adobe's Paul Trani (Creative Cloud Evangelist, what a job title that is) is on stage now showing some new tools for Photoshop.
Naturally, some of these are Firefly-powered, including 'Composition Reference' in text-to-image, which lets you use a reference image to generate new assets. You can generate videos too, which isn't something Photoshop is traditionally known for.
The new 'Adjust colors' tool also looks like a handy way to tweak hue, saturation and more, and I'm personally quite excited about the improved selection tools, which automatically pick out specific details like a person's hair.
But the biggest new addition for Photoshop newbies is probably the updated 'Actions panel' (now in beta). You can use natural language like 'increase saturation' and 'brighten the image' to quickly make edits.
It's Illustrator's turn for the spotlight now, with Michael Fugoso (Senior Design Evangelist) – the London audience doesn't know quite what to do with his impressive enthusiasm and 'homies' call-outs.
The headlines are a speed boost (it's apparently now up to five times faster, presumably depending on your machine) and, naturally, some new Firefly-powered tools like 'Text to Pattern' and, helpfully, generative expand (in beta from today).
Because you can never have enough fonts, there are also apparently 1,500 new fonts in Illustrator. That'll keep your designer friends happy.
Premiere Pro gets some useful upgrades (Image credit: Future)
AI is supposed to be saving us from organizational drudgery, so it's good to see Adobe highlighting some of the new workflow benefits in Premiere Pro.
Kelly Weldon (Senior Experience Designer) is showing off the app's improved search experience, which lets you type in specifics like "brown hat" to quickly find clips.
But there are naturally some generative AI tricks, too. 'Generative Extend' is now available in 4K, letting you extend a scene in both horizontal and vertical video – very handy, particularly for fleshing out b-roll.
Captions have also been given a boost, with the most useful trick being Caption Translation – it instantly creates captions in 25 languages.
Even better, you can use it to automatically translate voiceovers – that takes a bit longer to generate, but will be a big boost for YouTube channels with multi-national audiences.
A fresh look at Photoshop on iPhone (Image credit: Future)
It's now time for a run-through of Photoshop on iPhone, which landed last month – Adobe says an Android version will arrive "early this Summer".
There doesn't appear to be anything new here, which isn't surprising as the app's only about a month old.
The main theme is the desktop-level tools like generative expand and adjustment layers – although you can read our first impressions of the app for our thoughts on what it's still missing.
'Created without generative AI'
This is interesting – Adobe's free graphics editor Fresco now has a new "created without generative AI" tag, which you can include in the image’s Content Credentials to help protect your rights (in theory). That label could become increasingly important, and popular, in the years ahead.
Lightroom masks get better (Image credit: Future)
One of the most popular new tricks on smartphones is removing distractions from your images – see 'Clean Up' in Apple Intelligence on iPhones and Samsung's impressive Galaxy AI (which we recently pitted against each other).
If you don't have one of those latest smartphones, Lightroom on mobile can also do something similar with 'generative remove' – that isn't new, but from the demos it looks like Adobe has given it a Firefly-powered boost.
But the new feature I'm most looking forward to is 'Select Landscape' in desktop Lightroom and Lightroom Classic. It goes beyond 'Select Sky' to automatically create masks for different parts of your landscape scene for local edits – I can see that being a big time-saver.
A new tool to stop AI stealing your work (Image credit: Future)
This will be one of the biggest headlines from Max London 2025 – Adobe has launched a free Content Authenticity web app in public beta, which has a few tricks to help protect your creative works.
The app can apply invisible metadata, baked into the pixels so it works even with screenshotting, to any work regardless of which tool or app you've used to make it. You can add all kinds of attribution data, including your websites or social accounts, and can prove your identity using LinkedIn verification. It can also describe how an image has been altered (or not).
But perhaps the most interesting feature is a check box that says “I request that generative AI models not use my content". Of course, that only works if AI companies respect those requests when training models, which remains to be seen – but it's another step in the right direction.
A 'creative civil war' (Image credit: Future)
The YouTuber Brandon Baum is on stage now talking (at some considerable length) about what he's calling the "creative civil war" of AI.
The diatribe is dragging on a bit and he may love James Cameron a bit too much, but there are some fair historical parallels – like Tron once being disqualified from the 'best special effects' Oscars because using computers was considered 'cheating', and Netflix once being disqualified from the Oscars.
You wouldn't expect anything less than a passionate defense of AI tools at an Adobe conference, and it probably won't go down well with what he calls creative "traditionalists". But AI is indeed all about tools – and Adobe clearly wants to make sure the likes of OpenAI doesn't steal its lunch.
That's a wrap (Image credit: Adobe)
That's it for the Adobe Max London 2025 keynote – which contained enough enthusiasm about generative AI to power North Greenwich for a year or so. If you missed the news, we've rounded it all up in our guide to the 5 biggest new tools for Photoshop, Firefly, Premiere Pro and more.
The standout stories for me were Firefly Boards (looking forward to giving that a spin for brainstorming soon), the new Content Authenticity Web app (another small step towards protecting the work of creatives) and, as a Lightroom user, that app's new 'Select Landscape' masks.
We'll be getting involved in the demos now and bringing you some of our early impressions, but that's all from Max London 2025 for now – thanks for tuning in.
Bowers & Wilkins has launched, in its own words, "the most advanced and capable wireless headphone the brand has yet made", the Px7 S3.
Based on the very impressive Px7 S2 and Px7 S2e, the new headphones have re-engineered drive units, aptX Adaptive and lossless audio, "greatly upgraded" ANC and an all-new design. And they have their own dedicated headphone amp inside, rather than the amp being integrated into the chip platform, as used by most headphones.
Let's start with that design. They're visibly slimmer than the Px7 S2e, and the carry case is more compact too. There's a redesigned arm mechanism and a new headband for a closer fit, and Bowers & Wilkins says it's improved the memory foam in the ear cups too. That means more comfort for longer listening, and the spec suggests you're going to want to spend a lot of time inside these over-ears.
We've been testing these headphones, so you don't need to wait for the full-fat, in-depth verdict: our Bowers & Wilkins Px7 S3 review is right there. Spoiler alert: it's five stars. But if you just want the low-down on what's inside, keep reading.
You can customize controls and personalize your headphones via the Music app. (Image credit: Bowers & Wilkins)
Bowers & Wilkins Px7 S3: key features and pricing
The Px7 S3 are a first for the brand: their 40mm biocellulose drivers are powered by a discrete headphone amp (though still built into the unit) that the firm says delivers more scale and energy than you get from the average setup in the best wireless headphones, where the amp isn't customized for the particular driver design.
Speaking of, the drivers have a redesigned chassis, voice coil, suspension and magnet that deliver lower coloration and distortion, improved resolution and "superior dynamics". As with previous models, the drivers are slightly angled to ensure a consistent distance from each point of the drivers' surface to your ears and deliver a spacious stereo image.
In addition to spacious audio, the Px7 S3 also deliver spatial audio for the first time in a B&W headphone – or at least they will soon. The feature is coming as an over-the-air update later in 2025.
The Px7 S3 have aptX Adaptive 24/96 and aptX Lossless for higher-quality audio over Bluetooth, and their DSP delivers 24-bit / 96kHz sound quality. You can also use the headphones with wired connections: 3.5mm analogue and USB-C cables are included.
Bluetooth LE Audio and Bluetooth Auracast will come this year too, again as an over-the-air update.
The other big improvement here is in the active noise cancellation. According to the firm, "Bowers & Wilkins engineers are confident that Px7 S3 features the most powerful and effective active noise cancelling technology the brand has ever developed."
That's a big claim, but there are eight microphones in total – two measuring the output of each drive unit, four monitoring the ambient noise around you, and two more for "outstanding" vocal clarity.
With ANC on you can expect 30 hours of battery, and a 15-minute quick charge will give you up to 7 hours of playback.
The Px7 S3 are available in most countries from today, 24 April, in a choice of Anthracite Black, Indigo Blue and Canvas White. They are $429 / £399 (we're waiting on Australian pricing, but the UK price translates to around AU$830).
However, the list of countries where they're launching today doesn't include the US. Due to "evolving market conditions", the North and Latin America launch date will be announced shortly.
Across the globe, 70% of data center facility leaders say their national power grid is being stretched to its limits. Now, the sustainability warning bells aren’t just ringing, they’re deafening.
Behind much of this growing concern is the surging energy demand driven by artificial intelligence (AI) in data centers. To quantify the use of AI, McKinsey’s latest survey highlights that 78% of respondents say their organizations already use AI tools in at least one business function. From workplace productivity gains to life-saving capabilities such as detecting illnesses, AI innovations are second to none.
Data centers, which house the computing power for AI, must now focus on supporting its growth sustainably, ensuring national grids are protected. If not, we might run out of power, leading to data center outages that affect communities and livelihoods. However, according to Cadence’s Innovation Imperative, while 88% of data center operators say they’re actively working to enhance energy efficiency, only three in ten (31%) believe that they’re doing enough.
The good news is that data centers can reduce their energy impact by harnessing AI in smarter ways. AI-powered digital twins—virtual replicas of facilities—help operators shrink their environmental footprint, prevent costly outages, and finally, boost sustainability.
Ultimately, the research shows that data center operators want to make a difference. The challenge is knowing where to start. The first step is uncovering where the real problems lie and what’s truly driving excess energy use.
Apprehension on Energy
Before beginning to invest in efficiency tactics, data centers must assess the challenges in their facilities.
Our latest research shows that almost two-thirds (60%) of facility leaders overprovision, which means allocating more resources to a system than necessary. This is due to concerns that scaling back will cause outages. While it’s understandable that facility leaders want reliable systems, overprovisioning also wastes energy, drives up their footprint, and increases operational costs.
As energy needs rise to power AI, they lead to excessive overprovisioning, with immense energy waste.
Another challenge that Cadence uncovered is that many data centers struggle with stranded capacity. This is another unsustainable practice, where installed capacity in the data center cannot be used, and 29% of leaders reported stranded capacity as a constraint.
Picture stranded capacity like a game of Tetris, where data centers are playing five levels at the same time, trying to fit all the systems (blocks) into the data center. Operators are often unaware of doing this and can’t spot the available capacity. Thus, the facility fails to meet its design goals and has a costly impact on the planet.
Furthermore, while high-density servers are great for holding immense power, their high energy requirements can create several challenges for data centers. Currently, 59% of data center operators are using high-density servers, so it is important to make sure they run properly and effectively, with as little stranded capacity and over-provisioning as possible.
This is especially true with rack densities exceeding 100kW, and as high as 600kW, with the latest Rubin architecture that Nvidia presented at GTC.
Addressing These Matters is Not a "Nice-to-Have"
Tackling data center energy challenges is now critical, especially as regulatory factors come into play. This includes stricter reporting requirements, such as the EU’s Energy Efficiency Directive, which requires carbon emission reporting.
Local communities are increasingly opposing data center facilities. This is primarily due to claims that such facilities consume large quantities of energy, competing for energy resources and water with the population. Recently, this has become a concern in Virginia, where some residents will soon be neighbors to a 466,000-square-foot data center.
Addressing these issues requires a nuanced, multi-faceted approach. From energy reporting and thermal modeling to capacity planning and workload optimization, digital twins will play a critical role in tackling stranded capacity, reducing excessive energy use in data centers, and allowing them to trial renewable energies.
Reducing Inefficient Resource Allocation
By simulating real-time operations, digital twins enable operators to fully utilize available capacity, optimize energy consumption, and minimize the environmental impact on surrounding areas.
Minimizing overprovisioning is a good place to start to reduce energy consumption. Digital twins, enhanced by AI, offer a powerful solution. Through real-time data and historical trends, operators can create a virtual environment that mirrors the physical facility. This allows them to test different scenarios, evaluate the impact of resource allocation decisions, and identify potential areas of over-provisioning.
It also provides an excellent stranded capacity solution. By integrating sensors and data collection mechanisms, operators can continuously monitor the performance of various components, tracking metrics such as power consumption, cooling efficiency, and utilization. This data can then be analyzed using predictive analytics to identify potential bottlenecks or areas of underutilization.
By proactively addressing these issues, operators can optimize resource allocation and reduce stranded capacity.
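To make the idea concrete, here's a minimal sketch in Python of the kind of utilization check a digital twin's analytics layer might run. The rack names, thresholds and readings are made up rather than drawn from any real facility; the point is simply comparing provisioned power against observed peak draw and flagging racks running far below their allocation.

```python
# Minimal sketch: flag potentially overprovisioned racks from telemetry.
# All numbers are illustrative, not real facility data.

racks = {
    # rack_id: (provisioned_kw, recent peak-draw samples in kW)
    "A01": (30.0, [11.2, 12.8, 10.9, 13.1]),
    "A02": (30.0, [27.5, 28.9, 26.4, 29.2]),
    "B01": (60.0, [18.0, 17.2, 19.5, 16.8]),
}

OVERPROVISION_THRESHOLD = 0.5  # flag racks using under 50% of their allocation

for rack_id, (provisioned_kw, samples) in racks.items():
    peak_kw = max(samples)
    utilization = peak_kw / provisioned_kw
    if utilization < OVERPROVISION_THRESHOLD:
        headroom = provisioned_kw - peak_kw
        print(f"{rack_id}: peak {peak_kw:.1f} kW of {provisioned_kw:.0f} kW "
              f"({utilization:.0%}) - {headroom:.1f} kW potentially stranded")
```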
Reduce, Reuse, Recycle
The transformative capabilities of digital twins do not end here. Using these tools, data centers can capture and repurpose waste heat from cooling systems for other applications, such as heating buildings or industrial processes. To do so, digital twins can replicate the physical facility and help manage the implementation of technology. This will reduce energy waste and lower overall carbon emissions.
Repurposing wasted heat is important because the EU Energy Efficiency Directive mandates that data centers with a high level of energy input utilize waste heat or implement other waste heat recovery measures.
In addition, the increasing heat caused by growing server density puts cooling systems under significant strain. Digital twins allow operators to model the effectiveness of alternative cooling methods and explore how these systems interact with the entire infrastructure.
Evaluating Cooling Effectiveness
Re-evaluating data center cooling is an important strategy for reducing energy consumption. Cooling is one of the most energy-intensive elements of data center operations, particularly as AI workloads increase power demands. Digital twins are making it more feasible for data centers to adopt liquid cooling, which is gathering momentum.
At present, 45% of decision-makers use liquid cooling, and a further 19% plan to introduce it in the next year. This is largely because high-density server racks, intensive workloads, and increasing power densities are surpassing the capabilities of traditional air cooling. While air cooling can manage heat loads up to 20kW per rack, loads beyond 20–25kW are more efficiently and cost-effectively handled by a mix of liquid cooling and precision air cooling.
Using digital twins to implement liquid cooling, data center operators can examine factors that are otherwise difficult to detect or measure, such as overall cooling efficiency. They can assess the pros and cons of various liquid cooling options before investing in technology. The result is a customized solution tailored to the facility’s specific heat load requirements.
Transforming the Data Center Trajectory
Clearly, data centers are serious about improving their environmental impact. However, implementation remains the biggest hurdle. Digital twins are proving to be the sustainability game-changer the industry needs, helping operators move from ambition to action.
Even the process of deploying digital twins drives immediate value, forcing facilities to gather their data, surface blind spots, and build a clear picture of their operations. This alone creates the foundation for smarter, more sustainable decision-making.
Those that turn to digital twins won’t just optimize data center performance, they’ll unlock a roadmap to a greener, more efficient, and future-proofed data center industry.
This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro
Microsoft has been on a mission as of late to brand everything from here to there an Xbox, mostly off the back of the company’s cloud streaming efforts. The latest Xbox? Why that’s your LG TV of course.
LG’s latest update to its TV operating system (first teased back in January) brings with it the Xbox app, albeit currently in a beta version. That means you can now play any cloud-streamed game on your display without a console. All you’ll need is an Xbox controller and a Game Pass Ultimate subscription to enable cloud streaming.
If you have a webOS 24 or webOS 25-compatible TV (to make it easier: if your LG TV launched in 2022 or after, or if it’s a 2021 StanbyME display), you’ll be able to install the Xbox app right away. There won’t be any downloads required except for the app itself. The update will roll out to StanbyME TVs after other webOS 24-compatible models.
This isn’t a first for game streaming, mind you: Samsung TVs have supported the Xbox cloud gaming app since 2022, and Amazon Fire TV devices have supported it since 2024.
With your Xbox controller in hand and Game Pass Ultimate subscription active, you’ll be able to play games like The Elder Scrolls IV: Oblivion Remastered, Minecraft, Call of Duty: Black Ops 6 and more from just your TV – just head to the Gaming Portal on your LG display and launch the Xbox app to get started.
Though be prepared for some quality issues. A fast internet connection will improve your cloud gaming experience massively, improving streaming quality and latency alike.
Here in Australia, I’ve personally had a lot of issues with cloud streaming Xbox games on my PC and Samsung projector with my 100Mbps-capable internet plan, but obviously this might be a non-issue for your household.
LG’s TVs are mighty impressive. The LG C4 is currently the top model on TechRadar’s list of the best TVs and the incoming flagship LG G5 scored a whopping five stars in our review. The mid-range LG C5 also scored an impressive five stars, remaining a top pick for folks craving a high-end screen without a gigantic price (the C5 will likely replace the C4 soon on our list).
If you care about having fast and responsive gameplay in the lounge room, you may be better off buying an Xbox console, such as the affordable Series S or powerful Series X, or alternatively you could pick up a PlayStation 5.
But for casual gaming where you might only play a handful of titles, if you’re looking for something to entertain the kids, or if you simply don’t want to pay the full price for a game, Xbox cloud gaming might be worth considering.
Anker has revealed the latest 4K projector in its Nebula range, the X1.
Anker, makers of some of the best projectors, such as the Anker Nebula Mars 3, says the X1 is its highest-performing projector yet. The X1 follows in the footsteps of Anker's Cosmos range, which has delivered some of the best portable projectors, including the Anker Nebula Cosmos Laser 4K.
The 4K-resolution X1 uses an RGB triple laser light engine and is said to deliver 3,500 ANSI lumens. It's capable of displaying images up to 300 inches in size and its NebulaMaster technology is set to offer a 5000:1 native contrast ratio and 56,000:1 dynamic contrast ratio. Anker says it's the "perfect backyard projector for daytime and night-time use".
The X1 has four side-firing internal speakers powered by a total of 40W, and a separate pair of wireless speakers is an available option. These speakers have 8 hours of battery life and are USB-C rechargeable.
As an added audio feature, the X1's built-in speakers can be switched to subwoofer mode when combined with the wireless speakers, creating a 4.1.2-channel audio system.
From a design perspective, the X1 can tilt up to 25 degrees, allowing for easy placement on a wall, table or floor. It also features AI Spatial Adaptation, which uses real-time auto focus, auto keystone, auto optical zoom and auto screen fit. There's a built-in micro gimbal for added adaptability, and it comes with a carry handle for easy portability.
The X1 will also support Wi-Fi streaming with Google TV built-in for access to the best streaming services such as Netflix, Prime Video and Disney Plus.
Another new feature is its liquid cooling system, which Anker says will limit fan noise to 26dB (at a distance of 1m).
The Anker Nebula X1 will be available from May 21, starting at $2,199.99 / £2,199.99 (roughly AU$4,595 directly converted). An accessory bundle, with the two wireless speakers, a carry case and two wireless microphones designed with karaoke in mind, is available for $999.99 / £499.99 (the UK price converts to roughly $667 / AU$1,042). Both will be on sale at Amazon and Nebula in the US and Nebula in the UK.
The ultimate summer projector? (Image credit: Anker)
The headline of Anker's release of the X1 is that it's perfect for the outdoors, day or night, with its 3,500 ANSI lumens brightness putting it in the same category as the likes of the Samsung Premier 9 and Epson QB100, both of which we classed as 'super-bright'.
Although the X1 sounds like it will be super-bright, even the brightest and best 4K projectors can still struggle with outdoor daytime viewing, as producing the lumens required for a decent image in brighter viewing conditions is a real challenge.
However, 3,500 ANSI lumens is indeed very bright, and with the added benefit of Dolby Vision HDR, the X1 could produce images bright enough for outdoor viewing.
Admittedly, it's not a cheap projector, but compared with other portable projectors with similar specs, such as the JMGO N1S Ultra 4K, it's competitively priced for what it offers.
With the display specs listed and the option for an audio upgrade, plus two wireless microphones for anyone who fancies a bit of karaoke after their movie night, the Nebula X1 really could be the ultimate portable and outdoor projector cinema package. I'll be keen to get my hands on it and see how it fares.
Images may be worth a thousand words, but Character.AI doesn't see any reason the image shouldn't speak those words itself. The company has a new tool called AvatarFX that turns still images into expressive, speaking, singing, gesturing video avatars. And not just photos of people: animals, paintings of mythical beasts, even an inanimate object can talk and express emotion when you include a voice sample and script.
AvatarFX produces surprisingly convincing videos. Lip-sync accuracy, nuanced head tilts, eyebrow raises, and even appropriately dramatic hand gestures are all there. In a world already swirling with AI-generated text, images, songs, and now entire podcasts, AvatarFX might sound like just another clever toy. But what makes it special is how smoothly it connects voice to visuals. You can feed it a portrait, a line of dialogue, and a tone, and Character.AI calls what comes out a performance, one capable of long-form videos too, not just a few seconds.
That's thanks to the model's temporal consistency, a fancy way of saying the avatar doesn’t suddenly grow a third eyebrow between sentences or forget where its chin goes mid-monologue. The movement of the face, hands, and body syncs with what’s being said, and the final result looks, if not alive, then at least lively enough to star in a late-night infomercial or guest-host a podcast about space lizards. Your creativity is the only thing standing between you and an AI-generated soap opera starring the family fridge. You can see some examples in the demo below.
Avatar alive
Of course, the magical talking picture frame fantasy comes with its fair share of baggage. An AI tool that can generate lifelike videos raises some understandable concerns. Character.AI does seem to be taking those concerns seriously with a suite of built-in safety measures for AvatarFX.
That includes a ban on generating content from images of minors or public figures. The tool also scrambles human-uploaded faces so they’re no longer exact likenesses, and all the scripts are checked for appropriateness. Should that not be enough, every video has a watermark to make it clear this isn’t real footage, just some impressively animated pixels. There’s also a strict one-strike policy for breaking the rules.
AvatarFX is not without precedent. Tools like HeyGen, Synthesia, and Runway have also pushed the boundaries of AI video generation. But Character.AI’s entry into the space ups the ante by fusing expressive avatars with its signature chat personalities. These aren’t just talking heads; they’re characters with backstories, personalities, and the ability to remember what you said last time you talked.
AvatarFX is currently in a test phase, with Character.AI+ subscribers likely to get first dibs once it rolls out. For now, you can join a waitlist and start dreaming about which of your friends’ selfies would make the best Shakespearean monologue delivery system. Or which version of your childhood stuffed animal might finally become your therapist.
When Disney Cruise Line opened its new island destination in the Bahamas – Disney Lookout Cay at Lighthouse Point – it wasn’t just a vacation spot for island visitors. Instead, in coordination with its Animals, Science, and Environment (ASE) team, the brand launched a major conservation project that combined wildlife biology with modern technology, including radio telemetry and 3D printing.
While Disney Lookout Cay opened in June 2024, planning had been underway well before then, with the ASE Conservation team included from the start. A key decision was that Disney wouldn’t develop more than 16% of the land.
“We were going to leave a lot of the critical habitat, such as forest habitat, intact for the animals that were already living there,” Lauren Puishys, a Conservation & Science Tech with Disney’s ASE team, explained.
“We created an environmental impact analysis before any construction began,” Puishys said. That then turned into an Environmental Management plan, which was focused on learning about the bird population on the island and protecting them.
Sustainability Week 2025
This article is part of a series of sustainability-themed articles we're running to observe Earth Day 2025 and promote more sustainable practices. Check out all of our Sustainability Week 2025 content.
The team identified key zones on the island that would remain untouched based on where birds were nesting, migrating, or foraging – all gathered through on-the-ground fieldwork. “You're collecting every bird you see, every bird you hear, and you're just writing this down to make observations about how many of these birds are in this region,” Puishys said.
One species quickly emerged as important, though – the great lizard cuckoo. "They're noisy, they're really cool looking," Puishys explained, calling them ‘incredibly smart.’ To track the population – its movement patterns around the island and where it chose to nest – Puishys and the team combined the old with the new.
In this case, the team turned to the art of 3D printing to get close to the bird species in question, and then, through radio telemetry, mapped them on the island.
“I need a very specific bird,” Puishys recalled telling her colleague, Jose Dominguez, a member of Disney’s ASE Behavioral Husbandry team. Though he’s 3D modeled a variety of enrichment items for Disney’s Animal Kingdom theme park, he didn’t necessarily have experience modeling birds, so he called on other expert teams at Disney that did.
(Image credit: Disney Parks)
Disney has teams unsurprisingly well-versed in 3D modeling using CAD software and tools like Blender. “They were like, ‘Oh, absolutely, I would love to work on this,’” explained Dominguez.
They collaborated for months, refining the model through regular Zoom calls. “Lauren provided her input on if it was too big or it needs an extra toe, things like that,” said Dominguez. “Eventually, we got to our desired model shape, the great lizard cuckoo.”
The model was printed in PLA, a plant-based plastic, which Dominguez said is what Disney routinely uses for deployments in “behavior-based enrichment.” The model was then coated with the same durable outdoor paint used across properties. More specifically, “an outdoors acrylic-based UV-resistant paint, and then with a protective clear coating on top.”
The outcome? A decoy bird coupled with audio recordings of real bird calls. It worked and was deployed.
The great lizard cuckoo model in nature at Disney's Lookout Cay at Lighthouse Point. (Image credit: Disney Parks)
“We had it down there with the speaker underneath it, and we had two different types of calls on there,” Puishys said. “At one point, an actual great lizard cuckoo called back and forth to it… So it was actually trying to communicate with the model, which was incredible to see.”
Finally, a bird approached the decoy, and Puishys was ready for it. “I was in the woods, out of sight from the cuckoo but in sight of the model, so I could see it myself. And then all I had to do was step out of the woods, and the bird was in the net.”
From there, the team attached a solar-powered radio telemetry tag to track the bird. “So there's small solar panels on it with a little antenna, and that's giving off a radio frequency of 434 megahertz,” Puishys said. “We have infrastructure around property on the rooftops of buildings and cell towers that's actually created to pick up that signal, which has an associated identifying eight-digit number and letter code for that animal.”
The western spindalis radio telemetry tag, attached by the team's wildlife conservation biologists. (Image credit: Disney Parks)
Thanks to the tag and the infrastructure installed unobtrusively around the island, Puishys can now track bird movements from her desk in Florida.
“We work pulling everything off of the cloud with an API key through the company, and we can just download it all to my desk using RStudio,” she said. “We’ve had it up now since pre-construction and now have over 35 million data points associated with this.”
That data is captured through a highly structured array of nodes across the island, with about 25 of them spaced around 400 meters apart.
Further, the data is stored on those nodes, then sent to a sensor station, which processes it and uploads it via a cellular network so that the team can access it from anywhere. That includes Puishys’s desk in Florida, and it’s the most data the ASE team has ever collected on a terrestrial species.
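To give a feel for the shape of that pipeline, here's a hedged sketch in Python. The endpoint, parameters and field names are all hypothetical (the team itself works in RStudio, per the quote above); the sketch only illustrates the basic flow of authenticating with an API key, paging through detection records held in the cloud, and aggregating them per tagged bird.

```python
# Hypothetical sketch of pulling detections from a telemetry network's cloud API.
# The URL, query parameters and JSON fields below are placeholders, not a real service.

import collections
import requests  # third-party HTTP client: pip install requests

API_URL = "https://example-telemetry-cloud.test/api/detections"  # placeholder endpoint
API_KEY = "YOUR-API-KEY"

def fetch_detections(station_id: str):
    """Yield detection records (tag id, node id, timestamp, signal strength)."""
    page = 1
    while True:
        resp = requests.get(
            API_URL,
            params={"station": station_id, "page": page},
            headers={"Authorization": f"Bearer {API_KEY}"},
            timeout=30,
        )
        resp.raise_for_status()
        records = resp.json().get("data", [])
        if not records:
            return
        yield from records
        page += 1

def detections_per_tag(station_id: str) -> collections.Counter:
    """Count how many times each tag code was heard across the node array."""
    counts = collections.Counter()
    for record in fetch_detections(station_id):
        counts[record["tag_id"]] += 1
    return counts

if __name__ == "__main__":
    print(detections_per_tag("lookout-cay-station-01").most_common(5))
```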
For Puishys, the most exciting part isn’t just the success of the project – it’s how early they were brought in. “I honestly think our involvement as a Conservation team in the development of Disney Lookout Cay was our biggest leap,” Puishys said. “It kind of blew me away… and it was a big part about why I was so happy to join the team and help out with the project.”
The hope is that this approach – one that blends science, tech, and collaboration – becomes a template for future projects. “We hope that it worked out well enough that we can kind of be an example or a good model for other construction projects moving forward,” Puishys said.
A new pilot program from Microsoft and Western Digital has demonstrated a novel method of recycling rare earth elements (REEs) from decommissioned hard disk drives.
The initiative, developed in collaboration with Critical Materials Recycling (CMR) and PedalPoint Recycling, successfully recovered nearly 90% of rare earth oxides and around 80% of the total feedstock mass from end-of-life drives and related components.
Using materials sourced from Microsoft’s U.S.-based data centers, the project processed approximately 50,000 pounds of shredded HDDs and mountings, converting them into high-purity elemental materials. These can now be reused across key sectors such as electric vehicles, wind energy, and advanced computing.
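For a sense of scale, here's a quick back-of-the-envelope calculation in Python using only the totals reported above; the pilot does not break the recovered material down by category, so nothing beyond those two figures is assumed.

# Rough arithmetic using the reported pilot figures.
feedstock_lb = 50_000      # shredded HDDs and mountings processed
mass_recovered = 0.80      # ~80% of total feedstock mass recovered

print(f"~{feedstock_lb * mass_recovered:,.0f} lb of material recovered")  # ~40,000 lb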
Old HDDs now have more value
The project employs an acid-free, environmentally friendly recycling process that reduces greenhouse gas emissions by 95% compared to conventional mining and refining.
This approach not only recovers rare earths like neodymium, praseodymium, and dysprosium, which are essential for HDD magnetic systems, but also extracts valuable metals including copper, aluminum, steel, and gold, feeding them back into the U.S. supply chain. It shows that even old hard drives can have an eco-friendly second life.
Despite the critical role of rare earths in cloud infrastructure, current domestic recycling efforts in the U.S. recover less than 10% of these materials.
Meanwhile, over 85% of global REE production remains concentrated overseas, but this pilot aims to change that, offering a scalable, domestic solution that reduces landfill waste, enhances supply chain resilience, and lowers dependence on foreign sources.
“This is a tremendous effort by all parties involved. This pilot program has shown that sustainable and economically viable end-of-life (EOL) management for HDDs is achievable,” said Chuck Graham, corporate VP of cloud sourcing, supply chain, sustainability, and security at Microsoft.
Acid-free dissolution recycling (ADR), a technology developed at the Critical Materials Innovation (CMI) Hub, was central to this achievement.
“This project is significant because HDD feedstock will continue to grow globally as AI continues to drive the demand for HDD data storage,” said Tom Lograsso, director of CMI.
I remember life before YouTube and life after. In the 20 years since Jawed Karim posted that first zoo video, YouTube has become a dominant force in media creation and consumption.
It's built industries and stars and forever altered viewing habits. I'd argue that it's the reason we now get most of our information from social video. And while AI is fast becoming the source for every answer (and some videos), we still get things done with YouTube's voluminous guidance.
As a long-time technology journalist, I'm embarrassed to admit I was a little late to the YouTube revolution, waiting to post my first video until almost 18 months after the initial launch. Even so, that first video made me a convert. I was so excited, I detailed the entire process in a PCMag post.
The video is in some ways emblematic of 2006's state of the art. It's a silent, grainy 800 x 600 pumpkin-carving animation. In hindsight, "Ghost Carves Halloween Pumpkin" looks awful, and yet it set the template for a long and fruitful relationship, one that even then featured many of the elements YouTube pros rely on today.
There's the pithy, essential title; an accurate, if brief, description; thumbs-up marks (miraculously, no one gave me a thumbs down); and dozens of comments, including many that noticed my less-than-expert animation work.
In those early days, it wasn't entirely clear what YouTube was meant for. Even the crew that launched it, Chad Hurley, Karim, and Steven Chen, could not agree on where the idea came from. At the time, Karim told USA Today that they wanted to build a platform where people could quickly discover highly publicized (trending) stories online. Others recall that the goal was a place where people could share videos of important life events.
In a way, early YouTube is a reflection of all those intentions. Certainly, my own YouTube library, which holds around 260 videos, is proof of that. It took me years to try my hand at becoming an official "YouTuber," and I did so only after learning the craft by watching thousands of other people's pro-level creations.
There were, however, some who quickly recognized YouTube's storytelling potential. The breakthrough hit "Lonelygirl15" used YouTube's early confessional style to tell a complex story that, for a time, many people believed was real.
The story ran on YouTube for a few years, but it was soon just one of many tales and, as I see it, lost among an explosion of YouTube talent that started using the platform as a way to convey lengthy monologues and details about their interests in science, technology, entertainment, DIY, and more.
We are all made of Stars
YouTube was the first media platform to lower the barrier between filmed content and an audience. You no longer needed a TV network or film producer to greenlight your idea. If you could film and edit it, you could attract an audience.
When my 46-second Pumpkin animation was unexpectedly featured on YouTube's homepage, my views exploded. The short video soon boasted well over 200,000 plays. I spent years trying to recreate that success, but that was another early lesson of YouTube: virality is not promised.
It tickles me when TikTokers moan about how the algorithm has abandoned them, as if every video is supposed to hit 2 million views. YouTubers know all too well the vagaries of a platform ruled by editors (then) and algorithms (now).
YouTube made stars of people like Justin Bieber and Shawn Mendes (don't let people tell you that it was all Vine). YouTubers like MKBHD and iJustine have built and held onto enviably devoted audiences that I think most network television shows would kill for. (If you want to have some fun, visit any of these YouTubers' pages, go to the video tab, and click on the "Oldest" link to see their first YouTube videos.)
In the meantime, YouTube altered our viewing habits and may have helped smooth the way for streaming platforms like Netflix, which launched its streaming service two years after YouTube.
Watching high-quality videos online was quickly becoming an ingrained habit when Netflix first dumped Lilyhammer on us, but thankfully, it followed with House of Cards.
Over time, YouTube transformed from a place for sharing short, interesting videos into a home for long-form, lean-back experiences. Today, it's stuffed with video podcasts and hour-long videos that couldn't survive on TikTok.
The transition from virality to information happened years ago, though. 2025's YouTube is as much about information as it is about entertainment. Parent company Google certainly assisted in this. How many times have you Googled how to do something and found a YouTube video that shows you exactly how it's done?
I'm not sure how I'd accomplish any unfamiliar task without YouTube's steady tutelage. With it, I've done everything from jump-starting my car to installing a bathroom fan, all under the confident guidance of a YouTube video.
YouTube's knowledge base across a wide range of topics is truly encyclopedic. I challenge you to find a topic that doesn't have a dozen or more video tutorials.
In truth, the world learns differently because of YouTube.
Generation YouTube
A 2022 Pew Research study found that 95% of teens use YouTube. TikTok was close behind, and by now, it may be neck and neck. Still, learning from video and using it as your foundational source for news and forming opinions is all YouTube's doing. I understand that people still watch cable news and form opinions based on specific information bubbles, but online video wasn't a primary news source until YouTube came along.
And it's not just young people. Statista found that people across all age ranges are watching online video, and the next generation will too: 80% of parents said their kids aged 11 and under are also watching YouTube.
I've seen these kids in their strollers, iPad in hand, staring intently at the latest Ms. Rachel video. And with YouTube entering its third decade, we are now living among adults who literally grew up with the platform. They've never known a world without YouTube, and their expectations for content are largely shaped by what they found there.
My point is, we made YouTube, and then YouTube made us. Happy 20th Birthday, YouTube.
The Washington Post has inked a deal with OpenAI to make its journalism available directly inside ChatGPT. That means the next time you ask ChatGPT something like “What’s going on with the Supreme Court this week?” or “How is the housing market today?” you might get an answer that includes a Post article summary, a relevant quote, and a clickable link to the full article.
For the companies, the pairing makes plenty of sense. Award-winning journalism, plus an AI tool used by more than 500 million people a week, has obvious appeal. An information pipeline that lives somewhere between a search engine, a news app, and a research assistant will entice fans of either product – or both. And the two companies insist their goal is to make factual, high-quality reporting more accessible in the age of conversational AI.
This partnership will shift ChatGPT’s answers to news-related queries so that relevant coverage from The Post will be a likely addition, complete with attribution and context. So when something major happens in Congress, or a new international conflict breaks out, users will be routed to The Post’s trusted reporting. In an ideal world, that would cut down on the speculation, paraphrased half-truths, and straight-up misinformation that sneaks into AI-generated responses.
This isn’t OpenAI’s first media rodeo. The company has already partnered with over 20 news publishers, including The Associated Press, Le Monde, The Guardian, and Axel Springer. These partnerships all have a similar shape: OpenAI licenses content so its models can generate responses that include accurate summaries and link back to source journalism, while also sharing some revenue with publishers.
For OpenAI, partnering with news organizations is more than just PR polish. It’s a practical step toward ensuring that the future of AI doesn’t just echo back what Reddit and Wikipedia had to say in 2021. Instead, it actively integrates ongoing, up-to-date journalism into how it responds to real-world questions.
WaPo AI
The Washington Post has its own ambitions around AI. The company has already tested ideas like its "Ask The Post AI" chatbot for answering questions using the newspaper's content. There's also the Climate Answers chatbot, which the publication released specifically to answer questions and share information based on the newspaper's climate journalism. Internally, the newsroom has been building tools like Haystacker, which helps journalists sift through massive datasets and find story leads buried in the numbers.
Starry-eyed idealism is nice, but there are some open questions. For instance, will the journalists who worked hard to report and write these stories be compensated for having their work embedded in ChatGPT? Sure, there's a link to their story, but that doesn't count as a view or help lead a reader to other pieces by the author or their colleagues.
From a broader perspective, won't making ChatGPT a layer between the reader and the newspaper simply continue to undermine the subscriptions and revenue needed to keep a media company afloat? Whether this is a mutually supportive arrangement or just AI absorbing the best of a lot of people's hard work while discarding the actual people remains to be seen. Making ChatGPT more reliable with The Washington Post's journalism is a good idea, but we'll have to check future headlines to see whether the AI benefits the newspaper.
Twenty years ago today, the first video was uploaded to YouTube (the thrilling 'Me at the zoo'). To celebrate that milestone, Google has announced that YouTube's TV app experience is going to get a big upgrade soon. And as someone who watches a lot of big-screen YouTube, that's something I'm very much looking forward to.
Google hasn't revealed a lot about the "TV viewing upgrade" it has planned. It's apparently coming "this summer" (which means sometime between June and September, if you're in the northern hemisphere). But it has revealed a screenshot (below) of what it'll look like, plus a few hints of what's coming.
Apparently, we're going to get "easier navigation" alongside some "quality tweaks" and an improved playback experience. There will also be "streamlined access to comments, channel info, and subscribing". In other words, YouTube on your TV (not to be confused with YouTube TV) is going to become a lot more like the fully-featured browser experience.
(Image credit: Google)Alongside the improved TV experience, YouTube TV subscribers will also gain the ability to build their own multiview experience. This four-way grid has traditionally been reserved for sports fans, but this is being opened up to non-sports content with a "small group of popular channels" in the "coming months". So if you aren't feeling quite distracted enough yet, this multiview update could be for you.
A subtle but important update (Image credit: Google / YouTube)
I've been watching YouTube on TV for years and it's always felt a few steps behind the full experience. The Apple TV app, for example, was only given a comments section relatively recently – and while that might sound like a mixed blessing, I've always found comments to be an important part of the experience for the channels I follow.
The incoming YouTube update for TVs appears to be more about design than functionality, but still looks a lot more modern and in tune with its mobile apps. There's a new button for adding the video to your playlists and it also appears to be easier to subscribe from within videos. There are no doubt more tweaks not shown in the single teaser image.
YouTube is naturally reserving lots of features for Premium subscribers, too. As part of its 20th birthday announcements today, Google also revealed that a '4x playback speed' option is coming to smartphones for Premium subscribers (a service that costs $13.99 / £11.99 / AU$14.99 per month).
I'm still sorely tempted to upgrade to YouTube Premium (particularly after reading my colleague David Nield's strong arguments in favor of doing so), but I'm glad to see YouTube is still upgrading the TV experience for us non-Premium mortals. Let's just hope it happens in the earlier rather than later interpretation of "summer".
You might also likeStreaming specialist Roku has launched a pair of new wireless security cameras that can send video footage straight to your phone or TV, letting you watch your yard without leaving the couch.
The Roku Battery Camera can run for up to six months on a single charge, while the Battery Camera Plus runs up to two years. Both cameras are weather-resistant, and can be set up indoors or out in a few seconds.
You can use the Roku Smart Home app or Roku Web View to customize your camera's settings, set up schedules, and receive notifications. The cameras can also be used as motion-detectors to activate some of the best smart lights or other connected devices.
(Image credit: Future) Blink and you'll miss it
Real-world battery life will depend on which settings you choose and the weather (lithium-ion batteries tend to drain faster in cold conditions), but the Battery Camera Plus should be a serious rival to the Blink Outdoor 4, which also runs for up to two years before it needs recharging.
Both the Blink Outdoor 4 and Roku Battery Camera Plus boast 1080p resolution with motion detection and notifications, but the Roku camera also offers color night vision rather than black and white, which could give it the edge over the Blink model if the price is right.
You could also extend the Roku cameras' battery life even further by connecting an optional solar panel – something that's not possible with the Blink camera.
Roku has yet to announce official pricing for the two cameras, but it says they will be available "in the coming months". We're hoping to test both ourselves so we can see whether they deserve a place in our roundup of the best home security cameras to secure your smart home.
Full spoilers immediately follow for Andor season 2 episodes 1 to 3.
Andor season 2 will take some very bold swings with the incredibly weird romantic relationship that develops between Syril Karn and Dedra Meero.
That's according to Kyle Soller and Denise Gough, who told TechRadar that they're still struggling to truly "work out" what's going on between their characters a year after filming wrapped on the Star Wars show's latest season.
They aren't the only ones. Ever since Syril saved Dedra's life in Andor's season 1 finale – a moment that was infused with a strange romantic tension – fans have longed to learn whether the pair would actually get together. Now that season 2's first three episodes are not only out, but also confirm they're involved in some form of romantic entanglement, viewers are equally fascinated by the dynamic that's played out thus far.
Indeed, threads on numerous Star Wars-based Reddit pages are filled with fans commenting on the curious atmosphere that arises whenever the Imperial officers are in the same room. Other social media platforms are similarly awash with people trying to dissect the clearly uncomfortable nature of their dynamic.
Are Syril and Dedra really in love? Soller and Gough don't believe so (Image credit: Lucasfilm/Disney+)Fan examination of the romance between two members of Andor season 2's cast is sure to continue for weeks and months to come, too. Well, as long as some of them remain as fascinated by the relationship as Soller and Gough continue to be – the duo telling me they're still not sure what's actually going on between their characters in one of the best Disney+ shows' second and final season.
"I think it was really bold of [showrunner] Tony [Gilroy] to end season one with the two of them in this strange little cupboard [where they hide from the Ferrix rebellion]," Soller mused. "Now, going into the domestic intimacy of it all, and what that might look like for these two very strange animals, I think it's another masterstroke."
My biggest fear was 'Oh, great, they fall in love and then what?'
Denise Gough, Andor season 2 actor"But he [Tony] doesn't change them," Gough interjected. "Going into this season, my biggest fear was 'Oh, great, they fall in love and then what? What does domestication look like to them?'. But, once I got the scripts for season two, I was like 'Okay, it still looks weird and it's even more surreal than before.'"
There was plenty for audiences to enjoy regarding the pair's relationship in one of 2025's new Star Wars TV shows throughout its first three chapters, too.
Anybody else feel that Dedra wears the proverbial trousers in this relationship? (Image credit: Lucasfilm/Disney+)Whether it was the extremely uncomfortable nature of Dedra meeting Syril's overbearing mom Eedy in episode 3, or the general conversations that they engage in, it seems fans are smitten with this villainous duo enjoying some frivolity amid the evil nature of their day-to-day jobs as employees of the Galactic Empire.
So, will the good times last? Predictably, Gough and Soller wouldn't be drawn on how this dynamic will evolve over the next nine episodes. That said, they dropped a couple of teases that viewers are sure to pore over during the wait for new episodes to air on one of the best streaming services every Tuesday (US) and Wednesday (UK and Australia).
"Relationships are all about power in one way or another," Soller said. "This is a beautiful realization of how that power imbalance can grow and take things in all kinds of directions. I always thought Syril wants Dedra, but I also think he wants to use her, so it's hard to get a read on where his motives truly lie."
The last thing I was expecting from #Andor S2 was a Syril and Dedra home life sitcom but I absolutely loved every second of it. Also, absolute mood: pic.twitter.com/Q5TRbjODtA (April 23, 2025)
"At her core, Dedra is a monster," Gough added. "She does awful things based on where she's come from.
"Does she really care for Syril? I watch it as a viewer and I still don't know," Gough continued. "To me, she doesn't have that human programming inside her to make me think that she's in love with him. She's like a computer, but I also think that she definitely wants him. It's not a loving relationship where they turn down the lights and cuddle. It feels very transactional and strange. It baffles me and I still can't fully work it out."
My full review of Andor season 2 doesn't contain any spoilers about where Syril and Dedra's relationship goes next. Nevertheless, it'll give you some hints at what will happen in the weeks to come. Read that for more insights and then check out more of my exclusive coverage on the series below.
Western Digital recently held an investor day, with a primary focus on how it intends to supercharge hard drive capacity within a decade.
WD’s detailed roadmap showed a clear technological evolution from energy-assisted Perpendicular Magnetic Recording (ePMR) to Heat-Assisted Magnetic Recording (HAMR), and ultimately to Heated-Dot Magnetic Recording (HDMR), at which point it will be targeting capacities in excess of 100TB.
By 2026, WD said its HDD capacities will reach 36TB-44TB thanks to HAMR technology, which uses laser heating to temporarily lower the disk's coercivity (its resistance to changes in magnetization), allowing data to be written far more densely.
Waiting for demand
In a recent interview with PC Watch, Kimihiko Nishio, sales manager for Western Digital Japan, went into further detail on the company’s plans.
“Other companies have started adopting HAMR with 30TB HDDs, but we believe HAMR’s true potential begins at 40TB," Nishio said.
"Until then, we'll continue using technologies like OptiNAND and UltraSMR to increase the capacity of existing HDDs up to 40TB.” OptiNAND, integrates flash memory with HDDs to boost capacity, performance, and reliability, while UltraSMR, uses advanced error correction to pack data tracks more densely than traditional SMR.
“We’re targeting the latter half of 2026 for the release of 40TB drives,” Nishio said, adding that WD is "currently developing HAMR with that goal in mind.” He explained that while data generation is booming, particularly due to AI advancements, storage demand is still catching up.
“Right now, there's a huge surge in demand for generative AI, but storage hasn't really benefited from it yet. Currently, the biggest beneficiaries are GPU servers. First, data is being generated in large volumes, and after that, it will need to be stored. That’s where we expect storage demand to spike.”
Western Digital is timing its production plans to coincide with this predicted demand. “We anticipate that spike will happen in the second half of 2026, which is why we're aligning our HAMR-based high-capacity HDD development to that timeframe,” Nishio said.
“Since HAMR production requires a complete overhaul of materials, starting production now while demand is still low (e.g., for 40TB drives) would result in high costs. But we expect that in 2–3 years, demand will rise, allowing us to offer them at reasonable prices.”
Nishio also shared Western Digital’s even more ambitious long-term vision. “Looking further ahead, we plan to release 100TB drives by 2030, after which we’ll pursue even greater capacities using new technologies,” he said.