It's fair to say there's a sort of uneasiness when it comes to AI, an unknown that makes the general public a little on edge, unsure of what to expect from chatbots like ChatGPT in the future.
Well, one Reddit user got more than they bargained for in a recent conversation with ChatGPT's Advanced Voice Mode when the AI voice assistant started to speak like a demon.
The hilarious clip has gone viral on Reddit, and rightfully so. It's laugh-out-loud funny despite being terrifying.
Does ChatGPT voice turn into a demon for anyone else? from r/OpenAI
In the audio clip, Reddit user @freddieghorton asks ChatGPT a question related to download speeds. At first, ChatGPT responds in its "Sol" voice, but as it continues to speak, it becomes increasingly demonic.
The audio has clearly bugged out here, but the result is one of the funniest examples of AI you'll see on the internet today.
The bug happened in ChatGPT version v1.2025.098 (14414233190), and we've been unable to replicate it in our own testing. Last month, I tried ChatGPT's new sarcastic voice called Monday, but now I'm hoping OpenAI releases a special demonic voice for Halloween so I can experience this bug firsthand.
We're laughing now
You know, it's easy to laugh at a clip like this, but I'll put my hands up and say I would be terrified if my ChatGPT voice mode started to glitch out and sound like something from The Exorcist.
While rationality would have us treat ChatGPT like a computer program, there's an uneasiness created by the unknown of artificial intelligence that puts the wider population on edge.
In Future's AI politeness survey, 12% of respondents said they say "Please" and "Thank You" to ChatGPT in case of a robot uprising. That sounds ludicrous, but there is genuinely a fear, whether the majority of us think it's rational or not.
One thing is for sure, OpenAI needs to fix this bug sooner rather than later before it incites genuine fear of ChatGPT (I wish I were joking).
Welcome to our liveblog for Adobe Max London 2025. The 'creativity conference', as Adobe calls it, is where top designers and photographers show us how they're using the company's latest tools. But it's also where Adobe reveals the new features it's bringing to the likes of Photoshop, Firefly, Lightroom and more – and that's what we've rounded up in this live report direct from the show.
The Adobe Max London 2025 keynote kicked off at 5am ET / 10am BST / 7pm AEST. You can re-watch the livestream on Adobe's website and also see demos from the show floor on the Adobe Live YouTube channel. But we're also at the show in London and will be bringing you all of the news and our first impressions direct from the source.
Given Adobe has been racing to add AI features to its apps to compete with the likes of ChatGPT, Midjourney and others, that was understandably a big theme of the London edition of Adobe Max – which is a forerunner of the main Max show in LA that kicks off on October 28.
Here were all of the biggest announcements from Adobe Max London 2025...
The latest news
Good morning from London, where it's a classic grey April start. We're outside the Adobe Max London 2025 venue in Greenwich, where there'll be a bit more color in the keynote that kicks off in about 15 minutes.
It's going to be fascinating to see how Adobe bakes more AI-powered tools into apps like Photoshop, Lightroom, Premiere Pro and Firefly, without incurring the wrath of traditional fans who feel their skills are being sidelined by some of these new tricks.
So if, like me, you're a longtime Creative Cloud user, it's going to be essential viewing...
We're almost ready for kick off (Image credit: Future)
We've taken our spot in the Adobe Max London 2025 venue. As predicted, it's looking a bit more colorful in here than the grey London skies outside.
You can watch the keynote live on the Adobe Max London website, but we'll be bringing you all of the news and our early reactions here – starting in just a few minutes...
And we're off (Image credit: Future)
Adobe's David Wadhwani (Senior VP and general manager of Adobe's Digital Media business) is now on stage talking about the first Max event in London last year – and the early days of Photoshop.
Interestingly, he's talking about the early worries that "digital editing would kill creativity", before Photoshop became mainstream. Definite parallels with AI here...
Jumping forward to Firefly (Image credit: Future)
We're now talking Adobe Firefly, which is evolving fast – Adobe is calling it the "all-in-one app for ideation" with generative AI.
Adobe has just announced a new Firefly Image Model 4, which seems to be particularly focused on "greater photo realism".
A demo is showing some impressive, hyper-realistic portrait results, with options to tweak the lighting and more. Some photographers may not be happy with how easy this is becoming, but it looks handy for planning shoots.
Firefly's video powers are evolving (Image credit: Future)
Adobe's Kelly Hurlburt is showing off Firefly's text-to-video powers now – you can start with text or your own sample image.
It's been trained on Adobe Stock, so it's commercially viable in theory. Oh, and Adobe has just mentioned that Firefly is coming to iOS and Android, so keep an eye out for that "in the next few months".
Firefly Boards is a new feature (Image credit: Adobe)
We're now getting our first look at Firefly Boards, which is out now in public beta.
It's basically an AI-powered moodboarding tool, where you add some images for inspiration then hit 'generate' to see some AI images in a film strip.
A remix feature lets you merge images together and then get a suggested prompt, if you're not sure what to type. It's collaborative too, so co-workers can chuck their ideas onto the same board. Very cool.
You can use non-Adobe AI models too (Image credit: Adobe)
Interestingly, in Firefly Boards you can also use non-Adobe models, like Google Imagen. These AI images can then sit alongside the ones you've generated with Firefly.
That will definitely broaden its appeal a lot. On the other hand, it also slightly dilutes Adobe's approach to strictly using generative AI that's been trained on Stock images with a known origin.
Adobe addresses AI concerns (Image credit: Future)
Adobe's David Wadhwani is back on stage now to calm some of the recent concerns that have understandably surfaced about AI tools.
He's reiterating that Firefly models are "commercially safe", though this obviously doesn't include the non-Adobe models you can use in the new Firefly Boards.
Adobe has also again promised that "your content will not be used to train generative AI". That includes images and videos generated by Adobe's models and also third-party ones in Firefly Boards.
That won't calm everyone's concerns about AI tools, but it makes sense for Adobe to repeat it as a point-of-difference from its rivals.
We're talking new Photoshop features now (Image credit: Future)
Adobe's Paul Trani (Creative Cloud Evangelist, what a job title that is) is on stage now showing some new tools for Photoshop.
Naturally, some of these are Firefly-powered, including 'Composition Reference' in text-to-image, which lets you use a reference image to generate new assets. You can generate videos too, which isn't something Photoshop is traditionally known for.
The new 'Adjust colors' also looks like a handy way to tweak hue, saturation and more, and I'm personally quite excited about the improved selection tools, which automatically pick out specific details like a person's hair.
But the biggest new addition for Photoshop newbies is probably the updated 'Actions panel' (now in beta). You can use natural language like 'increase saturation' and 'brighten the image' to quickly make edits.
It's Illustrator's turn for the spotlight now, with Michael Fugoso (Senior Design Evangelist) – the London audience doesn't know quite what to do with his impressive enthusiasm and 'homies' call-outs.
The headlines are a speed boost (it's apparently now up to five times faster, presumably depending on your machine) and, naturally, some new Firefly-powered tools like 'Text to Pattern' and, helpfully, generative expand (in beta from today).
Because you can never have enough fonts, there are also apparently 1,500 new fonts in Illustrator. That'll keep your designer friends happy.
Premiere Pro gets some useful upgrades (Image credit: Future)
AI is supposed to be saving us from organizational drudgery, so it's good to see Adobe highlighting some of the new workflow benefits in Premiere Pro.
Kelly Weldon (Senior Experience Designer) is showing off the app's improved search experience, which lets you type in specifics like "brown hat" to quickly find clips.
But there are naturally some generative AI tricks, too. 'Generative Extend' is now available in 4K, letting you extend a scene in both horizontal and vertical video – very handy, particularly for fleshing out b-roll.
Captions have also been given a boost, with the most useful trick being Caption Translation – it instantly creates captions in 25 languages.
Even better, you can use it to automatically translate voiceovers – that takes a bit longer to generate, but will be a big boost for YouTube channels with multi-national audiences.
A fresh look at Photoshop on iPhone (Image credit: Future)
It's now time for a run-through of Photoshop on iPhone, which landed last month – Adobe says an Android version will arrive "early this Summer".
There doesn't appear to be anything new here, which isn't surprising as the app's only about a month old.
The main theme is the desktop-level tools like generative expand and adjustment layers – although you can read our first impressions of the app for our thoughts on what it's still missing.
'Created without generative AI'
This is interesting – Adobe's free graphics editor Fresco now has a new "created without generative AI" tag, which you can include in the image's Content Credentials to help protect your rights (in theory). That label could become increasingly important, and popular, in the years ahead.
Lightroom masks get better (Image credit: Future)
One of the most popular new tricks on smartphones is removing distractions from your images – see 'Clean Up' in Apple Intelligence on iPhones and Samsung's impressive Galaxy AI (which we recently pitted against each other).
If you don't have one of those latest smartphones, Lightroom on mobile can also do something similar with 'generative remove' – that isn't new, but from the demos it looks like Adobe has given it a Firefly-powered boost.
But the new feature I'm most looking forward to is 'Select Landscape' in desktop Lightroom and Lightroom Classic. It goes beyond 'Select Sky' to automatically create masks for different parts of your landscape scene for local edits – I can see that being a big time-saver.
A new tool to stop AI stealing your work (Image credit: Future)
This will be one of the biggest headlines from Max London 2025 – Adobe has launched a free Content Authenticity web app in public beta, which has a few tricks to help protect your creative works.
The app can apply invisible metadata, baked into the pixels so it works even with screenshotting, to any work regardless of which tool or app you've used to make it. You can add all kinds of attribution data, including your websites or social accounts, and can prove your identity using LinkedIn verification. It can also describe how an image has been altered (or not).
But perhaps the most interesting feature is a check box that says “I request that generative AI models not use my content". Of course, that only works if AI companies respect those requests when training models, which remains to be seen – but it's another step in the right direction.
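Adobe hasn't published the internals of its durable watermarking, but the basic idea of attaching metadata at the pixel level can be illustrated with a toy example. The least-significant-bit sketch below is purely illustrative and is not Adobe's technique: a production watermark survives screenshots, resizing and re-encoding, which this toy version does not. Only the Pillow calls are real; the functions themselves are hypothetical.

```python
# Toy illustration of pixel-level metadata (LSB steganography).
# NOT Adobe's method: a production watermark is far more robust and,
# unlike this sketch, survives re-encoding and screenshots.
from PIL import Image

def embed(img: Image.Image, payload: bytes) -> Image.Image:
    """Hide payload bits in the least significant bit of the red channel."""
    bits = [(byte >> i) & 1 for byte in payload for i in range(8)]
    out = img.convert("RGB")
    px = out.load()
    w, h = out.size
    assert len(bits) <= w * h, "payload too large for this image"
    for idx, bit in enumerate(bits):
        x, y = idx % w, idx // w
        r, g, b = px[x, y]
        px[x, y] = ((r & ~1) | bit, g, b)
    return out

def extract(img: Image.Image, n_bytes: int) -> bytes:
    """Read n_bytes back out of the red-channel LSBs."""
    rgb = img.convert("RGB")
    px = rgb.load()
    w, _ = rgb.size
    out = bytearray()
    for byte_i in range(n_bytes):
        value = 0
        for bit_i in range(8):
            idx = byte_i * 8 + bit_i
            value |= (px[idx % w, idx // w][0] & 1) << bit_i
        out.append(value)
    return bytes(out)
```

Save the result as a PNG and the payload round-trips; re-save it as a JPEG and the bits are destroyed, which is exactly the gap durable, screenshot-resistant watermarking is designed to close.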
A 'creative civil war' (Image credit: Future)
The YouTuber Brandon Baum is on stage now talking (at some considerable length) about what he's calling the "creative civil war" of AI.
The talk is dragging on a bit, and he may love James Cameron a little too much, but there are some fair historical parallels – like Tron once being disqualified from the 'best special effects' Oscar because using computers was considered 'cheating', and Netflix films once facing calls to be disqualified from the Oscars.
You wouldn't expect anything less than a passionate defense of AI tools at an Adobe conference, and it probably won't go down well with what he calls creative "traditionalists". But AI is indeed all about tools – and Adobe clearly wants to make sure the likes of OpenAI don't steal its lunch.
That's a wrap (Image credit: Adobe)
That's it for the Adobe Max London 2025 keynote – which contained enough enthusiasm about generative AI to power North Greenwich for a year or so. If you missed the news, we've rounded it all up in our guide to the 5 biggest new tools for Photoshop, Firefly, Premiere Pro and more.
The standout stories for me were Firefly Boards (looking forward to giving that a spin for brainstorming soon), the new Content Authenticity Web app (another small step towards protecting the work of creatives) and, as a Lightroom user, that app's new 'Select Landscape' masks.
We'll be getting involved in the demos now and bringing you some of our early impressions, but that's all from Max London 2025 for now – thanks for tuning in.
Bowers & Wilkins has launched, in its own words, "the most advanced and capable wireless headphone the brand has yet made", the Px7 S3.
Based on the very impressive Px7 S2 and Px7 S2e, the new headphones have re-engineered drive units, aptX Adaptive and lossless audio, "greatly upgraded" ANC and an all-new design. And they have their own dedicated headphone amp inside, rather than relying on the amp integrated into the chip platform, as most headphones do.
Let's start with that design. They're visibly slimmer than the Px7 S2e, and the carry case is more compact too. There's a redesigned arm mechanism and a new headband for a closer fit, and Bowers & Wilkins says it's improved the memory foam in the ear cups too. That means more comfort for longer listening, and the spec suggests you're going to want to spend a lot of time inside these over-ears.
We've been testing these headphones, so you don't need to wait for the full-fat, in-depth verdict: our Bowers & Wilkins Px7 S3 review is right there. Spoiler alert: it's five stars. But if you just want the low-down on what's inside, keep reading.
You can customize controls and personalize your headphones via the Music app. (Image credit: Bowers & Wilkins)
Bowers & Wilkins Px7 S3: key features and pricing
The Px7 S3 are a first for the brand: their 40mm biocellulose drivers are powered by a discrete headphone amp (though still built into the unit) that the firm says delivers more scale and energy than you get from the average setup in the best wireless headphones, where the amp isn't customized for the particular driver design.
As for the drivers, they have a redesigned chassis, voice coil, suspension and magnet that deliver lower coloration and distortion, improved resolution and "superior dynamics". As with previous models, the drivers are slightly angled to ensure a consistent distance from each point of the drivers' surface to your ears and deliver a spacious stereo image.
In addition to spacious audio, the Px7 S3 also deliver spatial audio for the first time in a B&W headphone – or at least they will soon. The feature is coming as an over-the-air update later in 2025.
The Px7 S3 have aptX Adaptive 24/96 and aptX Lossless for higher-quality audio over Bluetooth, and their DSP delivers 24-bit / 96kHz sound quality. You can also use the headphones with wired connections: 3.5mm analogue and USB-C cables are included.
Bluetooth LE Audio and Bluetooth Auracast will come this year too, again as an over-the-air update.
The other big improvement here is in the active noise cancellation. According to the firm, "Bowers & Wilkins engineers are confident that Px7 S3 features the most powerful and effective active noise cancelling technology the brand has ever developed."
That's a big claim, but there are eight microphones located around the periphery of each earcup with two measuring the output of each drive unit, four monitoring the ambient noise around you, and two more for "outstanding" vocal clarity.
With ANC on you can expect 30 hours of battery, and a 15-minute quick charge will give you up to 7 hours of playback.
The Px7 S3 are available in most countries from today, 24 April, in a choice of Anthracite Black, Indigo Blue and Canvas White. They are $429 / £399 (we're waiting on Australian pricing, but the UK price translates to around AU$830).
However, the list of countries where they're launching today doesn't include the US. Due to "evolving market conditions", the release date for North and Latin America will be announced shortly.
Across the globe, 70% of data center facility leaders say their national power grid is being stretched to its limits. Now, the sustainability warning bells aren't just ringing, they're deafening.
Behind much of this growing concern is the surging energy demand driven by artificial intelligence (AI) in data centers. To quantify the use of AI, McKinsey’s latest survey highlights that 78% of respondents say their organizations already use AI tools in at least one business function. From workplace productivity gains to life-saving capabilities such as detecting illnesses, AI innovations are second to none.
Data centers, which house the computing power for AI, must now focus on supporting its growth sustainably, ensuring national grids are protected. If not, we might run out of power, leading to data center outages that affect communities and livelihoods. However, according to Cadence’s Innovation Imperative, while 88% of data center operators say they’re actively working to enhance energy efficiency, only three in ten (31%) believe that they’re doing enough.
The good news is that data centers can reduce their energy impact by harnessing AI in smarter ways. AI-powered digital twins – virtual replicas of facilities – help operators shrink their environmental footprint, prevent costly outages, and, ultimately, boost sustainability.
Ultimately, the research shows that data center operators want to make a difference. The challenge is knowing where to start. The first step is uncovering where the real problems lie and what’s truly driving excess energy use.
Apprehension on Energy
Before beginning to invest in efficiency tactics, data centers must assess the challenges in their facilities.
Our latest research shows that three in five (60%) facility leaders overprovision – allocating more resources to a system than necessary – due to concerns that scaling back will cause outages. While it's understandable that facility leaders want reliable systems, overprovisioning also wastes energy, drives up their footprint, and increases operational costs.
And as energy needs rise to power AI, overprovisioning rises with them, bringing immense energy waste.
Another challenge that Cadence uncovered is that many data centers struggle with stranded capacity. This is another unsustainable practice, where installed capacity in the data center cannot be used, and 29% of leaders reported stranded capacity as a constraint.
Picture stranded capacity like a game of Tetris, where data centers are playing five levels at the same time, trying to fit all the systems (blocks) into the data center. Operators are often unaware of doing this and can’t spot the available capacity. Thus, the facility fails to meet its design goals and has a costly impact on the planet.
Furthermore, while high-density servers deliver immense computing power, their high energy requirements can create several challenges for data centers. Currently, 59% of data center operators are using high-density servers, so it is important to make sure they run properly and effectively, with as little stranded capacity and overprovisioning as possible.
This is especially true with rack densities exceeding 100kW, and as high as 600kW, with the latest Rubin architecture that Nvidia presented at GTC.
Addressing These Matters is Not a "Nice-to-Have"
Tackling data center energy challenges is now critical, especially as regulatory factors come into play. This includes stricter reporting requirements, such as the EU's Energy Efficiency Directive, which requires carbon emission reporting.
Local communities are increasingly opposing data center facilities. This is primarily due to claims that such facilities consume large quantities of energy, competing for energy resources and water with the population. Recently, this has become a concern in Virginia, where some residents will soon be neighbors to a 466,000-square-foot data center.
Addressing these issues requires a nuanced, multi-faceted approach. From energy reporting and thermal modeling to capacity planning and workload optimization, digital twins will play a critical role in tackling stranded capacity, reducing excessive energy use in data centers, and allowing them to trial renewable energies.
Reducing Inefficient Resource Allocation
By simulating real-time operations, digital twins enable operators to fully utilize available capacity, optimize energy consumption, and minimize the environmental impact on surrounding areas.
Minimizing overprovisioning is a good place to start to reduce energy consumption. Digital twins, enhanced by AI, offer a powerful solution. Through real-time data and historical trends, operators can create a virtual environment that mirrors the physical facility. This allows them to test different scenarios, evaluate the impact of resource allocation decisions, and identify potential areas of over-provisioning.
It also provides an excellent stranded capacity solution. By integrating sensors and data collection mechanisms, operators can continuously monitor the performance of various components, such as power consumption, cooling efficiency, and overprovisioning. This data can then be analyzed using predictive analytics to identify potential bottlenecks or areas of underutilization.
By proactively addressing these issues, operators can optimize resource allocation and reduce stranded capacity.
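As a rough illustration of the kind of check a digital twin can automate, here's a minimal Python sketch that flags overprovisioned racks by comparing allocated power against observed peak draw. The data shapes and the 30% headroom threshold are assumptions for the example, not Cadence's methodology.

```python
# Minimal sketch: flag racks whose provisioned power far exceeds observed
# peak draw. Thresholds and data shapes are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class RackTelemetry:
    rack_id: str
    provisioned_kw: float   # power allocated to the rack
    peak_draw_kw: float     # highest draw observed over the window

def flag_overprovisioned(racks: list[RackTelemetry],
                         headroom: float = 0.3) -> list[str]:
    """Return rack IDs using less than (1 - headroom) of provisioned power.

    headroom=0.3 means anything peaking below 70% of its allocation is
    flagged as a candidate for consolidation or scaled-back provisioning.
    """
    return [r.rack_id for r in racks
            if r.peak_draw_kw < (1 - headroom) * r.provisioned_kw]

racks = [
    RackTelemetry("A-01", provisioned_kw=30, peak_draw_kw=12),  # flagged
    RackTelemetry("A-02", provisioned_kw=30, peak_draw_kw=26),  # healthy
]
print(flag_overprovisioned(racks))  # ['A-01']
```

A real digital twin runs this comparison continuously against live telemetry rather than a static snapshot, but the underlying question is the same: where is capacity sitting idle?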
Reduce, Reuse, Recycle
The transformative capabilities of digital twins do not end here. Using these tools, data centers can capture and repurpose waste heat from cooling systems for other applications, such as heating buildings or industrial processes. To do so, digital twins can replicate the physical facility and help manage the implementation of the technology. This will reduce energy waste and lower overall carbon emissions.
Repurposing wasted heat is important because the EU Energy Efficiency Directive mandates that data centers with a high level of energy input utilize waste heat or implement other waste heat recovery measures.
In addition, the increasing heat caused by growing server density puts cooling systems under significant strain. Digital twins allow operators to model the effectiveness of alternative cooling methods and explore how these systems interact with the entire infrastructure.
Evaluating Cooling Effectiveness
Re-evaluating data center cooling is an important strategy for reducing energy consumption. Cooling is one of the most energy-intensive elements of data center operations, particularly as AI workloads increase power demands. Digital twins are making it more feasible for data centers to adopt liquid cooling, which is gathering momentum.
At present, 45% of decision-makers use liquid cooling, and a further 19% plan to introduce it in the next year. This is largely because high-density server racks, intensive workloads, and increasing power densities are surpassing the capabilities of traditional air cooling. While air cooling can manage heat loads up to 20kW per rack, loads beyond 20–25kW are more efficiently and cost-effectively handled by a mix of liquid cooling and precision air cooling.
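Reduced to code, the rule of thumb from the figures above looks something like this sketch (the cutoffs come straight from this article; real capacity planning would model airflow, coolant loops and facility constraints, which is exactly what the digital twin is for):

```python
def cooling_recommendation(rack_density_kw: float) -> str:
    """Rule-of-thumb cooling choice from the thresholds cited above."""
    if rack_density_kw <= 20:
        return "air cooling"
    if rack_density_kw <= 25:
        return "air cooling at the margin; evaluate liquid cooling"
    return "liquid cooling plus precision air cooling"

for density in (15, 22, 100, 600):
    print(f"{density} kW/rack -> {cooling_recommendation(density)}")
```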
Using digital twins to implement liquid cooling, data center operators can examine factors that are otherwise difficult to detect or measure, such as overall cooling efficiency. They can assess the pros and cons of various liquid cooling options before investing in technology. The result is a customized solution tailored to the facility’s specific heat load requirements.
Transforming the Data Center Trajectory
Clearly, data centers are serious about improving their environmental impact. However, implementation remains the biggest hurdle. Digital twins are proving to be the sustainability game-changer the industry needs, helping operators move from ambition to action.
Even the process of deploying digital twins drives immediate value, forcing facilities to gather their data, surface blind spots, and build a clear picture of their operations. This alone creates the foundation for smarter, more sustainable decision-making.
Those that turn to digital twins won't just optimize data center performance; they'll unlock a roadmap to a greener, more efficient, and future-proofed data center industry.
This article was produced as part of TechRadarPro's Expert Insights channel, where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing, find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro
Microsoft has been on a mission as of late to brand everything from here to there an Xbox, mostly off the back of the company’s cloud streaming efforts. The latest Xbox? Why that’s your LG TV of course.
LG's latest update to its TV operating system (first teased back in January) brings with it the Xbox app, albeit currently in a beta version. The result is that you can now play any cloud-streamed game on your display without a console. All you'll need is an Xbox controller and a Game Pass Ultimate subscription to enable cloud streaming.
If you have a webOS 24 or webOS 25-compatible TV (to make it easier: if your LG TV launched in 2022 or after, or if it's a 2021 StanbyME display), you'll be able to install the Xbox app right away. There won't be any downloads required except for the app itself. The update will roll out to StanbyME TVs after other webOS 24-compatible models.
This isn’t a first for game streaming mind you: Samsung TVs have supported the Xbox cloud gaming app since 2022 and Amazon Fire TV devices have supported it since 2024.
With your Xbox controller in hand and Game Pass Ultimate subscription active, you’ll be able to play games like The Elder Scrolls IV: Oblivion Remastered, Minecraft, Call of Duty: Black Ops 6 and more from just your TV – just head to the Gaming Portal on your LG display and launch the Xbox app to get started.
Though be prepared for some quality issues. A fast internet connection makes a massive difference to cloud gaming, improving streaming quality and latency alike.
Here in Australia, I’ve personally had a lot of issues with cloud streaming Xbox games on my PC and Samsung projector with my 100Mbps-capable internet plan, but obviously this might be a non-issue for your household.
LG’s TVs are mighty impressive. The LG C4 is currently the top model on TechRadar’s list of the best TVs and the incoming flagship LG G5 scored a whopping five stars in our review. The mid-range LG C5 also scored an impressive five stars, remaining a top pick for folks craving a high-end screen without a gigantic price (the C5 will likely replace the C4 soon on our list).
If you care about having fast and responsive gameplay in the lounge room, you may be better off buying an Xbox console, such as the affordable Series S or powerful Series X, or alternatively you could pick up a PlayStation 5.
But for casual gaming where you might only play a handful of titles, if you’re looking for something to entertain the kids, or if you simply don’t want to pay the full price for a game, Xbox cloud gaming might be worth considering.
Anker has revealed the latest 4K projector in its Nebula range, the X1.
Anker, makers of some of the best projectors, such as the Anker Nebula Mars 3, says the X1 is its highest-performing projector yet. The X1 follows in the footsteps of Anker's Cosmos range, which has delivered some of the best portable projectors, including the Anker Nebula Cosmos Laser 4K.
The 4K-resolution X1 uses an RGB triple laser light engine and is said to deliver 3,500 ANSI lumens. It's capable of displaying images up to 300 inches in size, and its NebulaMaster technology is set to offer a 5,000:1 native contrast ratio and 56,000:1 dynamic contrast ratio. Anker says it's the "perfect backyard projector for daytime and night-time use".
The X1 has four side-firing internal speakers powered by a total of 40W, and a separate pair of wireless speakers is an available option. These speakers have 8 hours of battery life and are USB-C rechargeable.
As an added audio feature, the X1's built-in speakers can be switched to subwoofer mode when combined with the wireless speakers, creating a 4.1.2-channel audio system.
From a design perspective, the X1 can tilt up to 25 degrees, allowing for easy placement on a wall, table or floor. It also features AI Spatial Adaptation, which uses real-time auto focus, auto keystone, auto optical zoom and auto screen fit. There's a built-in micro gimbal for added adaptability, and it comes with a carry handle for easy portability.
The X1 will also support Wi-Fi streaming with Google TV built-in for access to the best streaming services such as Netflix, Prime Video and Disney Plus.
Another new feature is its liquid cooling system, which Anker says will limit fan noise to 26dB (at a distance of 1m).
The Anker Nebula X1 will be available from May 21, starting at $2,199.99 / £2,199.99 (roughly AU$4,595 directly converted). An accessory bundle, with the two wireless speakers, a carry case and two wireless microphones designed with karaoke in mind, is available for $999.99 / £499.99 ($667 / AU$1,042 directly converted). Both will be on sale at Amazon and Nebula in the US and Nebula in the UK.
The ultimate summer projector? (Image credit: Anker)
The headline claim of Anker's X1 launch is that it's perfect for the outdoors, day or night, its 3,500 ANSI lumens of brightness putting it in the same category as the likes of the Samsung Premier 9 and Epson QB100, both of which we classed as 'super-bright'.
Although the X1 sounds like it will be super-bright, even the brightest and best 4K projectors can still struggle with outdoor daytime viewing, as producing enough lumens for a decent image in brighter conditions is a real challenge.
However, 3,500 ANSI lumens is indeed very bright, and with the added benefit of Dolby Vision HDR, the X1 could produce images bright enough for outdoor viewing.
Admittedly, it's not a cheap projector, but compared with other portable projectors with similar specs, such as the JMGO N1S Ultra 4K, it's competitively priced for what it offers.
With the display specs listed and the option for an audio upgrade, plus two wireless microphones for anyone who fancies a bit of karaoke after their movie night, the Nebula X1 really could be the ultimate portable and outdoor projector cinema package. I'll be keen to get my hands on it and see how it fares.
Images may be worth a thousand words, but Character.AI doesn't see any reason the image shouldn't speak those words itself. The company has a new tool called AvatarFX that turns still images into expressive, speaking, singing, gesturing video avatars. And not just photos of people: animals, paintings of mythical beasts, even inanimate objects can talk and express emotion when you include a voice sample and a script.
AvatarFX produces surprisingly convincing videos. Everything from lip-sync accuracy, nuanced head tilts, eyebrow raises, and even appropriately dramatic hand gestures is all there. In a world already swirling with AI-generated text, images, songs, and now entire podcasts, AvatarFX might sound like just another clever toy. But what makes it special is how smoothly it connects voice to visuals. You can feed it a portrait, a line of dialogue, and a tone, and Character.AI calls what comes out a performance, one capable of long-form videos too, not just a few seconds.
That's thanks to the model's temporal consistency, a fancy way of saying the avatar doesn’t suddenly grow a third eyebrow between sentences or forget where its chin goes mid-monologue. The movement of the face, hands, and body syncs with what’s being said, and the final result looks, if not alive, then at least lively enough to star in a late-night infomercial or guest-host a podcast about space lizards. Your creativity is the only thing standing between you and an AI-generated soap opera starring the family fridge. You can see some examples in the demo below.
Avatar alive
Of course, the magical talking picture frame fantasy comes with its fair share of baggage. An AI tool that can generate lifelike videos raises some understandable concerns. Character.AI does seem to be taking those concerns seriously with a suite of built-in safety measures for AvatarFX.
That includes a ban on generating content from images of minors or public figures. The tool also scrambles human-uploaded faces so they’re no longer exact likenesses, and all the scripts are checked for appropriateness. Should that not be enough, every video has a watermark to make it clear this isn’t real footage, just some impressively animated pixels. There’s also a strict one-strike policy for breaking the rules.
AvatarFX is not without precedent. Tools like HeyGen, Synthesia, and Runway have also pushed the boundaries of AI video generation. But Character.AI’s entry into the space ups the ante by fusing expressive avatars with its signature chat personalities. These aren’t just talking heads; they’re characters with backstories, personalities, and the ability to remember what you said last time you talked.
AvatarFX is currently in a test phase, with Character.AI+ subscribers likely to get first dibs once it rolls out. For now, you can join a waitlist and start dreaming about which of your friends’ selfies would make the best Shakespearean monologue delivery system. Or which version of your childhood stuffed animal might finally become your therapist.
When Disney Cruise Line opened its new island destination in the Bahamas – Disney Lookout Cay at Lighthouse Point – it wasn't just a vacation spot for island visitors. Instead, in coordination with its Animals, Science, and Environment (ASE) team, the brand launched a major conservation project that combined wildlife biology with modern technology, including radio telemetry and 3D printing.
While Disney Lookout Cay opened in June 2024, planning had been underway well before then, with the ASE Conservation team included from the start. A key decision was that Disney wouldn’t develop more than 16% of the land.
“We were going to leave a lot of the critical habitat, such as forest habitat, intact for the animals that were already living there,” Lauren Puishys, a Conservation & Science Tech with Disney’s ASE team, explained.
“We created an environmental impact analysis before any construction began,” Puishys said. That then turned into an Environmental Management plan, which was focused on learning about the bird population on the island and protecting them.
Sustainability Week 2025
This article is part of a series of sustainability-themed articles we're running to observe Earth Day 2025 and promote more sustainable practices. Check out all of our Sustainability Week 2025 content.
The team identified key zones on the island that would remain untouched based on where birds were nesting, migrating, or foraging – all gathered through on-the-ground fieldwork. “You're collecting every bird you see, every bird you hear, and you're just writing this down to make observations about how many of these birds are in this region,” Puishys said.
One species quickly emerged as important: the great lizard cuckoo. "They're noisy, they're really cool looking," Puishys explained, calling them 'incredibly smart.' To track the population, though – where the birds moved around the island and where they chose to nest – Puishys and the team combined old techniques with new.
In this case, the team turned to the art of 3D printing to get close to the bird species in question, and then, through radio telemetry, mapped them on the island.
“I need a very specific bird,” Puishys recalled telling her colleague, Jose Dominguez, a member of Disney’s ASE Behavioral Husbandry team. Though he’s 3D modeled a variety of enrichment items for Disney’s Animal Kingdom theme park, he didn’t necessarily have experience modeling birds, so he called on other expert teams at Disney that did.
(Image credit: Disney Parks)
Disney, unsurprisingly, has teams well-versed in 3D modeling using CAD software and tools like Blender. "They were like, 'Oh, absolutely, I would love to work on this,'" explained Dominguez.
They collaborated for months, refining the model through regular Zoom calls. “Lauren provided her input on if it was too big or it needs an extra toe, things like that,” said Dominguez. “Eventually, we got to our desired model shape, the great lizard cuckoo.”
The model was printed in PLA, a plant-based plastic, which Dominguez said is what Disney routinely uses for deployments in "behavior-based enrichment." The model was then coated with the same durable outdoor paint used across Disney's properties: more specifically, "an outdoors acrylic-based UV-resistant paint, and then with a protective clear coating on top."
The outcome? A decoy bird coupled with audio recordings of real bird calls. It was deployed, and it worked.
The Great Lizard Cuckoo model in nature at Disney's Lookout Cay at Lighthouse Point. (Image credit: Disney Parks)
"We had it down there with the speaker underneath it, and we had two different types of calls on there," Puishys said. "At one point, an actual great lizard cuckoo called back and forth to it… So it was actually trying to communicate with the model, which was incredible to see."
Finally, a bird approached the decoy, and Puishys was ready for it. "I was in the woods, out of sight from the cuckoo but in sight of the model, so I could see it myself. And then all I had to do was step out of the woods, and the bird was in the net."
From there, the team attached a solar-powered radio telemetry tag to track the bird. “So there's small solar panels on it with a little antenna, and that's giving off a radio frequency of 434 megahertz,” Puishys said. “We have infrastructure around property on the rooftops of buildings and cell towers that's actually created to pick up that signal, which has an associated identifying eight-digit number and letter code for that animal.”
The Western Spindalis radio telemetry tag, attached by the team's wildlife conservation biologists. (Image credit: Disney Parks)
Thanks to the tag and the infrastructure installed around the island in an unintrusive manner, Puishys can now track bird movements from her desk in Florida.
“We work pulling everything off of the cloud with an API key through the company, and we can just download it all to my desk using RStudio,” she said. “We’ve had it up now since pre-construction and now have over 35 million data points associated with this.”
That data is captured through a highly structured array of nodes across the island – about 25 of them, spaced roughly 400 meters apart.
Further, the data is stored on those nodes, then sent to a sensor station, which processes it and uploads it via a cellular network so that the team can access it from anywhere. That includes Puishys's desk in Florida, and it's the most data the ASE team has ever collected on a terrestrial species.
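Disney hasn't published the system's API, and the team works in RStudio, but the pull-and-filter pattern Puishys describes looks roughly like this in Python. The endpoint, key, and field names below are invented stand-ins, not the real interface.

```python
# Hypothetical sketch of the pull-and-filter workflow described above.
# The endpoint, API key, and field names are invented stand-ins; the
# real system (which the team queries from RStudio) is not public.
from collections import Counter
import requests

API_URL = "https://example-telemetry.invalid/v1/detections"  # placeholder
API_KEY = "YOUR_API_KEY"

def fetch_detections(tag_code: str) -> list[dict]:
    """Download every node detection for one tagged bird.

    tag_code is the eight-character ID transmitted by the 434 MHz tag.
    """
    resp = requests.get(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        params={"tag": tag_code},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["detections"]

def favorite_nodes(detections: list[dict]) -> Counter:
    """Count which of the ~25 island nodes heard the bird most often,
    a crude proxy for where it spends its time."""
    return Counter(d["node_id"] for d in detections)
```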
For Puishys, the most exciting part isn’t just the success of the project – it’s how early they were brought in. “I honestly think our involvement as a Conservation team in the development of Disney Lookout Cay was our biggest leap,” Puishys said. “It kind of blew me away… and it was a big part about why I was so happy to join the team and help out with the project.”
The hope is that this approach – one that blends science, tech, and collaboration – becomes a template for future projects. “We hope that it worked out well enough that we can kind of be an example or a good model for other construction projects moving forward,” Puishys said.
A new pilot program from Microsoft and Western Digital has demonstrated a novel method of recycling rare earth elements (REEs) from decommissioned hard disk drives.
The initiative, developed in collaboration with Critical Materials Recycling (CMR) and PedalPoint Recycling, successfully recovered nearly 90% of rare earth oxides and around 80% of the total feedstock mass from end-of-life drives and related components.
Using materials sourced from Microsoft’s U.S.-based data centers, the project processed approximately 50,000 pounds of shredded HDDs and mountings, converting them into high-purity elemental materials. These can now be reused across key sectors such as electric vehicles, wind energy, and advanced computing.
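For a back-of-the-envelope sense of scale, the headline figures combine like this (a rough calculation from the numbers above, not a published breakdown):

```python
# Rough scale of the pilot, using only the figures cited in this article.
feedstock_lb = 50_000        # shredded HDDs and mountings processed
mass_recovery = 0.80         # ~80% of total feedstock mass recovered
ree_oxide_recovery = 0.90    # ~90% of rare earth oxides recovered

recovered_lb = feedstock_lb * mass_recovery
print(f"~{recovered_lb:,.0f} lb of material returned to the supply chain")
# -> ~40,000 lb of material returned to the supply chain
```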
Old HDDs now have more value
The project employs an acid-free, environmentally friendly recycling process that reduces greenhouse gas emissions by 95% compared to conventional mining and refining.
This approach not only recovers rare earths like neodymium, praseodymium, and dysprosium, which are essential for HDD magnetic systems, but also extracts valuable metals including copper, aluminum, steel, and gold, feeding them back into the U.S. supply chain. It shows that even external hard drives can have an eco-friendly second life.
Despite the critical role of rare earths in cloud infrastructure, current domestic recycling efforts in the U.S. recover less than 10% of these materials.
Meanwhile, over 85% of global REE production remains concentrated overseas, but this pilot aims to change that, offering a scalable, domestic solution that reduces landfill waste, enhances supply chain resilience, and lowers dependence on foreign sources.
“This is a tremendous effort by all parties involved. This pilot program has shown that sustainable and economically viable end-of-life (EOL) management for HDDs is achievable,” said Chuck Graham, corporate VP of cloud sourcing, supply chain, sustainability, and security at Microsoft.
Acid-free dissolution recycling (ADR), a technology developed at the Critical Materials Innovation (CMI) Hub, was central to this achievement.
“This project is significant because HDD feedstock will continue to grow globally as AI continues to drive the demand for HDD data storage,” said Tom Lograsso, director of CMI.
I remember life before YouTube and life after. In the 20 years since Jawed Karim posted that first zoo video, YouTube has become a dominant force in media creation and consumption.
It's built industries and stars and forever altered viewing habits. I'd argue that it's the reason we now get most of our information from social video. And while AI is fast becoming the source for every answer (and some videos), we still get things done with YouTube's voluminous guidance.
As a long-time technology journalist, I'm embarrassed to admit I was a little late to the YouTube revolution, waiting to post my first video until almost 18 months after the initial launch. Even so, that first video made me a convert. I was so excited, I detailed the entire process in a PCMag post.
The video is in some ways emblematic of 2006's state of the art. It's a silent, grainy 800 x 600 pumpkin-carving animation. In hindsight, "Ghost Carves Halloween Pumpkin" looks awful, and yet it set the template for a long and fruitful relationship, which even then featured many of the elements YouTube pros rely on today.
There's the pithy and key title, an accurate (if brief) description, thumbs-up marks (miraculously, no one gave me a thumbs down), and dozens of comments, including many that noticed my less-than-expert animation work.
In those early days, it wasn't entirely clear what YouTube was meant for. Even the crew that launched it – Chad Hurley, Karim, and Steve Chen – could not agree on where the idea came from. At the time, Karim told USA Today that they wanted to build a platform where people could quickly discover highly publicized (trending) stories online. Others recall that the desire was for a place where people could share videos of important life events.
In a way, early YouTube is a reflection of all those intentions. Certainly, my own YouTube library, which is around 260 videos, is proof of that. It took me years to try my hand at becoming an official "YouTuber" but only after I learned the craft by watching thousands of other people's pro-level creations.
There were, however, some who quickly recognized YouTube's storytelling potential. The breakthrough hit "Lonelygirl15" used YouTube's early confessional style to tell a complex story that, for a time, many people believed was real.
The story ran on YouTube for a few years, but it was soon just one of many tales and, as I see it, lost among an explosion of YouTube talent that started using the platform as a way to convey lengthy monologues and details about their interests in science, technology, entertainment, DIY, and more.
We are all made of Stars
YouTube was the first media platform to lower the barrier between filmed content and an audience. You no longer needed a TV network or film producer to greenlight your idea. If you could film and edit it, you could attract an audience.
When my 46-second Pumpkin animation was unexpectedly featured on YouTube's homepage, my views exploded. The short video soon boasted well over 200,000 plays. I spent years trying to recreate that success, but that was another early lesson of YouTube: virality is not promised.
It tickles me when TikTokkers moan about how the algorithm has abandoned them as if every video is supposed to hit 2 million views. YouTubers know all too well the vagaries of a platform and editors (then) and algorithms (now).
YouTube made stars of people like Justin Bieber and Shawn Mendes (don't let people tell you that it was all Vine). YouTubers like MKBHD and iJustine have built and held onto enviably devoted audiences that I think most network television shows would kill for. (If you want to have some fun, visit any of these YouTubers' pages, go to the video tab, and click on the "Oldest" link to see their first YouTube videos.)
In the meantime, YouTube altered our viewing habits and may have helped smooth the way for streaming platforms like Netflix, which launched its streaming platform two years after YouTube.
Watching high-quality videos online was quickly becoming an ingrained habit when Netflix first dumped Lilyhammer on us, but thankfully, it followed with House of Cards.
Over time, YouTube transformed from a place for sharing short, interesting videos into a home for long-form, lean-back experiences. Today, it's stuffed with video podcasts, hour-long videos that couldn't survive on TikTok.
The transition from virality to information happened years ago, though. 2025's YouTube is as much about information as it is about entertainment. Parent company Google certainly assisted in this. How many times have you Googled how to do something and found a YouTube video that shows you exactly how it's done?
I'm not sure how I'd accomplish any unfamiliar task without YouTube's steady tutelage. With it, I've done everything from jump-starting my car to installing a bathroom fan, all under the confident guidance of a YouTube video.
YouTube's knowledge base across a wide range of topics is truly encyclopedic. I challenge you to find a topic that doesn't have a dozen or more video tutorials.
In truth, the world learns differently because of YouTube.
Generation YouTube
A 2022 Pew Research study found that 95% of teens use YouTube. TikTok was close behind, and by now, it may be neck and neck. Still, learning from video and using it as your foundational source for news and forming opinions is all YouTube's doing. I understand that people still watch cable news and form opinions based on specific information bubbles, but online video wasn't a primary news source until YouTube came along.
And it's not just young people. Statista found that people across all age ranges are watching videos, and the next generation will too, as 80% of parents said their under-11-year-olds are also watching YouTube.
I've seen these kids in their strollers, iPad in hand, staring intently at the latest Ms. Rachel video. And with YouTube entering its third decade, we are now living among adults who literally grew up with the platform. They've never known a world without YouTube, and their expectations for content are largely shaped by what they found there.
My point is, we made YouTube, and then YouTube made us. Happy 20th Birthday, YouTube.