Developer Remedy Entertainment has released a new gameplay trailer for its three-player co-op first-person shooter (FPS) FBC: Firebreak as part of the latest Future Games Show.
Set in the same universe as third-person action game Control, FBC: Firebreak sees players working together as part of Firebreak: an elite team within the fictional Federal Bureau of Control (FBC) responsible for protecting the agency against the most dangerous paranormal threats.
The game features Jobs, replayable missions with uniquely designed challenges, objectives, and environments. The gameplay trailer gives us a glimpse at Paper Chase, one such Job where deadly sticky notes have begun replicating at an alarming pace. We see a wide variety of enemies, including a terrifying giant sticky note creature that spews streams of the stationery item between attacks.
You can see the trailer for yourself below.
Remedy has also explained one of the game's key mechanics: Crisis Kits. These are different sets of gear designed to complement both your preferred playstyle and the rest of your team. There's the Jump Kit, which focuses on controlling the battlefield with electrical attacks, the melee-focused Fix Kit, and the water-based Splash Kit.
Every Crisis Kit comes with a unique tool powered by a strange item recovered by the FBC. This includes a garden gnome that can be launched to summon a lightning storm, a regenerating piggy bank you can stick on the end of a wrench for added damage, and a teapot loaded onto a fluid launcher to create blasts of boiling hot water.
Everything looks like a lot of fun and I'm certainly looking forward to diving in with friends. FBC: Firebreak is set to launch in summer and is coming to PC, Xbox Series X, Xbox Series S, and PlayStation 5. It will be part of PC Game Pass, Xbox Game Pass Ultimate, and the PlayStation Plus Game Catalog (via PS Plus Extra and Premium) on day one.
Windows 11 has a new preview out and it does some useful – albeit long-awaited – work in terms of accelerating the rate at which files are pulled out of ZIPs within File Explorer, plus there are some handy bug fixes here – and a minor feature that’s been ditched.
All this is happening in Windows 11 preview build 27818 (which is in the Canary channel, the earliest external test build).
As mentioned, one of the more notable changes means you’ll be able to extract files from ZIPs, particularly large ZIP archives, at a quicker pace in File Explorer.
A ZIP is a collection of files that have been lumped together and compressed so they take up less space on your drive, and unzipping such a file is the process whereby you copy those files out of the ZIP.
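If you're curious what File Explorer is actually doing under the hood, the operation is easy to reproduce; here's a minimal Python sketch using the standard library's zipfile module (the file paths are placeholders):

```python
import zipfile
from pathlib import Path

archive = Path("photos.zip")   # placeholder: any ZIP archive
destination = Path("photos")   # folder to extract into

# Each entry is decompressed and written out as its own file. With many
# small files, the per-file overhead dominates the total time -- exactly
# the scenario Microsoft says this build speeds up.
with zipfile.ZipFile(archive) as zf:
    zf.extractall(destination)
    print(f"Extracted {len(zf.namelist())} files to {destination}/")
```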
File Explorer – which is the name for the app in Windows 11 that allows you to view your folders and files (check here for a more in-depth explanation) – has a built-in ability to deal with such ZIP files, and Microsoft has made this work faster.
Microsoft explains in the blog post for this preview build: “Did some more work to improve the performance of extracting zipped files in File Explorer, particularly where you’re unzipping a large number of small files.”
It’s worth noting that this is a performance boost that only applies to File Explorer’s integrated unzipping powers, and not other file compression tools such as WinRAR or 7-Zip (which, in case you missed it, are now natively supported in Windows 11).
Elsewhere in build 27818, Microsoft has fixed some glitches with the interface – including one in File Explorer, where the home page fails to load and just shows some floating text that says ‘Name’ (nasty) – and a problem where the remote desktop could freeze up.
There’s also a cure for a bug that could cause some games to fail to launch after they’ve been updated (due to a DirectX error), and some other smoothing over of general wonkiness like this.
Finally, Microsoft informs us that it has deprecated a minor feature here. The suggested actions that popped up when you copied a phone number (or a future date) in Windows 11 have been disabled, so these suggestions are now on borrowed time.
Analysis: Curing sluggishness rather than ushering in super-zippy performance
Windows Latest noticed the change to ensure ZIP performance is better in File Explorer with this preview, and tested the build, observing that speeds did indeed seem to be up to 10% faster with larger, file-packed ZIPs.
Clearly, that’s good news – and it’s great to see Microsoft’s assertion backed up by the tech site – but at the same time, this is more about fixing poor performance levels, rather than providing super-snappy unzipping.
Complaints about File Explorer’s unzipping capabilities being woefully slow in Windows 11 date back some time, particularly in scenarios where loads of small files are involved – so really, this is work Microsoft needs to carry out rather than any kind of bonus. If Windows Latest’s testing is on the money, a 10% speed boost (at best) may not be enough to placate these complainers, either, but I guess Microsoft is going to continue to fine-tune this aspect of File Explorer.
There are plenty of other issues to iron out with File Explorer too, as I’ve discussed recently – there are a fair few complaints about its overall performance being lackluster in Windows 11, so this is a much broader problem than mere ZIP files.
Furthermore, Microsoft breaking File Explorer for some folks with last month’s February update doubtless didn’t help any negative perceptions around this central element of the Windows 11 interface.
Philips Hue bulbs and lamps are some of the best smart lights around, and they're already pretty easy to set up; but a new app update has made things even easier, letting you add several lights to a room at once.
Once you've installed app version 5.38, which is available now for Android and iOS, you'll be able to simply scan the QR codes on several Hue devices to add them to the app together, rather than doing them one at a time.
That should be handy if you've splurged on a new set of smart bulbs in the Amazon spring sale, and will reduce headaches if you move house and need to set everything up again.
The editor of Hueblog.com has already experimented by adding a dimmer switch to their (no doubt extensive) setup, and reports that it works perfectly.
You can now use QR codes to add lights to a room in the Philips Hue app, plus other devices like dimmer switches
If the device you want to add doesn't have a QR code, you can bypass the new option by tapping the 'No QR code' button, and the app will find it for you the old-fashioned way, then allow you to assign it to a room.
Still no AI
This is a helpful addition to the Philips Hue app, but we're still waiting for the major software update that will add the generative AI assistant that Signify (the company behind Philips Hue) promised back in January.
According to Signify, the assistant will be able to create "personalized lighting scenes based on mood, occasion or style," and will let you use natural language to describe what you want rather than using a photo as a starting point or picking shades from a color wheel.
The company hasn't announced when the new tool will arrive, but it should be available before the end of the year – hopefully in time to let you describe your perfect festive lighting, and have all your fixtures adapt automatically. I'm dreaming of a bright Christmas.
A few days ago, Apple analyst Jeff Pu claimed in a research note that Apple’s A20 chip – which will come to the iPhone 18 lineup – would offer a disappointing performance increase over past chips. Now, Pu has just reversed course on this idea.
In the original report, Pu claimed that the A20 chip would be made with a 3-nanometer process dubbed N3P. While this is expected to bring improvements to performance and efficiency, they’re only likely to be modest changes compared to the iPhone 17’s A19 chip, which is also likely to be made using a 3nm process.
That was odd because it clashed with another report from Pu’s employer GF Securities, which outlined that Apple would use a 2nm process in the A20.
After being contacted by MacRumors, Pu has updated the report to clarify that the A20 could actually be made using a 2nm process. If correct, this would likely mean much more significant performance increases, and could make the iPhone 18 a tempting prospect if you’re thinking of upgrading your device.
Protecting your iPhone screen
There’s more good news for iPhone fans in the form of a fresh patent uncovered by Patently Apple. Here, Apple describes a new technique that would strengthen the iPhone’s front surface with a mixture of glass and other components.
In the patent, Apple explains that combining several different materials can result in a front iPhone screen that's resistant to scratches, can cut down reflections, and can prevent the screen from becoming burnished over time.
This is done by taking the front glass and applying a hard coating that's resistant to scratches and burnishing. Below that, an 'interference layer' made up of several compounds can be included, which helps to cut down on reflections when you look at the screen. The idea is to give your iPhone a range of different protections without making the display too thick or heavy.
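As a rough illustration of the optics involved (this is textbook thin-film theory, not anything taken from Apple's patent), a single anti-reflective layer works best when its refractive index and thickness satisfy:

$$n_{\text{coating}} = \sqrt{n_{\text{air}}\, n_{\text{glass}}}, \qquad t = \frac{\lambda}{4\, n_{\text{coating}}}$$

so that light reflected from the top and bottom of the layer arrives half a wavelength out of phase and cancels. A single layer only cancels perfectly at one wavelength, which may be why the patent describes an interference layer made up of several compounds – stacking materials extends the effect across the visible spectrum.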
It’s an interesting idea, but we might have to wait a little while until we see it. Apple only filed the patent in September 2024, so it’s very unlikely that this tech has found its way into the iPhone 16 range. Whether it will arrive in the iPhone 17 is anyone’s guess, but with six months to go until Apple reveals its next iPhones, we’ll be keeping our eyes peeled.
Apple Intelligence continues to dominate headlines for everything but its AI capabilities, as Apple now faces a lawsuit for false advertising over its AI-powered Siri.
The lawsuit, which Axios originally reported, claims Apple has falsely advertised its Apple Intelligence software that launched alongside the iPhone 16 lineup of smartphones.
The lawsuit claims that Apple has misinformed customers by creating "a clear and reasonable consumer expectation that these transformative features would be available upon the iPhone's release."
Now, six months after the launch of the iPhone 16 and iPhone 16 Pro, some of the Apple Intelligence features showcased in promotional campaigns have been delayed, with no expected release schedule.
Most notably, the lawsuit highlights an ad starring The Last of Us actor Bella Ramsey, in which Ramsey showcased Siri's AI capabilities, including personal context and on-screen awareness, to help them schedule appointments. That ad, which had been available since September, has now been removed from YouTube following the announcement of Siri's delay.
Filed in San Jose, California, by Clarkson Law Firm, which has previously sued Google and OpenAI, the lawsuit targets Apple's iPhone features that haven't shipped yet and not the capabilities of Apple Intelligence features like Genmoji that have.
You can read the full lawsuit online, but the key argument reads, "Contrary to Defendant's claims of advanced AI capabilities, the Products offered a significantly limited or entirely absent version of Apple Intelligence, misleading consumers about its actual utility and performance. Worse yet, Defendant promoted its Products based on these overstated AI capabilities, leading consumers to believe they were purchasing a device with features that did not exist or were materially misrepresented."
We'll have to wait and see if anything comes of this legal battle, but considering Apple has only delayed Siri's upgrade, we could see the AI improvements launch before anything comes to pass.
Apple Intelligence's redemption arc
Just yesterday, reports of a Siri leadership shakeup started to surface. And, with exec Mike Rockwell expected to be named as the person to oversee the launch of Siri's AI upgrade, there's reason to be optimistic.
Rockwell is known for his impact in bringing Apple Vision Pro to market, and it shows a real effort from the company to overhaul the current Siri approach so that consumers finally get the capabilities promised.
If Rockwell's direction can get Siri back on track, then Apple Intelligence as a whole could still be a success. After all, once the dust settles, if Apple has a capable AI offering in its smartphones, we'll all quickly forget about the lawsuits and the bad press.
That's not to say we shouldn't hold Apple accountable for advertising features that are still not available on a device six months after launch, but if any company deserves a chance at redemption it's the Cupertino-based firm.
Six years on from its initial reveal, System Shock 2: 25th Anniversary Remaster is finally releasing for consoles and PC later this year, following the first System Shock's remaster in 2023.
After a name change (it was originally called System Shock 2: Enhanced Edition) and six years of careful but challenging development, the highly anticipated remaster is finally coming to PS5, PS4, Xbox Series X|S, and PC on June 26, 2025.
A lengthy PlayStation Blog post, written by Nightdive Studios communications manager Morgan Shaver, goes into detail on why the remaster has taken so long to develop. In summary, it's a combination of incomplete source code and developer Nightdive's penchant for attention to detail.
Nightdive's Alex Lima chimes in here, saying that "extensive reverse engineering" was required to have System Shock 2 playable on modern hardware.
“The game engine that System Shock 2 uses is large and complicated,” adds Nightdive's Lexi Mayfield. “It was originally designed for PCs from the late 1990s with a mouse and keyboard and was only used for three games. As a result, porting the game to PlayStation was a long and arduous process, from both a coding and interface perspective.”
For System Shock 2: 25th Anniversary Remaster, players can expect improved visuals as well as support for advanced shaders and much higher refresh rates, leading to much better presentation and performance overall.
Originally a PC exclusive, the game has also received controller support for the first time ever now that it's coming to consoles. Actions like leaning around corners and quick-swapping items, weapons, and psi powers have been "streamlined" for controllers. A new quickbar and context menu should also mean players will spend less time fiddling around in their busy inventories.
Personally, I'm a huge fan of the original System Shock 2. I love almost everything about it, from its terrifying mutated human enemies and horrific atmosphere to an incredible soundtrack that bounces between moody horror and fast-paced, pulse-pounding techno.
The star of the show is undoubtedly SHODAN, a rogue AI that serves as System Shock 2's primary villain. SHODAN is delightfully evil, her warped speech patterns constantly flitting between creepy and silly without ever going overboard in either department. She's so good at both taunting and mocking the player, making for a constantly entertaining and intimidating threat.
Oppo has just teased a first look at its new Watch X2 Mini, and I'm hopeful that it could be our first glimpse of the new smaller version of the OnePlus Watch 3.
Just a few days ago, Oppo CEO Qiao Jiadong took to the Chinese social media platform Weibo to confirm that the company has three new products on the way, including a "small-size full smartwatch" (translated).
Now, Oppo has officially teased the Oppo Watch X2 Mini from the same account. It reveals a gold colorway, possibly a re-designed digital crown, and chamfered edges.
While you might not be interested in the new Oppo Watch X2 Mini, you might be curious to hear that the Oppo Watch X2 is actually just a rebadge of the OnePlus Watch 3. The two smartwatch makers are owned by the same parent company.
As such, this could be our first glimpse of the new smaller version of the OnePlus Watch 3, which is the best Android smartwatch for battery life in 2025.
OnePlus Watch 3's smaller version: What we know
This could well, then, be our first look at the new smaller version of the OnePlus Watch 3, which might well share the same design. But it doesn't tell us much else about the watch.
As reported by NotebookCheck, the smaller X2 Mini is likely to feature a 42mm case, which would suggest that the smaller OnePlus Watch 3 will be the same size. That would make perfect sense given that the larger model is 46mm, making 42mm the likely complementary size.
Naturally, both watches will have a smaller display and less battery life than the larger X2/OnePlus Watch 3. However, Oppo and OnePlus have cracked an excellent dual-processor system that gives the watch industry-leading battery life in this category, and this should still feature – and a smaller touchscreen will also draw less power.
Size aside, OnePlus also says it's working on LTE support beyond China for the OnePlus Watch 3, so stay tuned for that too before the end of the year.
The UK Government has released its guidelines on protecting technical systems from future quantum computers.
The National Cyber Security Centre has set a timeline with key dates for UK industry and government agencies to follow. Firstly, by 2028, all organizations should have a defined set of migration goals and an initial plan, and should have carried out a ‘full discovery exercise’ to assess their infrastructure and determine what must be updated to post-quantum cryptography.
By 2031, organizations should have carried out their highest-priority migration activities, and have a refined roadmap for completing the change. Finally, by 2035, migration should be completed for all systems, services, and products.
Large-scale threats
The UK Government labeled the move a ‘mass technology change that will take a number of years’ - but why is the migration needed?
The Government outlines that the threat to cryptography from future ‘large-scale, fault-tolerant quantum computers’ is now well understood, and that technical systems will need to evolve to reflect this.
“Quantum computers will be able to efficiently solve the hard mathematical problems that asymmetric public key cryptography (PKC) relies on to protect our networks today,“ the guidelines confirm.
“The primary mitigation to the risk this poses is to migrate to post-quantum cryptography (PQC); cryptography based on mathematical problems that quantum computers cannot solve efficiently.”
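To make the threat concrete, here's a toy Python sketch – purely illustrative – of why that matters: RSA-style public key crypto rests on the difficulty of factoring a large number, and Shor's algorithm on a large, fault-tolerant quantum computer would make factoring efficient. At toy sizes, even brute force cracks it instantly:

```python
# Toy illustration only: RSA-style security depends on the difficulty of
# factoring n = p * q. Real keys use moduli thousands of bits long, which
# classical machines can't factor -- but Shor's algorithm on a large
# quantum computer could, which is what PQC migration guards against.
def factor(n: int) -> tuple[int, int]:
    for candidate in range(2, int(n**0.5) + 1):
        if n % candidate == 0:
            return candidate, n // candidate
    raise ValueError("no non-trivial factors")

p, q = 61, 53        # toy 'secret' primes
n = p * q            # the public modulus, n = 3233
print(factor(n))     # (53, 61) -- at this scale the secret falls out instantly
```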
The report warns that the total financial cost of PQC migration could be ‘significant’, so organizations should budget accordingly, including for “preparatory activities” as well as the migration itself.
For SMEs, the PQC migration should be more straightforward and seamless, as services will typically be updated by vendors, but in the case of specialized software, PQC-compatible replacements or upgrades should be identified and deployed in line with the above timetable.
Google's Loss of Pulse Detection recently began rolling out to the United States after receiving clearance from health authorities in February. Now, Google has revealed how exactly it created the life-saving feature, and just what makes it so important.
The Google Pixel Watch 3 is the best Android smartwatch on the market owing to its excellent performance, stylish design, and decent battery life. At launch, it was unveiled with Loss of Pulse Detection, which can alert emergency services and bystanders if the wearer suffers a cardiac arrest.
Now, Google has revealed some of the behind-the-scenes work that went into the feature in pursuit of solving "a seemingly intractable public health challenge."
As Google notes, out-of-hospital cardiac arrest (OHCA) events cause millions of deaths worldwide, with one-half to three-quarters of events going unwitnessed.
Per Google, "About half of unwitnessed OHCA victims receive no resuscitation because they are found too late and attempted resuscitation is determined to be futile."
With OHCA, successful resuscitation is all about time. The chain of survival, which ends with advanced care, starts with access to emergency services or bystanders who can deliver CPR or administer treatment with a defibrillator. However, timely awareness that someone is experiencing OHCA is crucial.
Witnessed events have a 7.7x higher survival rate than unwitnessed events, which is why Loss of Pulse Detection is so vital.
How Google made Loss of Pulse Detection
Google says that its Loss of Pulse Detection relies on a multimodal algorithm using photoplethysmography (PPG), a process that uses light to measure the changes in blood volume, along with accelerometer data from onboard sensors.
There are multiple "gates" that must be passed because the events are so rare, and false positives are less than ideal.
Before an alert goes out, there's data from the PPG sensor (normally used to monitor your heart rate), a machine learning algorithm to check the transition from pulsatile (having a pulse) to pulseless, and further sensor checks to confirm the absence of a weak pulse using further LEDs and photodiodes.
It's all a very technical way of saying your Pixel Watch needs to be absolutely sure your heart has stopped beating before triggering an alert, rather than alerting because a user has taken off their watch, for instance.
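Google hasn't published its code, but the gating pattern it describes is easy to sketch. Here's a hypothetical Python illustration – every function name and threshold below is invented, not taken from Google's implementation – showing how each gate must pass before an alert escalates:

```python
import statistics

def ppg_amplitude_low(ppg, threshold=0.05):
    # Gate 1: the most recent stretch of the PPG waveform has lost its
    # pulse-driven swing (the routine heart-rate signal)
    recent = ppg[-4:]
    return (max(recent) - min(recent)) < threshold

def transition_detected(ppg):
    # Gate 2: stand-in for the ML check that the signal moved from
    # pulsatile to pulseless (here: variance collapses in the later half)
    half = len(ppg) // 2
    return statistics.pvariance(ppg[half:]) < 0.1 * statistics.pvariance(ppg[:half])

def watch_still_worn(accel):
    # Gate 3: confirmation check -- residual wrist motion implies the
    # watch is on a wrist rather than sitting on a table
    return statistics.pstdev(accel) > 0.01

def loss_of_pulse_alert(ppg, accel):
    # Every gate must agree before the escalating alert begins
    return ppg_amplitude_low(ppg) and transition_detected(ppg) and watch_still_worn(accel)

# A pulsatile trace that goes flat, with slight wrist motion still present:
ppg = [0.2, -0.2, 0.21, -0.19, 0.0, 0.001, -0.001, 0.0]
print(loss_of_pulse_alert(ppg, accel=[0.02, 0.05, 0.03, 0.04]))  # True
```

In the real feature, the confirmation stage uses further LEDs and photodiodes rather than motion alone, as described above.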
Google says that during development it partnered with cardiac electrophysiologists and their patients, including patients with scheduled testing of implanted cardiac defibrillators, where Google measured planned temporary heart stoppages.
Google says that the other vital aspect of developing the feature, aside from accuracy, is responsibility. It further detailed the efforts it has made to minimize false positives, and also notes that skin tone is not a barrier to the efficacy of the feature.
Google also says the design accounts for maximizing battery life, using data from sensors that would already be active to trigger subsequent checks, rather than running a background monitoring system all the time.
The full blog is a fascinating insight and well worth the read. As noted, Loss of Pulse Detection is now available in the US, along with all the other territories it is already live in, including the UK and 14 other European countries.
Finding the email you need in a crowded Gmail inbox should finally be a lot easier thanks to another AI-powered new update.
The email provider is rolling out a new, smarter search function that will rank results by relevance, rather than just in chronological order.
Factoring in details such as recency, most-clicked emails, and frequent contacts, the company says this means the emails you’re actually looking for should be far more likely to be at the top of your search results.
“With this update, the emails you’re looking for are far more likely to be at the top of your search results — saving you valuable time and helping you find important information more easily,” the company wrote in a blog post announcing the news.
Users will still be able to search for the most recent results, with Gmail adding a toggle to switch between "Most relevant" and "Most recent" results, based on how they like to search.
Google says the move can help reduce search time, pinpointing the information people are looking for more quickly and accurately.
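Google hasn't detailed the ranking internals, but a weighted blend of the signals it names might look something like this sketch (the fields and weights here are invented for illustration):

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Email:
    subject: str
    received: datetime
    click_count: int             # how often the user has opened this thread
    from_frequent_contact: bool

def relevance_score(email: Email, now: datetime) -> float:
    # Invented weights: recency decays over ~30 days, while clicks and
    # frequent contacts add boosts -- the three signals Google mentions.
    age_days = (now - email.received).total_seconds() / 86400
    recency = max(0.0, 1.0 - age_days / 30)
    return 2.0 * recency + 0.5 * email.click_count + (1.0 if email.from_frequent_contact else 0.0)

inbox = [
    Email("Invoice", datetime(2025, 3, 1, tzinfo=timezone.utc), 5, True),
    Email("Newsletter", datetime(2025, 3, 20, tzinfo=timezone.utc), 0, False),
]
now = datetime(2025, 3, 21, tzinfo=timezone.utc)

# The two orderings behind the new toggle:
most_relevant = sorted(inbox, key=lambda e: relevance_score(e, now), reverse=True)
most_recent = sorted(inbox, key=lambda e: e.received, reverse=True)
print([e.subject for e in most_relevant])  # ['Invoice', 'Newsletter']
print([e.subject for e in most_recent])    # ['Newsletter', 'Invoice']
```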
The feature is rolling out now to personal Google accounts across the world, and will be available on the Gmail app for Android and iOS, with business users also set to receive the feature soon.
Super-slim phones are great from an aesthetic standpoint, but the shrunken form factor can lead to performance constraints – so it's reassuring to see a new Samsung Galaxy S25 Edge benchmark leak that suggests it's going to be up to speed with the rest of the series.
As per the benchmark (via SamMobile), the Galaxy S25 Edge looks set to come with the 8-core Qualcomm Snapdragon 8 Elite inside. That's the variant with the higher clock speed that we've seen in other Samsung devices, including the Samsung Galaxy S25.
What's more, the single-core score of 2,969 and the multi-core score of 9,486 suggest that performance is going to be on a par with the Galaxy S25 Ultra – the phone we described as "the ultimate Android" in our Samsung Galaxy S25 Ultra review.
There are some caveats here: this could well be a Galaxy S25 Edge running unfinished software, for example. But it's good to see these early results pointing in the right direction, ahead of the phone's expected launch in April.
Staying cool
The other phones in the series, including the Galaxy S25, are already available to buy
The chipset fitted inside a phone doesn't always tell the whole story of its performance potential: to prevent that chipset from overheating and crashing the phone, it'll be accompanied by various safety measures and cooling features.
How effective that cooling is – and thus how fast the chipset can run – depends on multiple factors, but generally speaking, the more space available, the better the cooling (which is why desktop PCs can be much more powerful than laptops).
While the Samsung Galaxy S25 Edge is rumored to be a mere 5.84mm thick, front to back, it's also expected to be taller and wider than the standard Galaxy S25. That could well mean Samsung can fit in a more advanced cooling system.
All should be revealed within the next few weeks, when Samsung unveils the phone in full – after giving us brief glimpses of what it looks like. It seems very likely Apple will follow with its own super-slim phone later in the year, the iPhone 17 Air.
There’s both good and bad Pixel news today, but the good news will affect more people than the bad, so let’s start there.
Reddit users are finding that Pixel phones with Tensor chipsets (meaning everything from the Google Pixel 6 onwards) are achieving much higher GPU scores on Geekbench 6 than they did at launch. This is widely being attributed to the Android 16 beta, but Android Authority reports seeing similarly upgraded performance on Android 15.
So chances are you don’t need to grab a beta version of Android to see improvements, but rather that recent stable software updates have massively boosted GPU performance.
The exact boost varies depending on model, but Android Authority claims its Pixel 6a unit saw a nearly 23% GPU performance increase, while elsewhere there are reports of a 62% improvement for the Pixel 7a, a 31% improvement for the Pixel 8, and even a 32% improvement for the recent Google Pixel 9.
Android Authority speculates that Google achieved this through including newer GPU drivers in recent Android updates, as while all recent Pixels use an Arm Mali GPU, they don’t always ship with the latest available GPU driver version.
How much impact these performance improvements will have in the real world remains to be seen, but they’re nice to see, and could help extend the lifespan of older Pixel models.
No Satellite SOS for the Pixel 9a
Now for the bad news, and this relates specifically to the new Google Pixel 9a, which we’ve learned doesn’t support Satellite SOS. Google confirmed as much to Android Authority, and this is a feature found on other Google Pixel 9 models which allows you to contact emergency services in areas without Wi-Fi or cell signal.
So it’s a potentially life-saving tool, and while Google didn’t say why it’s absent here, it’s likely because the Pixel 9a uses an older Exynos Modem 5300, rather than the 5400 used by the rest of the Pixel 9 series.
While this is a feature that you’ll hopefully never need to use, it would be reassuring to have, and this isn’t the only omission in the Pixel 9a, as we also recently learned that it lacks several AI tools offered by the rest of the Pixel 9 line.
In fact, this phone has had a slightly troubled launch, with not just these omissions emerging, but also a delay in sales of the phone while Google investigates a “component quality issue”.
Still, the silver lining is that the delay has allowed time for these omissions to be uncovered, so you can think twice before buying the Google Pixel 9a. Certainly, we’d wait until we’ve had a chance to put it through a full review before purchasing one.
You might also likeWhile the Hitman: World of Assassination trilogy has been a stand-out success across PlayStation, Xbox, and PC its transition from flat gaming to VR has been a tough ride. Exploring IO Interactive’s sandbox levels in virtual reality has its charm, graphics woes, lacking motion controls, and general bugginess have negatively impacted prior releases across PSVR, Steam, and recently the Meta Quest 3.
But fourth time’s the charm, so to speak, as with the latest Hitman: World of Assassination release on PSVR 2, IOI has seemingly cracked the VR formula – at least based on my experience in a roughly hour-and-a-half-long demo.
I’ve been looking for an excuse to get back into Hitman, and this is it – it really could be the next best PSVR 2 game.
Getting to grips
My day started off smoothly. I was whisked away to Sapienza – a fictional Italian coastal town introduced in Hitman (2016) – with the goal of eliminating Silvio Caruso, Francesca De Santis, and the biological weapon they’ve created, taking out the two human targets with an exploding golf ball and a sniper rifle, respectively.
One shot, one kill
Here I got to grips with developer IO Interactive’s ultimate take on what a VR Hitman should be. As expected you’re thrust into a first-person view, with this PSVR 2 interpretation featuring a suite of motion controls to replace the usual button prompts. Reloading a firearm is an involved process – you have to manually eject the empty cartridge, grab and insert a new one, then cock the pistol to be able to fire again – and to break into areas you aren’t allowed to enter you’ll need to pull out your lockpick, the stolen key card you swiped, or your trusty crowbar to physically crack open the barrier in your way.
The only time you don't have to manually do Agent 47’s job for him is when you’re blending in or climbing.
IO Interactive told me that while some players say they want to stay in first-person the whole time and perform 47’s blending-in techniques for themselves, that doesn’t work for the gameplay as a whole.
Blending in is a time for players to catch their breath, take stock of their situation, and watch out for people hunting them or those who could rumble their disguise – a third-person view facilitates this in a way a first-person one can’t, and from playing the game I can see what they mean. Climbing in third-person also has the added benefit that it’s less nauseating for many than the first-person alternative.
However, the team has found other ways to use VR to make this PSVR 2 version more than a simple port, such as dual-wielding. The obvious application is going into a mission with dual-wielded guns blazing and forgoing Agent 47’s ‘Silent Assassin’ reputation, but it also enables new takedown techniques.
With a blunt object in each hand, you can knock out two guards simultaneously, making it easier to sneak around undetected and complete a mission with that important Silent Assassin, Suit Only rating.
Much better than Hitman's PSVR and Steam attempts
A whole world to explore
Speaking of Hitman: World of Assassination as 'just' a PSVR 2 port, this is the (almost) full-on World of Assassination package, but in VR.
Some missions have been cut, at least for now – the IOI team gave me the impression it wants to bring these levels to VR eventually – such as the bonus Patient Zero campaign, Hitman 2’s sniper missions, and some of the more elaborate Elusive Targets, like the recent The Splitter mission featuring Jean-Claude Van Damme.
Otherwise, everything’s there. In Sapienza, I was delighted to see the Kraken easter egg was still present – even if I didn’t quite have the time or aim to solve it – and in Berlin, I took on The Drop Elusive Target mission starring real-world DJ Dimitri Vegas.
I also noticed that everything ran fairly smoothly. Even on Berlin’s crowded dance floor and at Miami’s packed car race event I didn’t experience any noticeable stuttering. Graphics-wise it's a step down from what you’ll be used to in the flat PS5 game; however, it didn’t look bad by any stretch – though I’ll want to test the game out further before passing a final judgment on the performance.
And returning to the Hitman PSVR 2 experience is something I can’t wait for. I love the Hitman trilogy and this PSVR 2 version has truly done it justice in a way I’m sure many players feel the other VR attempts haven’t quite managed to.
The full VR game releases on March 27 as a $9.99 / £8.99 add-on to the original PS5 game (which you’ll also need to own), and I’ll be one of the first in line.
AI continues to spark debate and demonstrate remarkable value for businesses and consumers. As with many emerging technologies, the spotlight often falls on large-scale, infrastructure-heavy, and power-hungry applications. However, as the use of AI grows, there is mounting pressure on the grid from large data centers, with intensive applications becoming much less sustainable and affordable.
As a result, there is a soaring demand for nimbler, product-centric AI tools. Edge AI is leading this new trend by bringing data processing closer to (or embedding it within) devices, on the tiny edge, meaning that basic inference tasks can be performed locally. By not sending raw data off to the cloud via data centers, we are seeing significant security improvements in industrial and consumer applications of AI, which also enhances the performance and efficiency of devices at a fraction of the cost of the cloud.
But, as with any new opportunity, there are fresh challenges. Product developers must now consider how to build the right infrastructure and the required expertise to capitalize on the potential of edge.
The importance of local inference
Taking a step back, we can see that AI largely encompasses two fields: machine learning, where systems learn from data, and neural network computation, a class of models designed to mimic the human brain. These are complementary ways to program machines, training them to do a task by feeding them relevant data to ensure outputs are accurate and reliable. These workloads are typically carried out at a huge scale, with comprehensive data center installations to make them function.
For smaller industrial use cases and consumer applications – whether a smart toaster in your kitchen or an autonomous robot on a factory floor – it is not economically (or environmentally) feasible to push the required data and analysis for AI inference to the cloud.
Instead, with edge AI presenting the opportunity of local inference, ultra-low latency, and smaller transmission loads, we can realize massive improvements to cost and power efficiency, while building new AI applications. We are already seeing edge AI contribute towards significant productivity improvements for smart buildings, asset tracking, and industrial applications. For example, industrial sensors can be accelerated with edge AI hardware for quicker fault detection, as well as predictive maintenance capabilities, to know when a device’s condition will change before a fault occurs.
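As a concrete illustration of the kind of inference that fits on the edge, a hypothetical fault detector can compare simple features of each new sensor window against a baseline learned during normal operation, with no cloud round-trip. A minimal Python sketch, with invented thresholds:

```python
import math

# Minimal edge-style fault detector (illustrative only): compare the RMS
# energy of each vibration window against a baseline learned during normal
# operation, and flag drift before it becomes a hard failure.

def rms(window):
    return math.sqrt(sum(x * x for x in window) / len(window))

class VibrationMonitor:
    def __init__(self, baseline_windows, tolerance=1.5):
        baseline = [rms(w) for w in baseline_windows]
        self.baseline_rms = sum(baseline) / len(baseline)
        self.tolerance = tolerance   # fault if energy exceeds 1.5x baseline

    def check(self, window):
        # Returns True when the window looks faulty -- a test cheap enough
        # to run on a microcontroller, window by window.
        return rms(window) > self.tolerance * self.baseline_rms

monitor = VibrationMonitor(baseline_windows=[[0.1, -0.1, 0.12, -0.09]] * 10)
print(monitor.check([0.5, -0.4, 0.45, -0.5]))   # True: vibration energy has jumped
```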
Taking this further, the next generation of hardware products designed for edge AI will introduce specific adaptations for AI sub-systems to be part of the security architecture from the start. This is one area in which embedding the edge AI capability within systems comes to the fore.
Embedding intelligence into the product
The next stage in the evolution of embedded systems is introducing edge AI into the device architecture, and from there to the “tiny edge”. This refers to tiny, resource-constrained devices that process AI and ML models directly on the edge, including microcontrollers, low-power processors and embedded sensors, enabling real-time data processing with minimal power consumption and low latency.
A new class of software and hardware is now emerging on the tiny edge, making it possible to execute AI operations in the device. By embedding this capability within the architecture from the start, we are making the ‘signal’ itself become the ‘data’, rather than wasting resources transforming it. For example, tiny edge sensors can gather data from the environment a device is in, leveraging an in-chip engine to produce a result. In the case of solar farms, sensors within a solar panel can specifically detect nearby arc faults across power management systems. When extreme voltages occur, the system can automatically trigger a shutdown failsafe and avoid an electrical fire.
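The failsafe pattern itself is even simpler: the decision loop lives on the device, so a dangerous reading can trip a shutdown in milliseconds rather than waiting on a network. A hedged sketch, with an invented threshold:

```python
# Illustrative arc-fault failsafe: the decision runs locally, so a
# dangerous reading isolates the panel immediately, no cloud round-trip.
ARC_VOLTAGE_LIMIT = 600.0   # invented threshold, volts

def on_voltage_sample(volts, shutdown):
    if volts > ARC_VOLTAGE_LIMIT:
        shutdown()           # trip the failsafe first, log afterwards

def cut_panel_power():
    print("Arc fault suspected: panel isolated")

for sample in [420.0, 431.5, 987.2]:    # simulated sensor stream
    on_voltage_sample(sample, cut_panel_power)
```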
With applications like arc fault detection as well as battery management or on-device face or object recognition driving growth in this space, we will see the market for microcontrollers capable of supporting AI on the tiny edge grow at a CAGR of over 100% (according to ABI Research). To realize this potential, more work is needed to bridge the gap between the processing capabilities of cloud-based AI and targeted applications from devices that are capable of working on, or being, the edge.
However, like with any new technology: where there is a demand, there is a way.
We are already seeing meaningful R&D results focused on this challenge, and tiny AI is starting to become embedded in all types of different systems – in some cases, consumers are already taking this technology for granted, literally talking to devices without thinking ‘this is AI’.
Building edge AI infrastructure
To capitalize on this emerging opportunity, product developers must first consider the quality and type of data that goes into edge devices, as this determines the level of processing, and the software and hardware required to deal with the workload. This is the key difference between typical edge AI, operating on more powerful hardware capable of handling complex algorithms and datasets, and tiny AI, which focuses on running lightweight models that can perform basic inference tasks.
For example, audio and visual information – especially visual – is extremely complex and needs a deep neural architecture to analyze. On the other hand, it is less demanding to process data from vibrations or electric current measurements recorded over time, so developers can utilize tiny AI algorithms to do this within a resource-constrained or ultra-low power, low latency device.
It is important to consider the class of device and microcontroller unit needed in the development stage, based on the specific computational power requirements. In many cases, less is more, and running a lighter, tiny AI model improves the power efficiency and battery life of a device. With that said, whether dealing with text or audio-visual information, developers must still undertake pre-processing, feeding large quantities of sample data into learning algorithms to train the AI using them.
What’s on the horizon?
The development of devices that embed AI into the tiny edge is still in its infancy, meaning there’s scope for businesses to experiment, be creative, and figure out exactly what their success factors are. We are at the beginning of a massive wave, which will accelerate digitalization in every aspect of our lives.
The use-cases are vast, from intelligent public infrastructure, such as the sensors required for smart, connected cities, to remote patient monitoring through non-invasive wearables in healthcare. Users are able to improve their lives, and ease daily tasks, without even realizing that AI is the key factor.
The demand is there, with edge AI and tiny AI already transforming product development, redefining what’s classified as a great piece of technology, enabling more personalized predictive features, security, and contextual awareness. In just a few years, this type of AI is going to become vital to the everyday utility of most technologies – without it, developers will quickly see their innovations become obsolete.
This is an important step forward, but it doesn’t come without challenges. Overcoming these challenges can only happen through a broader ecosystem of development tools, and software resources. It’s just a matter of time. The tiny edge is the lynchpin through which society will unlock far greater control and usefulness of its data and environment, leading to a smarter AI-driven future.
There’s a magic in the very name of Wonka! And now Netflix has given a green light to The Golden Ticket, a reality competition series inspired by Willy Wonka, the legendary candy maker who first appeared in Roald Dahl’s 1964 children’s classic, Charlie and the Chocolate Factory.
When I was a precocious California child of the ‘70s – back when the last Ice Age melted – celebrated author Roald Dahl’s imaginative kids’ book, centered on an eccentric confectioner and a poor European lad who finds a Golden Ticket to tour Willy Wonka’s mysterious headquarters, was always the first title I grabbed off my bedroom bookshelf whenever I stayed home from school.
Years later, I was enthralled to see my all-time favorite book adapted into a Hollywood feature film, Willy Wonka and the Chocolate Factory, starring Gene Wilder as the kooky cacao wizard. I even enjoyed (to a lesser degree) director Tim Burton’s Johnny Depp-led version from 2005 and Wonka, the most recent musical iteration by filmmaker Paul King starring Timothée Chalamet.
Netflix’s next appetizing TV series based on Roald Dahl's story, aptly called The Golden Ticket, is best described as mixing the ingredients of logistical strategy and fun interactive gameplay, as sugar-crazed contestants use their Golden Ticket to gain entry to a “retro-futuristic” candy-making factory and negotiate a number of chocolatey challenges to complete the various objectives.
“We are thrilled to bring the magic of The Chocolate Factory to life like never before,” said Jeff Gaspin, VP of unscripted material at Netflix. “This one-of-a-kind reality competition blends adventure, strategy and social dynamics, creating an experience that is as captivating as it is unpredictable. For the first time, a lucky few won’t just have to imagine the experience — they’ll get to step inside the factory and live it.”
With the profusion of popular cooking shows, obstacle challenges, and food-based reality programs scattered across the streaming landscape these days, a chocolate-coated project centered on Roald Dahl’s masterpiece seems destined for instant success. Eureka Productions (The Mole, Dating Around, TwentySomethings Austin) will serve as the series producers.
Netflix purchased the rights to Roald Dahl’s entire catalog of intellectual property back in 2021, which includes books such as The Fantastic Mr. Fox, Matilda, James and the Giant Peach, The BFG, The Witches, and the sequel to Charlie and the Chocolate Factory titled Charlie and the Great Glass Elevator. This endeavor will be Netflix’s first dip into the world of Willy Wonka.
Although there’s no official release date yet for Netflix’s The Golden Ticket, we’ll be sure to deliver the full scoop on any upcoming details and developments for what is sure to be one of the best Netflix shows.
Hot on the heels of its announcement that NotebookLM's Audio Overviews are now available in Gemini, Google has revealed that a new feature, Mind Maps, will now be available as an option in NotebookLM.
Mind maps are great at helping you understand the big picture of a subject in an easy-to-understand visual way. They consist of a series of nodes, usually representing ideas, with lines that represent connections between them.
The beauty of mind maps is that they lay out the connections between ideas visually, making relationships obvious that might otherwise stay buried in your notes.
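Under the hood, a mind map is just a graph: nodes for ideas, edges for connections. Here's a toy Python sketch of the structure (nothing here reflects NotebookLM's actual implementation):

```python
from collections import defaultdict

# A mind map as a simple adjacency structure: each idea is a node, and
# each edge records a connection between two ideas.
class MindMap:
    def __init__(self, root):
        self.root = root
        self.children = defaultdict(list)

    def connect(self, parent, child):
        self.children[parent].append(child)

    def show(self, node=None, depth=0):
        node = node if node is not None else self.root
        print("  " * depth + node)    # indentation mirrors expanding a branch
        for child in self.children[node]:
            self.show(child, depth + 1)

m = MindMap("Photosynthesis")
m.connect("Photosynthesis", "Light reactions")
m.connect("Photosynthesis", "Calvin cycle")
m.connect("Light reactions", "ATP production")
m.show()
```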
Another string to its bow
NotebookLM is Google’s AI research helper. You feed it articles, documents, even YouTube videos, and it produces a notebook summarizing the main points of the subject; you can then chat to it and ask questions, as you would with a normal AI chatbot.
Its best feature is that you can also generate an Audio Overview in NotebookLM, which is an AI-generated podcast between two AI hosts that discusses the subject, so you can listen to it and absorb the key points while doing something else at the same time. The Audio Overview can sound so natural it’s hard to believe you’re not listening to two humans talking!
Now Mind Maps have been added as another string to NotebookLM’s bow for helping you absorb information. They work in either the standard free version of NotebookLM or the paid-for Plus version.
Better understanding
To generate a Mind Map you simply open one of your notebooks in NotebookLM, or create a new one, then click on the new Mind Map chip in the Chat window (the central panel).
Once you’re viewing your Mind Map (it appears in the Studio panel once it has been generated) you can zoom in or out, expand and collapse branches, and click on nodes to ask questions about specific topics.
NotebookLM is shaping up to be an essential tool for students who have a lot of information to digest and don’t necessarily read very quickly. You can get AI to do a lot of the legwork for you and present you with the key bits of information, and Mind Maps are just another way for NotebookLM to help you on your path to better understanding.