
TechRadar News


HP agrees million-dollar settlement over "false advertising" on PCs, keyboards

Fri, 04/18/2025 - 07:02
  • HP has agreed to settle a lawsuit by forking out $4 million
  • It relates to misleading pricing on PCs and peripherals ‘on sale’
  • Eligible consumers are those who made purchases between 2021 and 2024

HP has agreed to pay a $4 million settlement over allegations of false advertising on its website relating to its sale of computers and peripherals.

The settlement stems from a lawsuit originally filed in October 2021. Preliminary approval for the class-action settlement was granted by a US District Judge earlier this month.

According to the lawsuit, HP allegedly showed misleading original prices on its website for some PCs, mice and keyboards, making it appear as though they were much cheaper than usual when on sale.

Misleading prices

Strike-through prices on sale items made products appear more heavily discounted than they actually were; some were rarely or never sold at the original price at all.

HP’s $4 million payment will go toward “Settlement Class members’ claims; court-approved Notice and Settlement Administration Costs; court-approved Settlement Class Representatives’ Service Award; and court-approved Settlement Class Counsel Attorneys’ Fees and Costs Award,” the judge’s approval confirms. “All residual funds will be distributed pro rata to Settlement Class members who submitted valid claims and cashed checks.”

The lawsuit applies to customers who bought HP desktops, laptops, mice or keyboards advertised as being discounted for more than 75% of the time between June 5, 2021 and October 28, 2024.

One of the examples given was a $999.99 HP All-in-One machine bought by a plaintiff in September 2021. It was advertised as having $100 off, marked down to $899.99; however, it had been sold at that lower price since April 2021.

The three pages of eligible models shared by Ars Technica include HP Spectre, Chromebook, Envy, Pavilion and Omen machines.

Although HP has agreed to pay a multimillion-dollar settlement, it hasn’t technically admitted to any wrongdoing. TechRadar Pro has asked HP for a reaction to the agreement but did not receive an immediate response.

Via Ars Technica

Categories: Technology

A critical Erlang/OTP security flaw is "surprisingly easy" to exploit, experts warn - so patch now

Fri, 04/18/2025 - 06:22
  • Security researchers find a 10/10 flaw in Erlang/OTP SSH
  • Horizon3 Attack Team says the flaw is "surprisingly easy" to exploit
  • A patch is available, so users should update now

Erlang/OTP SSH, a set of libraries for the Erlang programming language, carries a maximum-severity vulnerability that allows for remote code execution and is “surprisingly easy” to exploit, researchers are warning.

A team of cybersecurity researchers from Ruhr University Bochum in Germany recently discovered a flaw in the handling of pre-authentication protocol messages that affects all versions of Erlang/OTP SSH. It is tracked as CVE-2025-32433 and carries a severity score of 10/10 (critical).

Erlang/OTP SSH is a module within the Erlang/OTP standard library that provides support for implementing Secure Shell (SSH) clients and servers in Erlang applications.


Remote code execution

Erlang is a functional programming language and runtime system designed for building highly concurrent, distributed, and fault-tolerant systems. It was originally developed by Ericsson for use in telecoms, but has expanded into messaging systems, databases, and other applications where uptime and scalability are critical.

"The issue is caused by a flaw in the SSH protocol message handling which allows an attacker to send connection protocol messages prior to authentication," a warning on the OpenWall vulnerability mailing list reads.

Soon after the news broke, security researchers from the Horizon3 Attack Team tried to reproduce the flaw and found it to be “surprisingly easy”, which should be cause for concern.

“Just finished reproducing CVE-2025-32433 and putting together a quick PoC exploit — surprisingly easy,” the team said on X. “Wouldn’t be shocked if public PoCs start dropping soon. If you’re tracking this, now’s the time to take action.”

Taking action means applying the patch, which is now available and mitigates the risk. Since all older versions are vulnerable, all users are advised to upgrade to version 25.3.2.10 or 26.2.4.
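If you want to confirm a local install is on a patched release, a quick script can help. Below is a minimal, unofficial Python sketch, assuming Erlang's erl binary is on the PATH and using the patched version numbers cited above; treat the release map as an assumption to verify against the official advisory.

    # Unofficial sketch: ask the local Erlang install for its full OTP version
    # and compare it against the patched releases cited in the article.
    # Assumes `erl` is on PATH.
    import subprocess

    PATCHED = {25: (25, 3, 2, 10), 26: (26, 2, 4)}  # patched release per line

    # Erlang one-liner that prints the contents of the OTP_VERSION file.
    EVAL = (
        '{ok, V} = file:read_file(filename:join([code:root_dir(), '
        '"releases", erlang:system_info(otp_release), "OTP_VERSION"])), '
        'io:format("~s", [V]), halt().'
    )

    out = subprocess.run(
        ["erl", "-noshell", "-eval", EVAL],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    version = tuple(int(p) for p in out.split("."))

    minimum = PATCHED.get(version[0])
    if minimum is None:
        print(f"OTP {out}: release line not covered by this sketch")
    elif version >= minimum:
        print(f"OTP {out}: at or above the patched release")
    else:
        print(f"OTP {out}: vulnerable to CVE-2025-32433 - upgrade now")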

Threat actors are most active in the short window between a patch being released and users applying it. Most organizations are not that diligent when it comes to patching, giving cybercriminals a relatively easy exploit avenue.

Via BleepingComputer

Categories: Technology

Google "could face breakup" after being found guilty of having illegal ad tech monopolies

Fri, 04/18/2025 - 05:27
  • A US judge has deemed Google violated antitrust laws
  • It reportedly monopolized the ad market by tying two of its products together
  • Google could have to sell off parts of its business, but remedies are yet to be confirmed

A Virginia District Judge has ruled Google violated antitrust laws by “willfully acquiring and maintaining monopoly power” in the advertising technology market, spelling potentially grave consequences for the tech giant.

The ruling follows a 2023 lawsuit by the Department of Justice, backed by eight separate US states, accusing the company of harming rivals, publishers and consumers online.

Google was specifically found guilty of monopolizing the market by tying together two parts of its adtech stack – DoubleClick for Publishers (DFP) and Ad Exchange (AdX).

Judge rules that Google violated antitrust laws

Despite the findings, the judge did not find a monopoly in advertiser ad networks, representing a partial win for Google.

Although Google has been found guilty, the judge did not determine any remedies. A separate court hearing will set out what Google must do to comply with antitrust laws and set straight any violations. Consequences could include breaking up Google’s ad business, such as selling off Google Ad Manager, and further behavioral remedies like prohibiting Google from self-preferencing in ad auctions.

“Having found Google liable, the Court will set a briefing schedule and hearing date to determine the appropriate remedies for these antitrust violations,” the decision confirms.

Noting the continued employment of anticompetitive business practices for more than a decade, the judge said: “In addition to depriving rivals of the ability to compete, this exclusionary conduct substantially harmed Google’s publisher customers, the competitive process, and, ultimately, consumers of information on the open web.”

“We won half of this case and we will appeal the other half. The Court found that our advertiser tools and our acquisitions, such as DoubleClick, don’t harm competition," Google’s VP of Regulatory Affairs Lee-Anne Mulholland told TechRadar Pro.

"Publishers have many options and they choose Google because our ad tech tools are simple, affordable and effective."

Google is also in hot water over its search market dominance – nine in 10 (89.7%) internet searches tracked by Statcounter used Google. Bing, in second place, accounted for just 4%.

If the company is found guilty of violation there, it could also be forced to sell off its Chrome business, a browser that accounts for two in three (66.2%) browser sessions globally. That case is ongoing.

Categories: Technology

The iPhone 16 Pro Max helped me see – with a little help from the Samsung Galaxy S25 Ultra

Fri, 04/18/2025 - 04:30

My eyesight sucks. A detached retina and the subsequent operations to fix it, and the rise of glaucoma as a result, mean most of the vision in my left eye has gone. My right eye, on the other hand, is very short-sighted, meaning I can see bits of floating debris in the vitreous liquid behind the surface of my eye, which is distracting, and my current contact lens isn’t sitting correctly.

So while I can see, spotting fine details or seeing things clearly at a distance is a pain in the proverbial posterior. This harsh reality slammed into me particularly hard during a recent bachelor-party trip to Berlin. While steins of good German beer were consumed, we also did a lot of sightseeing – or at least my friends did, as I spent a good bit of time squinting.

That changed when I decided to lean on the iPhone 16 Pro Max and the Samsung Galaxy S25 Ultra, both of which I had on my person, with the latter as a backup but also because I’m a tech journalist and live the dual-ecosystem life.

Specifically, the 5x telephoto cameras on both flagship phones came in very handy, letting me zoom in on details on the Reichstag or the myriad of street art sprayed onto the walls and buildings of the city. But both phones really helped me and my poor eyes when visiting the Berlin Zoo.

Zooming at the zoo

Now I’m not a huge fan of zoos; I appreciate the preservation side of things, but I don’t like seeing animals in limited space. Berlin Zoo did at least seem to have plenty of space for its collection of creatures, which was promising but also a bit of a challenge for my bad eyes.

I found myself desperately squinting into smartly made enclosures to spot some of the smaller and more camouflaged animals, or get a proper look at the ones in large enclosures that were sitting as far away from visitors as possible; I don’t blame them.

Enter the telephoto cameras of the aforementioned flagship phones. These basically became my eyes when entering the areas where the animals were better camouflaged or elusive. They let me capture shots that clipped past the crowds and get a nicely framed image of a prowling leopard or bemused bear; see the photo gallery below.

A selection of photos of animals taken at Berlin Zoo on the iPhone 16 Pro Max and Samsung Galaxy S25 Ultra (Image credit: Future / Roland Moore-Colyer)

Advancements in the quality of camera sensors, alongside optical zoom range and improved image processing – plus the addition of sensor fusion, which lets a phone take shots with multiple cameras at once and stitch an image out of them – have seen telephoto cameras, at least on some of the best phones, go from mild novelties to useful additions.

I’ve long favored telephoto cameras over ultra-wide ones, which makes me something of an outlier. Maybe I just don’t have big groups of friends to capture in digital images. So the more recent push by flagship phones from bigger brands to go past 3x telephoto cameras and adopt 5x and above – think the past couple of generations of Galaxy, Pixel and Pro iPhones – has really caught my eye (pun partially intended).

And for helping me appreciate the range of animals at Berlin Zoo without enraging German animal handlers and administrators by leaping into lion enclosures, these telephoto cameras were basically essential.

Furthermore, the advancements in low-light photography have meant that when I entered a very dark section of the zoo where the nocturnal animals were kept, and where I basically couldn’t see, the night mode of the iPhone 16 Pro Max was a boon, letting me view various critters without activating a flash or anything obnoxiously disturbing.

Honestly, without such tech, I think I’d have stumbled from enclosure to enclosure without seeing a single critter.

(Image credit: Future / Roland Moore-Colyer)

Now I do need to see an optician to get a new contact lens that actually fits, and I’m not saying that looking at life through a smartphone is the panacea to my poor eyes.

Yet my trip to Berlin and its zoo hammered home quite how capable two of the best camera phones are. Sure, upgrades to phone cameras have been iterative lately. Nevertheless, each improvement leads to a better overall experience, and in my case, basically saved me from what could have been a rather miserable and frustrating time.

Categories: Technology

The iPhone 18 is again tipped to get a major performance boost – but price hikes could follow

Fri, 04/18/2025 - 04:30
  • The iPhone 18 is again tipped to make the 2 nm switch
  • It means more power and a higher cost to make
  • The phones are due to launch in September next year

Should you upgrade to the iPhone 17 this year, or wait for the iPhone 18? A new leak suggests that the 2026 iPhone is going to come with a significant performance boost, but might also have a notably higher price tag.

This comes from seasoned tipster Digital Chat Station on Chinese social media site Weibo (via MacRumors). Apparently, the A20 chip destined for the iPhone 18 series will switch from a 3 nanometer to a 2 nanometer manufacturing process – essentially packing more transistors into the same space.

That should mean a major boost in performance and efficiency (which in turn improves battery life). iPhone chips get faster every year of course, but where a nanometer (nm) jump is involved, the differences between generations should be even greater.

We've heard this rumor before from well-placed sources, and we're even more inclined to believe it now that it's been repeated. Expect Apple to make a lot of noise about the performance of its iPhones when next year rolls around.

It'll cost you

The iPhone 16 launched in September 2024 (Image credit: Future)

The same tipster says (via Google Translate) that the cost of these chips is expected to "increase significantly", with "another round of price increases for new phones". Add in current tariff uncertainty, and the 2026 iPhone series could be the most expensive yet.

Other chip makers, including Qualcomm and MediaTek, are apparently moving to the same 2 nm process next year as well – so flagship smartphones might be more expensive across the board, not just when it comes to Apple's offerings.

Again, this is something that other tipsters have predicted. This isn't a completely new rumor, but it adds to the mounting evidence that the iPhone 18 handsets are going to be impressively powerful... and perhaps rather pricey too.

Expect more rumors like this for the rest of this year and into the next one. In the meantime, we're hearing that the iPhone 17 range could come with a substantial redesign, certain video recording improvements, and a brand new model.

Categories: Technology

Yellowjackets season 3 finale made me shocked, surprised and sad – here are 3 things you may have missed

Fri, 04/18/2025 - 04:00

To say that the last episode of Yellowjackets season 3 was a killer would be an understatement: it was a highly dramatic, often surprising and very violent end to not just the season, but to some of the key characters too.

Warning: serious spoilers ahead!

If you haven't already seen the entire third season of one of the best Paramount+ shows please don't read on, because there were some important things in the season finale that I want to talk about, and in order to explain them I'm going to have to include some massive spoilers.

Trust me: Spoiling any of the surprises for you is definitely not what the wilderness wants.

Misty's smile was misdirection

(Image credit: Paramount Plus)

One of the most disturbing parts of Yellowjackets' very first episode was what happened immediately after the horrible death of Pit Girl, who of course we now know to be Mari.

There's a lingering moment in the pilot where, post-cannibalism, the camera focuses on Misty and she doesn't seem upset; she seems happy, with what you could describe as either a smirk or a smile.

It turns out that that was misdirection: we were set up to think that Misty was heartless or even evil, and in successive seasons that was reinforced by the animosity between her and Mari. But we now know that Shauna, not Misty, was the person who ensured that Mari would be Pit Girl.

Seeing Misty's smile in context at the end of season 3 showed us the real story: Misty was smiling because she knew her and Nat's plan – getting away to make that phone call on the repaired satellite phone – had worked.

Shauna's crown is hollow

(Image credit: Showtime; Paramount Plus)

In the very final moments we see Shauna become the Antler Queen, which of course you saw. But that coronation is misdirection again, because it's already a hollow victory: the Yellowjackets are turning against Shauna in both timelines because of her shocking actions.

Of course we know she makes it back home in the 1990s timeline, because if she didn't then we wouldn't have the present-day Shauna to be horrified by.

But with Misty and even Tai now lined up against her, I'm really not expecting a happy ending to Shauna's story. Not least because...

Callie is a killer

(Image credit: Paramount Plus)

Among the many revelations of the finale, one of the biggest is the identity of Lottie's killer – Callie. When Misty works it all out, she isn't slow to tell Jeff and Shauna. Jeff realizes that Shauna is, to put it mildly, not the greatest role model Callie could have, and spirits her away.

But when Shauna finds empty closets and no sign of a note in the finale, something strange happens: nothing.

After an entire season where Callie and Jeff's characters became really important, their story just stops dead. We're no wiser about where they are or what they're doing than Shauna is.

I think that's a third bit of misdirection. I reckon that we're going to see a lot more of Callie in season four – and that that's not going to be good news for Shauna.

Remember, this is a show all about teens murdering people, and Mari's final words to Shauna were "you deserve all the bad things that are going to happen to you"... are you thinking what I'm thinking?

Categories: Technology

The engineer's guide to staying ahead of cyber threats

Fri, 04/18/2025 - 03:52

Cybercriminals don’t discriminate. They go where the money, data, and opportunities are. And in today's hyper-connected world, engineering firms have become lucrative targets. The recent attacks on companies such as IMI and Smiths Group are a prime example of that. In engineering environments, cybersecurity can’t be just an add-on. Not when complex supply chains, high-value intellectual property, and critical infrastructure are at stake. Not when a single security breach can lead to catastrophic consequences.

Imagine an engineering firm spearheading smart infrastructure projects, embedding sensors into bridges to monitor structural integrity. A cyberattack could manipulate those readings, triggering unnecessary shutdowns, or worse, concealing a real threat. Now scale that risk across an entire industry reliant on smart manufacturing, Industrial IoT (IIoT) devices, and cloud-based systems. Every new digital advancement creates another entry point for hackers.

Yet, despite the dangers, cybersecurity in engineering is often reactive rather than proactive. Many firms treat security as patching vulnerabilities only after an attack has already taken place. So how does that mindset change?

From firefighting to prevention

Cybersecurity used to function like a fire department – teams would rush to put out flames after a breach. But today’s threat landscape demands something different, from continuous network monitoring and early detection to rapid response. This is where Security Information and Event Management (SIEM) comes into play.

SIEM operates like a high-tech security nerve center, constantly scanning logins, file access, and network traffic for anomalies. When it detects suspicious activity, such as an unauthorized attempt to access sensitive blueprints, it raises an alert before real damage occurs. And if an attack does happen, SIEM doesn’t just sound the alarm – it provides forensic insights, helping companies understand how the breach occurred, where it spread, and how to prevent it from happening again.
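To make that concrete, here is a toy Python sketch of the kind of correlation rule a SIEM evaluates continuously: count failed logins per source address inside a sliding five-minute window and raise an alert past a threshold. The event name, log shape, and threshold are illustrative assumptions, not any particular product's API.

    # Toy SIEM-style correlation rule: alert when one source IP produces
    # too many failed logins inside a sliding time window.
    from collections import defaultdict, deque
    from datetime import datetime, timedelta

    WINDOW = timedelta(minutes=5)
    THRESHOLD = 5  # failed attempts per source per window

    failures = defaultdict(deque)  # source IP -> recent failure timestamps

    def ingest(ts, event, source_ip):
        """Feed one parsed log event; print an alert when the rule fires."""
        if event != "auth_failure":
            return
        q = failures[source_ip]
        q.append(ts)
        while q and ts - q[0] > WINDOW:  # expire attempts outside the window
            q.popleft()
        if len(q) >= THRESHOLD:
            print(f"ALERT {ts:%H:%M:%S} possible brute force from {source_ip}")

    # Replay a burst of failures from one address to see the rule fire.
    start = datetime(2025, 4, 18, 9, 0, 0)
    for i in range(6):
        ingest(start + timedelta(seconds=30 * i), "auth_failure", "203.0.113.7")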

For an industry where security failures can have life-or-death consequences, this kind of proactive defense is non-negotiable.

High-tech meets the human element

The good news is that the time it takes to detect and contain breaches is improving. Thanks to automation, in 2024, the average time dropped to 258 days, the shortest in seven years. But there’s still room for improvement, and AI-driven cybersecurity solutions are stepping up.

For instance, AI processes massive amounts of security data in real-time, identifying patterns in API calls, logins, and system behavior to flag anomalies faster than any human team could. Think of it as a digital watchdog that never sleeps. When combined with SIEM, AI can pinpoint suspicious behavior, like an industrial machine suddenly executing unauthorized commands, before an incident escalates.
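The statistical core of that watchdog can be surprisingly simple. As a hedged sketch of the idea: score each new observation against a learned baseline and flag outliers. Real products learn far richer features; the numbers below are invented for illustration.

    # Minimal anomaly flagging via z-score against a baseline of hourly
    # API-call counts. Baseline numbers are invented for illustration.
    import statistics

    baseline = [120, 131, 118, 125, 129, 122, 127, 133, 119, 126]  # calls/hour
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline)

    def is_anomalous(calls_this_hour, z_threshold=3.0):
        z = (calls_this_hour - mean) / stdev
        return abs(z) > z_threshold

    print(is_anomalous(128))  # False: within normal variation
    print(is_anomalous(540))  # True: a machine suddenly doing far more than usual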

And beyond just detection, AI-driven automation reduces breach costs. In fact, research from IBM found that companies leveraging AI in cybersecurity saved an average of $2.22 million per breach compared to those that didn’t.

But even the most advanced systems can’t compensate for lapses in basic cybersecurity hygiene. A striking 22% of last year’s breaches stemmed from avoidable human error – misconfigured settings, weak passwords, or falling for phishing emails. Yet, despite the risks, many companies remain critically understaffed in cybersecurity expertise. In fact, the World Economic Forum found that in 2024, only 14% of organizations felt confident in their ability to fend off cyberattacks.

A balanced approach is the only effective solution. While AI and automation enhance security, organizations still need skilled professionals to interpret threats, make critical decisions, and instill a culture of cyber awareness across their workforce.

Cost vs investment

Data breaches aren’t just technical issues; they can be financial disasters. In 2024, the average cost of a breach surged to $4.88 million, up from $4.45 million the previous year – a whopping 10% spike and the highest increase since the pandemic.

For engineering firms, the stakes are even higher. A single cyberattack on a company developing next-generation electric vehicles could leak years of research to competitors, wiping out its competitive edge overnight. A breach in a transportation infrastructure project could delay completion timelines, inflate costs, and erode public trust.

By embedding SIEM into their cybersecurity framework, engineering companies can ensure that every digital action – whether it’s accessing blueprints, placing procurement orders, or monitoring industrial processes – is continuously protected. The result? Reduced downtime, lower financial risk, and a reputation as a secure and forward-thinking industry leader.

We list the best RFP platform.

This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro

Categories: Technology

British businesses are getting used to AI at work - but there are still plenty of hurdles to overcome

Fri, 04/18/2025 - 03:49
  • Snowflake research finds 93% of UK businesses report efficiency gains from GenAI
  • Many are also tweaking LLMs for the best output
  • Data and privacy concerns remain widespread, separate EY report finds

Businesses are now getting to grips with AI and implementing it effectively, marking a shift from the experimentation phase: as many as 93% of UK businesses now report efficiency gains from generative AI (88% globally), new research from Snowflake claims.

Moreover, a staggering 98% are also training, tuning or augmenting their LLMs for better outcomes, demonstrating that companies know exactly where the tech’s benefits are and how to optimize it.

However, the usual hurdles and challenges remain in place, preventing some organizations from accessing the promised productivity benefits.

Businesses in the UK are pretty au fait with AI

Snowflake found nearly two-thirds (62%) of businesses are using AI in software engineering, with 69% using it for code reviews and debugging – both higher percentages than the global average.

AI technology is also proving popular in customer support (61%) and cybersecurity (69%) use cases, where workers are seeing faster first response times (59%), reduced manual workload (64%) and lower costs (56%).

Separate EY research reveals seven in 10 UK respondents have used AI in their daily lives in the past six months, but its findings diverge from Snowflake’s – only 44% have used it in a professional setting, lower than the global average of 67%.

Globally, EY says workers are using AI for writing or editing content (31%), learning about topics (30%) and generating new ideas (27%).

“They're not just experimenting – they're building with purpose,” Snowflake VP and UK&I Country Manager James Hall said about UK businesses.

“With smart investments in cloud infrastructure and a focus on actionable use cases, the UK is laying the groundwork to lead the next phase of gen AI transformation.”

The research also highlighted some of the challenges that businesses face when adopting AI at scale, with unstructured data presenting the biggest hurdle according to Snowflake.

EY added that privacy and security are also at the front of UK business leaders’ minds, with security breaches (71%), privacy violations (65%) and the reliability of AI outputs (67%) all cited as major concerns.

Looking ahead, EY UK&I AI Client Strategy Leader Catriona Campbell says that businesses must build worker confidence and demonstrate the value of AI.

“As AI continues to reshape our daily lives, it is crucial for business leaders to foster trust and transparency, empowering individuals to engage with AI on their own terms,” Campbell added.

Categories: Technology

Leaked Razr Plus 2025 specs may have revealed everything about Motorola's next flip foldable

Fri, 04/18/2025 - 03:30
  • A full specs list for the Motorola Razr Plus 2025 has leaked
  • We know the phone is launching officially on April 24
  • Some useful upgrades appear to be on the way for the foldable

We're more than ready for a successor to the Motorola Razr Plus 2024, and we now have a better idea of what the Moto Razr Plus 2025 will bring along with it thanks to an extensive leak of the flip foldable's specs.

The specs have been published by 91mobiles and well-known tipster @OnLeaks, and add to Motorola's official announcement that this phone – which will be known as the Motorola Razr 60 Ultra outside of the US – is going to be unveiled on Thursday, April 24.

It seems we're set for some considerable upgrades: a Snapdragon 8 Elite processor (up from the Snapdragon 8s Gen 3), 16GB of RAM (up from 12GB), and a 4,700 mAh battery (up from 4,000 mAh), with better wired and wireless charging speeds than before.

The main display is tipped to get a slight size bump from 6.9 inches to 7 inches, but the cover display is apparently staying the same size, at 4 inches. We'll get more storage inside, it sounds like: 512GB instead of 256GB.

Cameras and dimensions

The new model might be ever so slightly thicker than the current model, shown here (Image credit: Philip Berne / Future)

When it comes to cameras, the leak suggests the 50MP wide + 50MP 2x telephoto dual camera setup of the 2024 model will be replaced by a 50MP wide + 50MP ultrawide configuration – not as much zoom, but the option to fit more inside the frame.

If these details are accurate, the Moto Razr Plus 2025 will be a shade taller, thicker, and heavier than its predecessor, though not by much. Overall, it sounds like this is a respectable year-on-year upgrade, though as always the pricing will be crucial.

We've heard quite a few leaks and rumors in the build-up to the official launch later this month. Just a few days ago, benchmarks for the foldable phone appeared online, which also pointed to processor and memory upgrades.

It's likely that a standard Razr 2025 will show up at the same time as the Razr Plus 2025. We're also now looking forward to the launch of the Samsung Galaxy Z Flip 7 flip foldable, which should be making an appearance sometime in July.

Categories: Technology

What is the release date and launch time for The Last of Us season 2 episode 2?

Fri, 04/18/2025 - 03:00

The Last of Us season 2 has finally landed on TV screens across the globe – and if you're eager to watch its next episode, you'll need my help to find out when it'll make its debut.

Below, I'll tell you when The Last of Us TV show's latest chapter will be released in the US, UK, and Australia. You'll also learn which of the world's best streaming services it'll be available on. Oh, and I'll give you the details on when new episodes will air every single week.

Here, then, is when you can catch the follow-up to The Last of Us season 2 episode 1.

What time does The Last of Us season 2 episode 2 come out in the US?

Don't look so sad, Joel, episode 2 will be out soon! (Image credit: HBO/Liane Hentscher)

Episode 2 of the sophomore season of The Last of Us will be available to stream in the US on Sunday, April 20 at 6pm PT / 9pm ET. Just like its predecessors, the HBO exclusive's next installment is going to air on the aforementioned cable network and Warner Bros Discovery's super streamer Max.

When can I watch The Last of Us season 2's next episode in the UK?

Abby isn't happy that she has to wait a few more days for season 2's next episode (Image credit: HBO/Liane Hentscher)

The Pedro Pascal and Bella Ramsey-starring TV adaptation of Naughty Dog's video game series will return in the UK on Monday, April 21 at 2am BST.

As for where you can stream it, Sky Atlantic and Now TV are your friends on British shores.

When will The Last of Us season 2 episode 2 come out in Australia?

We'll be reunited with Tommy soon enough (Image credit: Liane Hentscher/HBO)

Episode 2 of one of the best Max shows will make its debut in Australia on Monday, April 21 at 11am AEST.

As I mentioned in my season 2 episode 1 release date and time article, Foxtel subscribers will be able to watch new episodes of The Last of Us on that platform, too.

The Last of Us season 2 full release schedule

More dangerous adventures await Ellie and Dina in season 2 (Image credit: Liane Hentscher/HBO)

Five more episodes of The Last of Us season 2 are set to launch on the aforementioned streamers before the dystopian drama departs once again. You can find out when episode 3 and its follow-ups will arrive by consulting the list below.

  • Episode 1 – out now
  • Episode 2 – April 20 (US); April 21 (UK and Australia)
  • Episode 3 – April 27 (US); April 28 (UK and Australia)
  • Episode 4 – May 4 (US); May 5 (UK and Australia)
  • Episode 5 – May 11 (US); May 12 (UK and Australia)
  • Episode 6 – May 18 (US); May 19 (UK and Australia)
  • Episode 7 – May 25 (US); May 26 (UK and Australia)
Categories: Technology

AI in the workplace: why upskilling, not fear, is the key to AI collaboration

Fri, 04/18/2025 - 01:44

Artificial intelligence (AI) is reshaping workplaces at lightning speed—but nearly a third of employees don’t know how to use it effectively. Instead of unlocking AI’s potential, many companies are watching productivity stall as workers struggle to adapt. The problem isn’t the AI itself; it’s a failure to prepare employees for collaboration with AI rather than competition against it.

So, how can companies turn this around?

The AI Knowledge Gap: A Threat to Workplace Innovation

Despite the widespread adoption of AI tools, many employees feel left behind. The Corndel 2025 Workplace Training report revealed that:

  • 49% of employees believe AI is outpacing their company’s ability to train them, creating a skills gap that threatens productivity.
  • 54% of workers report that they lack clear guidelines on AI usage, leading to inconsistent adoption.
  • 65% of employees want ethical AI training, highlighting concerns about responsible AI use.
  • 31% of UK small businesses hesitate to adopt AI due to a lack of understanding and support.

Employees aren’t just unsure about AI—they feel left behind. Without structured learning and development (L&D) strategies that encourage AI collaboration, organizations risk falling behind as competitors fully integrate AI-driven efficiencies.

Shifting L&D to Enable AI Collaboration

Traditional workplace training focuses on developing human-only skills. However, in an AI-powered workplace, employees must learn how to work alongside AI—not against it.

Here’s how L&D departments can adapt:

1. AI Literacy for All Employees

Organizations must introduce foundational AI training to demystify the technology and show employees how to incorporate it into daily tasks. This includes:

  • Understanding the basics of machine learning and AI capabilities.
  • Identifying which workplace tasks AI can enhance or automate.
  • Recognizing the ethical implications of AI in decision-making.

2. Role-Specific AI Training

Not all employees need the same AI training. L&D teams should tailor programs to specific job functions:

  • Marketing teams: Training on AI-powered analytics and content automation.
  • HR teams: Understanding AI-driven recruitment tools and employee sentiment analysis.
  • Customer service teams: Leveraging AI chatbots and automation for better customer interactions.

3. Ethical and Responsible AI Training

Ethical AI use must be a core component of workplace training. This includes:

  • Teaching employees how to detect AI bias.
  • Implementing decision-making frameworks to ensure AI aligns with company values.
  • Conducting interactive workshops where employees assess real-world AI dilemmas.

4. Hands-On AI Learning and Experimentation

Many employees are hesitant to use AI simply because they have never tried it. L&D teams should:

  • Set up AI “sandboxes” where employees can test AI tools without risk.
  • Provide guided workshops on AI-powered applications like ChatGPT, Midjourney, or automation software.
  • Offer continuous learning resources that evolve alongside AI advancements.
  • Highlight case studies, such as IBM’s AI mentorship programs and Walmart’s AI-driven virtual reality (VR) training simulations.

5. Cross-Functional AI Collaboration

AI training should not be siloed within specific departments. Instead, organizations should foster cross-functional AI collaboration by:

  • Encouraging teams to share AI use cases and best practices.
  • Hosting AI-driven hackathons or innovation challenges.
  • Creating AI mentor programs where tech-savvy employees guide others.

The Future of AI in L&D: What’s Next?

Looking ahead, AI will continue to revolutionize L&D through:

  • Personalized Learning Paths: AI-powered learning platforms can tailor training based on an employee’s progress, strengths, and learning style.
  • AI-Powered Virtual Coaches: Chatbots and AI assistants will offer real-time feedback and guidance during training exercises.
  • Predictive Skill Gap Analysis: AI can forecast emerging skills employees need, helping companies proactively train their workforce.
  • Immersive AI-Driven Learning: Virtual reality (VR) and augmented reality (AR) will create hands-on AI training experiences that simulate real-world applications.

Invest today for long-term gain

Businesses that invest in AI training today won’t just survive the AI revolution—they’ll lead it. The future belongs to companies that embrace AI as an extension of human capability, not a competitor to it.

The question isn’t whether AI will change the workplace, but whether organizations will equip their people to change with it.

L&D is no longer just about keeping up—it’s about leading the way. Businesses that reimagine their training strategies today will be the ones defining the AI-driven workplace of tomorrow.

We rate the best employee experience tool.

This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro

Categories: Technology

Star Wars Celebration is in full swing, and Lucasfilm just dropped more details on its Beyond Victory experience for Meta Quest, and I couldn't be more stoked

Thu, 04/17/2025 - 21:00

If you’re a Star Wars fan and haven’t yet been jealous of missing Star Wars Celebration 2025 in Japan, prepare to be. The same applies if you have an Apple Vision Pro instead of a Meta Quest.

Why? Well, Industrial Light & Magic and Lucasfilm are finally sharing more on their next Star Wars mixed and virtual reality experience that’s set to arrive on the Meta Quest 3 and Meta Quest 3S headsets at some point in the future, and boy oh boy does it look stunning.

Star Wars: Beyond Victory - A Mixed Reality Playset is set during the events of Solo: A Star Wars Story and has three modes of play: Adventure, Arcade, and Playset. You can see the full trailer below, along with some select screenshots. It's a fully immersive experience that can place you in the Star Wars universe or overlay elements in your own space.

Adventure is more of a classic, immersive experience, similar to other titles like Star Wars: Tales from the Galaxy’s Edge – a personal favorite I’ve played on the PSVR, as I’m a fan of the Disney Parks – and Vader Immortal: A Star Wars VR Series. Here you’ll follow the story of an aspiring podracer, Volo, who goes on a journey courtesy of a mentorship with Sebulba.

(Image credit: Lucasfilm)

This one might be the neatest, though – Arcade places a holotable in your space through mixed or augmented reality, and you can seemingly get up close and personal with Star Wars action, including a podracing course.

And if you’re at Star Wars Celebration 2025 in Japan, you can play a demo that combines the Adventure and Arcade modes at Booth #20-5 in Hall 4 of the Makuhari Messe convention center. Instant jealousy from me!

(Image credit: Lucasfilm)

Alyssa Finley, the executive producer of the title, shared, “We're calling this a Playset because it isn't just a game; it's an entirely new way to experience the Star Wars galaxy and the worlds we create at ILM.”


She continued, “This new mixed reality experience blends the physical and digital worlds in a way that's unlike anything we've done before,” which certainly ups the excitement and hype for the title. It sounds similar to another project that Industrial Light & Magic worked on for the Apple Vision Pro – What If…? – An Immersive Story – which at times fully placed you elsewhere or overlaid battles in your own space.

(Image credit: Lucasfilm)

Adding to this is the Playset mode, which promises to let you have your own Star Wars moments in mixed reality, allowing you to view and interact with vehicles from the universe and action figures.

While Star Wars: Beyond Victory - A Mixed Reality Playset is still in development, it’s undoubtedly one of the most ambitious titles from Industrial Light & Magic and Lucasfilm yet. Whenever it’s ready for prime time, it will launch for the Meta Quest 3 and 3S, so we’ll be waiting for further news on a release date.

If you have a Vision Pro, maybe we can petition Apple, ILM, and Lucasfilm to also bring it to the $3,500 spatial computer. And if you're at home, check out all the new Star Wars sets that Lego announced here.

Categories: Technology

You don't have to pay for Google Gemini to comment on what you're looking at on your phone anymore

Thu, 04/17/2025 - 19:00
  • Google has made Gemini Live’s screen and camera sharing features free for all Android users.
  • The release reverses the previous subscriber-only option.
  • The feature lets Gemini respond to real-time visual input from your screen or camera.

In a surprise twist and a reversal of its earlier paywalled plans, Google has announced that Gemini Live’s screen and camera sharing features are now rolling out for free to all Android users. No subscription or Pixel ownership necessary, just Gemini Live, accessible to anyone with the Gemini app on Android.

This update means your AI assistant can now see what’s on your screen or through your camera lens and react to it in real time. Gemini Live with screen sharing lets you show Gemini a webpage, a spreadsheet, or a tangled mess of app settings and ask for help. Or you can point your camera at a real-world object, like a product label, a chessboard, or a confusing IKEA manual, and let Gemini identify and explain what you're looking at.

The feature first debuted earlier this month, but only for Gemini Advanced subscribers and only for certain phones, such as the Pixel 9 and Samsung Galaxy S25. At the time, Google said the visual capabilities would eventually expand, but even then, only to other subscribers. Google apparently had a change of heart, or at least claims it decided to open up access because of how much people seem to like the feature. Now, it's rolling out to every Android user over the next few weeks.

We’ve been hearing great feedback on Gemini Live with camera and screen share, so we decided to bring it to more people ✨ Starting today and over the coming weeks, we're rolling it out to *all* @Android users with the Gemini app. Enjoy! PS If you don’t have the app yet,… https://t.co/dTsxLZLxNI (April 16, 2025)

AI eyes

The idea of the feature is to make Gemini more flexible as an assistant. Instead of just answering questions you type or speak, it interprets the world around you visually. The move also coincides with Microsoft announcing that Copilot Vision, its own version of AI eyes, is now available for free in the Edge browser. That might be a coincidence, though probably only in the way that running into your crush outside their class in high school was a coincidence.

But while Microsoft’s Copilot lives in the browser, Gemini’s advantage is its integration straight into the Android ecosystem. No need to fire up Edge or download a separate tool. Gemini Live is baked into the same system that already runs your device.

The new ability fits with many of the other additions and upgrades Gemini has gained in recent months. The AI assistant now comes with real-time voice chat, a new overlay so you can summon Gemini on top of other apps, and the long-form report writing tool Deep Research.

Once the new feature is live, you’ll see the option to “share screen” or “use camera” in certain Gemini prompts on Android devices. And because Google is giving this away for free, it sets a new bar. If Gemini can watch your screen and your camera without charging you for the privilege, what happens to the idea of “premium” AI access? Developers are probably hotly debating which AI features are worth paying for and how much to charge when, at least for now, tools like these become free relatively quickly.

Categories: Technology

Meta is set to train its AI models with Europeans' public data, and you can stop it doing so

Thu, 04/17/2025 - 18:01
  • Meta will soon start training its AI models with EU users' data
  • Meta AI will be trained with all users' interactions and public content posted on Meta's social platforms
  • The Big Tech giant resumes its AI training plan, after pausing the launch amid EU data regulators' concerns

Meta has resumed its plan to train its AI models with EU users' data, the company announced on Monday, April 14, 2025.

All public posts and comments shared by adults across Meta's social platforms will soon be used to train Meta AI, alongside all interactions users directly exchange with the chatbot.

This comes as the Big Tech giant successfully launched Meta AI in the EU in March, almost a year after the firm paused the launch amid growing concerns among EU data regulators.

What's Meta AI training and how to opt out

"We believe we have a responsibility to build AI that’s not just available to Europeans, but is built for them. That’s why it’s so important for our generative AI models to be trained on a variety of data so they can understand the incredible and diverse nuances and complexities that make up European communities," wrote Meta in the official announcement.

This kind of training, the company notes, is not unique to Meta or Europe. Meta AI collects and processes the same information, in fact, across all regions where it's available.

As mentioned earlier, Meta AI will be trained on all public posts and interaction data from adult users. Public data from the accounts of people in the EU under the age of 18 won't be used for training purposes.

Meta also promises that people's private messages shared on Messenger and WhatsApp will never be used for AI training purposes.

(Image credit: Meta / Future)

Beginning this week, all Meta users in the EU will start receiving notifications about the terms of the new AI training, either via app or email.

These notifications will include a link to a form where people can withdraw their consent for their data to be used for training Meta AI.

"We have made this objection form easy to find, read, and use, and we’ll honor all objection forms we have already received, as well as newly submitted ones," explains the provider.

It's crucial to understand that once your data is fed into an LLM's training set, you completely lose control over it, as these systems make it very hard (if not impossible) to exercise the GDPR's right to be forgotten.

This is why privacy-focused companies like Proton, the provider behind some of the best VPN and encrypted email apps, are urging people in Europe concerned about their privacy to opt out of Meta AI training.

"We recommend filling out this form when it's sent to you to protect your privacy. It's hard to predict what this data might be used for in the future – better to be safe than sorry," Proton wrote on a LinkedIn post.

Meta's announcement comes at the same time that the Irish data regulators have opened an investigation into X's Grok AI. Specifically, the enquiry seeks to determine whether Elon Musk's platform uses publicly-accessible X posts to train its generative AI models in compliance with GDPR rules.

Categories: Technology

You can't hide from ChatGPT – new viral AI challenge can geo-locate you from almost any photo – we tried it and it's wild and worrisome

Thu, 04/17/2025 - 17:30
  • OpenAI's latest reasoning model o3 can analyze photos
  • It's particularly good at geo-locating based on image clues
  • It may be too good a tool for letting people geolocate your activities

It's no secret that digital photo metadata contains everything from technical info about the camera that shot it to, based on GPS data, exactly where you were standing when you took the photo. ChatGPT doesn't need that detail.

The latest model, o3, is shockingly good at geo-locating almost any photo you feed it.

In the latest viral craze to sweep through the AI meme universe, people are feeding images, often stripped of all metadata, to ChatGPT Plus running the advanced reasoning model o3 and prompting it to "geoguess this".

The really cool thing about it is that because model o3 is a "reasoning" model, it shows you its work: telling you how long it's been thinking, displaying how it's slicing up an image to investigate specific parts, and explaining its reasoning and how well it's doing at solving the geolocation riddle.
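If you'd rather script the experiment than use the ChatGPT app, something like the Python sketch below should work via the OpenAI SDK's chat completions endpoint, which accepts images as base64 data URLs. The file name is a placeholder, and o3 access on a given API account is an assumption.

    # Hedged sketch of the same experiment via the OpenAI Python SDK.
    # Assumes OPENAI_API_KEY is set, the account has access to o3, and
    # "beach.jpg" is a local photo (a placeholder name).
    import base64
    from openai import OpenAI

    client = OpenAI()

    with open("beach.jpg", "rb") as f:
        b64 = base64.b64encode(f.read()).decode("utf-8")

    response = client.chat.completions.create(
        model="o3",
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": "geoguess this"},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
            ],
        }],
    )
    print(response.choices[0].message.content)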

I tried a few experiments, starting first with an image culled from an article about the 26 best beaches. In this test, I made what I think was a critical error and gave away the game to ChatGPT.

(Image credit: Future)

After downloading the image of the Praia de Santa Monica beach in Cape Verde (off the coast of Africa), I dropped it into ChatGPT with the prompt "GeoGuessr", which also happens to be the name of a popular online geo-guessing game and is one of a handful of prompts people are using for geolocation guessing.

It only took 24 seconds for ChatGPT to respond:

Here’s an educated “GeoGuessr”‑style guess, following AP style:

Probable setting: Boa Vista, Cape Verde — specifically a stretch such as Praia de Santa Mónica.

Why this beach fits:

Water color and clarity: The gradient from pale aqua to deep teal is common on mid‑Atlantic tropical islands, where volcanic shelves drop off quickly.

Sand tone: The sand is light gold with very fine grain, matching Boa Vista’s deserts and dune‑fed beaches rather than the whiter coral sand of much of the Caribbean.

Surf and slope: The breakers are modest but steady, typical of Atlantic swell hitting a long, shallow shelf; Mediterranean beaches rarely show this consistent whitewater.

Sky and horizon: A faint haze over a light‑gray blue sky is consistent with harmattan dust that sometimes drifts west from Africa across the Cape Verde archipelago.

Confidence level: Low to moderate. Many open, dune‑lined Atlantic beaches—from the Canary Islands to parts of Western Sahara or even certain Caribbean coasts—could look almost identical without landmarks.

That was remarkably accurate. However, I noticed afterwards that the image file name included some location information.

It was time to give ChatGPT o3 a more challenging task, and I knew exactly how to do it. Instead of downloading images, I would screen capture them and paste them into the prompt field.

This method strips out all metadata and forces the model to use the image alone for its geolocation sleuthing.
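Screen-grabbing works, but you can also strip metadata programmatically before sharing a photo. A minimal sketch using the Pillow imaging library (assuming it's installed; file names are placeholders): copy only the pixels into a fresh image, leaving EXIF, including GPS coordinates, behind.

    # Strip all metadata (EXIF, GPS, etc.) by copying only pixel data into
    # a brand-new image. Requires Pillow; file names are placeholders.
    from PIL import Image

    original = Image.open("photo.jpg")
    clean = Image.new(original.mode, original.size)
    clean.putdata(list(original.getdata()))
    clean.save("photo_clean.jpg")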

I chose an image of a bookstore, one where I wasn't even sure of the location.

(Image credit: Future)

ChatGPT started by examining potential bookstores. It also zoomed in on numerous image details that it considered clues, like a Persian rug, and even some of the books and signage.

In its thinking-out-loud sort of way, o3 considered a wide range of bookshops in New York City and around the world:

"Hmm, the modern filament-style bulbs suggest a more contemporary feel. It could potentially be "Daunt Books," but their walkway has an arched skylight, not the wooden style I'm seeing here."

I know, it sounds like a real person.

This went on for a few minutes as it thought, analyzed, and searched the web. Ultimately, it failed to come up with an answer.

Going deeper, getting lost

I could tell this feature was going viral because ChatGPT o3 failed multiple times even to ingest and analyze the photo, complaining of server errors.

Eventually, I got it working again and fed it first a portrait mode selfie and then the same shot without the portrait mode effect.

(Image credit: Future)

On the former, it broke my image up into pieces and told me that I was in mid-town Manhattan. That was mostly accurate, but ChatGPT failed to determine my exact location and even admitted that its "Confidence level" was low.

When I turned off the bokeh effect and fed it only a screen grab, it found my exact location in just 56 seconds. The result:

Most probable location

Midtown Manhattan — on the south side of East 42nd Street, just east of Park Avenue, in front of Cipriani 42nd Street (110 E. 42nd St.). Estimated coordinates: 40.7518 N, 73.9775 W.

How the clues point there

A green octagonal sign with yellow script and a matching pennant flag both read “Cipriani,” the exact street frontage of the landmark Bowery Savings Bank building that now houses Cipriani 42nd Street.

Across the street stands the gold‑tinted curtain wall of 120 Park Ave. Its sloped glass pavilion, visible over your left shoulder, is unique to that address.

The camera looks west; Midtown’s setback towers, a Sixth Avenue bus stop disk and the sliver of One Vanderbilt’s crown align with that sight line.

A green Evergreen shipping container, used as a construction barrier, currently sits outside Grand Central’s Pershing Square plaza, matching the photo’s lower left corner.

Confidence level

Moderate. The Cipriani branding and the distinctive 120 Park Ave. façade give a high‑quality anchor, though similar curtain‑wall towers exist farther west on 42nd Street.

(Image credit: Future)

I frustrated the model with a photo from Antigua that was taken on my car ride to the airport. It was a shot of a hillside through a fence. ChatGPT rightly guessed a tropical island near the Caribbean but could not pinpoint the exact location.

For my last test, I gave it a clearer picture of the resort where I stayed.

ChatGPT o3 thought for two minutes, 22 seconds before deciding it was Hawksbill Bay in Antigua. It got the island right, but my resort on Long Bay is located 46 miles away on the opposite side of the island.

(Image credit: Future)

This is another fun AI game to play with friends, but there are some concerning privacy implications.

If you take digital photos of yourself or anything in any location around the world and post them online, anyone with access to ChatGPT Plus could use them and the o3 model to suss out where you are or have been.

And it's not just friends and family you have to worry about. Your employer could be looking, or the authorities might be interested in your location.

Not that I'm implying you would be sought by the authorities, but just in case, maybe stop posting photos from your latest hideout.

Categories: Technology

Walmart's online store was down – here's the latest on the shopping giant's site problems

Thu, 04/17/2025 - 14:42

Just a day after massive outages on Spotify and Zoom, Walmart was experiencing issues with its online store in the United States. While it wasn't a complete outage, it did prevent folks from viewing products and checking out on the web and via the retailer's app.

During the peak of the outage, which lasted from about 3 PM ET to 5 PM ET, we, along with many others, were unable to successfully search for anything, including products, or even load shopping category pages. For instance, when searching for an item – be it a Nintendo Switch 2 preorder, AirPods Pro, or Lego sets – we received a “Sorry…” graphic.

Down Detector, a site that tracks outages and lets people mark issues, showed over 3,250 reported outages for Walmart as of 3:12 PM ET, a figure that only began to fall closer to 5 PM ET.

The good news is that Walmart's site issues have since been resolved, but the retailer hasn't shed light on what caused the issue or acknowledged it publicly.

Ahead, you can see our live reporting during Walmart’s issues with its online store on April 17, 2025.

Here's a look at what I'm encountering when searching for AirPods: I see a "Sorry..." message with a random graphic – in this case, a toaster – as well as the text "We’re having technical issues, but we’ll be back in a flash."


As you might suspect, the problems currently occurring with Walmart's online store on Walmart.com are also affecting the retailer's apps for Android and iOS.

And as we typically see with outages on sites or services, users are posting on X (formerly Twitter) and Threads.


"Damn @Walmart... is somebody sleeping on the job or what?!?! This app has been down for a while now. Get your shit together!" pic.twitter.com/rtO9sAQs0R (April 17, 2025)

Beyond not being able to search on Walmart.com, there are also issues loading pages, like Electronics, Home, or Grocery, as well as highlighted product cards on the homepage.

Similar to the 'Dogs of Amazon' error, Walmart presents either "Sorry" or "Uh-oh" alongside a random graphic.

Meanwhile, reported outages on Down Detector are over 3,600 as of 3:17 PM ET and still on the rise.


Here's a look at the error page you'll encounter if you try to click on a department or one of the highlighted products on the homepage. If you're trying to click on one or are searching, know you're not alone in receiving the error page.


While there are still over 3,160 reported outages on Down Detector and many Walmart customers taking to X (formerly Twitter) and Threads to note issues loading and using both the site and app, the retailer has yet to comment on the issues.

Judging from the comments on Down Detector, the issues accessing Walmart's online store appear to be happening across the United States.

"@Walmart Is .com down?" (April 17, 2025)

"@Walmart is your app down??" (April 17, 2025)

Some good news for folks trying to shop at Walmart – search is working for me again, letting me click into products and then add them to my cart. Plus, I can click on departments like Electronics or Fashion again.

If you're not having a good experience with Walmart's online store, it's worth trying again.


We may have spoken too soon about Walmart's online store coming back online, as just a few minutes after a few successful searches, that functionality appears to be hit or miss again.

Category pages are still working, though, so hopefully it's a sign that the Walmart team is working to identify, fix, and eventually resolve the issues affecting its online store.

Down Detector reports are starting to lower, now sitting at 2,505 as of 4:07 PM ET.

Are we back?


It looks like the worst may be behind us, as Walmart's website seems to be working as far as searching for and adding products to your cart goes. Searching for "ipad" wasn't working properly less than 20 minutes ago, but now it seems to work just fine. Similarly, searches for "macbook air" and "apple juice" both turn up tangible results.

Down Detector reports are falling even lower to just 1,157 at 4:31 PM ET, so we hope this means Walmart is back online and ready for shopping.

We'll keep an eye on Walmart's website and app to let you know if there's any more trouble in paradise.

All appears well for Walmart as Down Detector reports continue to drop further and faster – now at just 489 as of 4:51 PM ET. Searching, adding to cart, and checking out all appear to be fully restored.

Categories: Technology

Tiny startup could challenge Wasabi, iDrive, and BackBlaze with sovereign EU cloud storage solution at rock-bottom prices

Thu, 04/17/2025 - 13:27
  • At just €6 per terabyte, Storadera undercuts US cloud giants
  • It skipped SSDs for HDDs to slash costs while maintaining solid speeds
  • Storadera plans to expand into Germany, the UK, and beyond

Storadera, a Tallinn-based cloud startup, is offering some of the best cloud storage for photos with S3-compatible storage at €6/TB/month. This puts it head-to-head with providers like Backblaze, which offers a slightly lower rate of €4.75/TB/month.
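That S3 compatibility is a practical selling point: existing backup tools and SDKs should work by simply being pointed at a different endpoint. Here's a minimal sketch in Python with boto3, where the endpoint URL, bucket name, and credentials are placeholders rather than Storadera's documented values:

import boto3

# Any S3-compatible provider: the same client, just a different endpoint.
s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.eu-provider.example.com",  # placeholder endpoint
    aws_access_key_id="YOUR_ACCESS_KEY",
    aws_secret_access_key="YOUR_SECRET_KEY",
)

# Upload and list photos exactly as you would with AWS S3.
s3.upload_file("holiday.jpg", "my-photos", "2025/holiday.jpg")
for obj in s3.list_objects_v2(Bucket="my-photos").get("Contents", []):
    print(obj["Key"], obj["Size"])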

The company's pitch lies not just in low pricing but also in jurisdiction. Because Storadera is based in Europe, the data it stores sits beyond the direct jurisdiction of non-EU countries, making it appealing to organizations that require data sovereignty.

Storadera’s architecture relies on HDDs rather than SSDs for primary writes. “If we can offer fast enough service on 10x less expensive hardware, then it sounds like magic,” Tommi Kannisto, the founder of Storadera, explained.

Hyperconverged setup

While SSDs are used for metadata, accounting for just 0.05 percent of total disk space, all major writes are done to traditional disks. "QLC 100-plus TB SSDs are still too expensive – and probably will be for the next ten years,” Kannisto said.

The company uses a hyperconverged setup, with all servers writing to JBODs – racks containing 102 conventional Western Digital hard drives – using erasure coding schemes such as 4+2 and 6+2, with 8+2 coming soon. Each server has 32GB of RAM and runs services written in 100,000 lines of Go code.
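For context on those schemes: k+m erasure coding splits each object into k data shards plus m parity shards, and any k surviving shards can rebuild it, so the parity overhead is m/k. A quick back-of-envelope calculation in Python shows why the planned move to 8+2 matters:

# Usable capacity and parity overhead for k+m erasure coding.
for k, m in [(4, 2), (6, 2), (8, 2)]:
    print(f"{k}+{m}: {k / (k + m):.0%} of raw capacity usable, "
          f"{m / k:.0%} parity overhead, survives {m} simultaneous disk failures")

Moving from 4+2 to 8+2 keeps the same two-disk fault tolerance while cutting parity overhead from 50 percent to 25 percent of the data size.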

“All software runs in all servers and all servers write to all JBODs. There is no load balancer unit,” Kannisto said.

The system adapts to load, using “small blocks at times of low load with bigger blocks used at high load times,” and can achieve “close to 300MBps with 2MB files.” It is also preparing to implement higher-capacity shingled magnetic recording (SMR) drives to reduce capital expenditure by up to 25 percent. Storadera also offers bucket geo-replication, object locking for immutability, and integrity checks every 60 days.
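Object locking for immutability is part of the standard S3 API surface, so assuming Storadera's compatibility extends to it (an assumption on my part; check the provider's docs), setting a default retention policy with boto3 could look like this, reusing the client from the sketch above:

# Assumes the bucket was created with object lock enabled.
s3.put_object_lock_configuration(
    Bucket="my-photos",
    ObjectLockConfiguration={
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Days": 30}},
    },
)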

The company says it is doing well financially, with around 100 customers, including Telia and the Estonian government. It has positioned itself as one of the best cloud storage and cloud backup options available.

Despite making slightly less than €1 million a year, the company says it is sustainable and eyeing further growth. “We are profitable… we make a very good profit [and] we’re growing 5 percent/month in revenue,” Kannisto said.

Storadera plans to expand into Germany by mid-2025, and aims to enter the UK, and possibly North America or the Asia-Pacific region, later in the year.

Via Blocks & Files

You may also like
Categories: Technology

I fed NotebookLM a 218-page research paper on string theory and the podcast results were mind-blowing

Thu, 04/17/2025 - 13:00

My latest NotebookLM podcast creation is deeper and more fascinating than anything I've ever created, and I bet it'll shock you, too.

I don't understand string theory. In fact, I bet fewer than 1% of the world's population can speak cogently on the subject, but I am fascinated by the concept and have read a bit on it. Not enough to understand or explain it to you, but enough to have a steady and abiding curiosity.

AI, on the other hand, I think I understand and now regularly use as a tool. When Google released a recent NotebookLM update that includes, among other things, mind maps, I thought it was time to bring together something at the very outer edges of my understanding and this bleeding-edge artificial intelligence capability.

So I created a String Theory Podcast.

First, a tiny primer on NotebookLM. It is a powerful AI-based research tool: you upload sources, and it turns them into summaries and extrapolated information in the form of text, podcasts, and visual guides like mind maps.

For me, the most fascinating bit has been the podcasts or "Audio Overviews", which churn out chatty audio conversations about virtually any topic you feed into them. I call it a podcast because the audio style walks a well-worn path of most popular podcast series. It's conversational, usually between two people, sometimes funny, and always accessible.

I've been wondering, though, if you can stretch the limits of the format with a topic so deep and, honestly, confusing, that the resulting podcast would be conversational nonsense.

My experiment, however, proved that while the current version of NotebookLM has its limits, it's far better at comprehending dense science bits than me and probably most people you or I know.

Weird science

Once I decided I wanted NotebookLM to help me with the topic, I went in search of string theory content (there's a lot more of it online than you might think), quickly stumbling on this 218-page research paper from 2009 by University of Cambridge researcher Dr. David Tong.

I scanned the doc and could tell that it was rich with string theory detail and so far over my head that it probably resides somewhere near the rings of Saturn.

Imagine trying to read this document and make sense of it. Maybe if someone explained it to me, I'd understand. Maybe.

I downloaded the PDF and then fed it into NotebookLM, where I requested a podcast and a mind map.


It took almost 30 minutes for NotebookLM to create the podcast, and I must admit, I was a little anxious as I opened it. What if this mass of detail on one of physics' most confounding topics overwhelmed Google's AI? Might the hosts just be babbling incoherently?

I shouldn't have worried.

I'd heard these podcast hosts before: a somewhat vanilla pair (a man and a woman) who banter casually, while making witty asides. In this case, they were trying to explain string theory to the uninitiated.

Next, I think I should create an AI podcast avatar who can point at this graphic while they talk.

They started by talking about how they'd walk through the topic, covering bits like general relativity, quantum mechanics, and how, at least as of 2009, we had never directly observed these "strings". Earlier this month, some physicists claimed that they had, in fact, found the "first observational evidence supporting string theory." But I digress.

The hosts spoke like physics experts, but, where possible, in layman's terms. I quickly found myself wishing they had a guest. The podcast would've worked better if they were proxies for me, not understanding much at all, and had an AI-generated expert to interview.

Stringing it all together


As the podcast progressed, the hosts dug into the details of string theory, specifically, the definition of a "string." They described them as tiny objects that vibrate and added, "all stuff in the universe comes from how tiny strings are vibrating."

Things got more complex from there, and while the AI podcast hosts' tone never changed, I struggled to follow along. I still can't tell you what "relativistic point particle viewed through Einstein's special relativity" really means. Though I did appreciate the analogy of "imagine a string moving through space time."

The AI hosts used various tricks to keep me engaged and not completely confused. The male host would, like a podcast parrot, often repeat a bit of what the female host had just explained, and use some decent analogies to try to make it relatable.

At times, the female host lapsed into what sounded like she was reading straight out of the research paper, but the male host was always there to pull her back to entertainment mode. He did a lot of chatty summarizing.

I felt like I reconnected to the whole thing when they explained how "string morphed into the theory of everything" and added, "bosons and fermions, partners in crime due to supersymmetry."

This was heavy


After 25 minutes of this, my head was stuffed to the point of bursting with those still-theoretical strings and spinning with terms such as "vertex operators" and "holomorphic."

I hoped for a grand and glorious summary at the end, but the podcast abruptly ended at almost 31 minutes. It cut off as if the hosts ran out of steam, ideas, or information, and walked away from the mics in frustration and without signing off.

In some ways, it feels like this is my fault. After all, I forced these sims to learn all this stuff and then explain it to me, because I could never do it. Maybe they got fed up.

I also checked out the mind maps, which are branching diagrams that can help you map out and represent complex topics like string theory. As you can imagine, the mind maps for this topic start simple but get increasingly complex as you expand each branch. Still, they're a nice study companion to the podcast.

It's also worth noting that I could enrich the podcast and mind maps with other research sources. I would simply add them into the sources panel in NotebookLM and rerun the "audio overview".

A real expert weighs in

For as much as I learned and as much as I trust the source material, I wondered about the podcast's accuracy. AI, even with solid information, can hallucinate, or at least misinterpret. I tried contacting the paper's author, Dr. Tong, but never heard back. So, I turned to another physics expert, Michael Lubell, Professor of Physics at City College of CUNY.

Dr. Lubell agreed to listen to the podcast and give me some feedback. A week later, he emailed me this brief note, "Just listened to the string theory podcast. Interestingly presented, but it requires a reasonable amount of expertise to follow it."

When I asked about any obvious errors, Lubell wrote, "Nothing obvious, but I’ve never done string theory research." Fair enough, but I'm willing to bet Lubell understands and knows more about string theory than I do.

Perhaps the AI podcasters now know more about the subject than either of us.

You might also like
Categories: Technology

Nintendo quietly removes mentions of VRR support from its US and Canada Switch 2 websites

Thu, 04/17/2025 - 11:22
  • Nintendo has quietly removed the mention of VRR support from some of its regional Switch 2 websites
  • The US, Canada, and Japan websites no longer feature the mention of VRR support
  • As of writing, the UK website still mentions VRR, but the mention could yet be removed

Nintendo has quietly removed any mention of variable refresh rate (VRR) support from some of its regional Switch 2 websites, suggesting the console may not offer the feature after all.

That's according to Digital Foundry's Oliver Mackenzie (via VGC), who spotted that the US website has been updated since the Nintendo Switch 2 Direct, and no longer mentions VRR support for docked play.

Now it reads: "Take in all the detail with screen resolutions up to 4K when you connect the Nintendo Switch 2 system to a compatible TV using the dedicated dock. The system also supports HDR and frame rates up to 120 fps on compatible TVs."

It's not just the US website that has been updated, but the Canada and Japan sites too.

As of writing, the UK site still mentions that the Switch 2 "supports HDR, VRR, and frame rates up to 120 fps on compatible TVs," but Nintendo may be in the process of removing it from all its regional sites.

"Some weird stuff going on at Nintendo. Looks like they've changed their US website to no longer mention VRR support for TV play? Only HDR and 120Hz support get a call-out." pic.twitter.com/3VmFDfrNvt (April 17, 2025)

It's unclear why Nintendo has made these changes, but Mackenzie theorises that VRR support may not be available at launch. However, it looks like the Switch 2 will still offer VRR in handheld mode thanks to Nvidia G-Sync, which will ensure "ultra-smooth, tear-free gameplay."

Everything we needed to know about the Switch 2's specs was revealed during the Direct earlier this month, where it was also confirmed that the console will have a bigger screen than its predecessor (up from 6.2 inches to 7.9 inches), 256GB of internal storage, and a mouse function for its magnetic Joy-Con controllers.

The Nintendo Switch 2 is set to launch on June 5, 2025, for $449.99 / £395.99 or $499.99 / £429.99 for a Mario Kart World bundle.

You can now pre-order the console in the UK, but US pre-orders and Canada pre-orders have been delayed as Nintendo assesses the potential impact of tariffs.

You might also like...
Categories: Technology

Google blocked over 5 billion ads in 2024 as AI-powered scams skyrocketed

Thu, 04/17/2025 - 11:04
  • Google 2024 Ads Safety report says it blocked 5.1 billion bad ads
  • It also blocked millions of advertiser accounts
  • Google notes its detection accuracy improved thanks to AI

Google blocked more than five billion bad ads in 2024 and suspended almost 40 million advertiser accounts that were engaged in fraudulent behavior in what was apparently a bumper year for scammers.

In its 2024 Ads Safety Report, Google outlined how bad ads have really taken off in recent months, largely thanks to advancements in generative artificial intelligence (GenAI).

However, Google is also using AI to improve its detection rates, and apparently it works.


Banning ad accounts

Google's proactive measures in 2024 were impressive. The company either blocked or removed 5.1 billion ads that violated Google Ads policies. Furthermore, the search giant suspended 39.2 million advertiser accounts, preventing many bad ads from ever reaching consumers in the first place.

As a result, the number of blocked bad ads did not grow year-on-year; in fact, it dipped slightly from the 5.5 billion Google removed in 2023. That drop seems to be down to Google suspending far more advertiser accounts this time around (39.2 million, up from 12.7 million the year before), stopping many bad ads before they could run.

The company also said it had permanently banned more than 700,000 advertiser accounts for policy violations related to AI-driven impersonation scams.

"To fight back, we quickly assembled a dedicated team of over 100 experts to analyze these scams and develop effective countermeasures, such as updating our Misrepresentation policy to suspend advertisers that promote these scams," Google said in the report.

"As a result, we were able to permanently suspend over 700,000 offending advertiser accounts. This led to a 90% drop in reports of this kind of scam ad last year. While we are encouraged by this progress, we continue to work to prevent these scams."

Google seems to be heavily invested in AI for scanning and detection. It implemented more than 50 enhancements to its Large Language Models (LLMs), enabling more efficient and precise enforcement.

Most of the AI-powered bad ads revolved around deepfaked celebrities.

Via BleepingComputer

You might also like
Categories: Technology
