Imagine a digital version of yourself that moves faster than your fingers ever could - an AI-powered agent that knows your preferences, anticipates your needs, and acts on your behalf. This isn't just an assistant responding to prompts; it makes decisions. It scans options, compares prices, filters noise, and completes purchases in the digital world, all while you go about your day in the real world. This is the future so many AI companies are building toward: agentic AI.
Brands, platforms, and intermediaries will deploy their own AI tools and agents to prioritize products, target offers, and close deals, creating a vast new digital ecosystem where machines talk to machines, and humans hover just outside the loop. Recent reports that OpenAI will integrate a checkout system into ChatGPT offer a glimpse into this future: purchases could soon be completed seamlessly within the platform with no need for consumers to visit a separate site.
AI agents becoming autonomous
As AI agents become more capable and autonomous, they will redefine how consumers discover products, make decisions, and interact with brands daily.
This raises a critical question: when your AI agent is buying for you, who’s responsible for the decision? Who do we hold accountable when something goes wrong? And how do we ensure that human needs, preferences, and feedback from the real world still carry weight in the digital world?
Right now, the operations of most AI agents are opaque. They don’t disclose how a decision was made or whether commercial incentives were involved. If your agent never surfaces a certain product, you may never even know it was an option. If a decision is biased, flawed, or misleading, there’s often no clear path for recourse. Surveys already show that a lack of transparency is eroding trust; a YouGov survey found 54% of Americans don't trust AI to make unbiased decisions.
The issue of reliability
Another consideration is hallucination - an instance when AI systems produce incorrect or entirely fabricated information. In the context of AI-powered customer assistants, these hallucinations can have serious consequences. An agent might give a confidently incorrect answer, recommend a non-existent business, or suggest an option that is inappropriate or misleading.
If an AI assistant makes a critical mistake, such as booking a user into the wrong airport or misrepresenting key features of a product, that user's trust in the system is likely to collapse. Trust once broken is difficult to rebuild. Unfortunately, this risk is very real without ongoing monitoring and access to the latest data. As one analyst put it, the adage still holds: “garbage in, garbage out.” If an AI system is not properly maintained, regularly updated, and carefully guided, hallucinations and inaccuracies will inevitably creep in.
In higher-stakes applications, for example, financial services, healthcare, or travel, additional safeguards are often necessary. These could include human-in-the-loop verification steps, limitations on autonomous actions, or tiered levels of trust depending on task sensitivity. Ultimately, sustaining user trust in AI requires transparency. The system must prove itself to be reliable across repeated interactions. One high-profile or critical failure can set adoption back significantly and damage confidence not just in the tool, but in the brand behind it.
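The tiered safeguards described above can be sketched in code. This is a minimal, hypothetical illustration: the tier names, dollar limits, and escalation rule are all assumptions for the sake of the example, not any real product's policy.

```python
# Minimal sketch of tiered autonomy: an agent action runs automatically only
# when its sensitivity tier and value fall under a per-tier limit; otherwise
# it is queued for human-in-the-loop review. Tiers and limits are illustrative
# assumptions, not a real product's policy.

from dataclasses import dataclass

# Maximum amount (in dollars) the agent may commit per tier without a human.
AUTO_APPROVE_LIMITS = {
    "low": 100.0,      # e.g. routine reorders
    "medium": 25.0,    # e.g. purchases from new merchants
    "high": 0.0,       # e.g. financial, healthcare, travel: always reviewed
}

@dataclass
class AgentAction:
    description: str
    tier: str
    amount: float

def requires_human_review(action: AgentAction) -> bool:
    """Return True when the action must be escalated to a human."""
    # Unknown tiers fall back to 0.0, i.e. always escalate.
    limit = AUTO_APPROVE_LIMITS.get(action.tier, 0.0)
    return action.amount > limit

print(requires_human_review(AgentAction("reorder filters", "low", 30.0)))   # False
print(requires_human_review(AgentAction("book a flight", "high", 250.0)))   # True
```

The key design choice is that anything unrecognized defaults to escalation, so new or misclassified task types fail safe rather than fail silent.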
We've seen this before
We've seen this pattern before with algorithmic systems like search engines or social media feeds that drifted away from transparency in pursuit of efficiency. Now, we're repeating that cycle, but the stakes are higher. We're not just shaping what people see, we're shaping what they do, what they buy, and what they trust.
There's another layer of complexity: AI systems are increasingly generating the very content that other agents rely on to make decisions. Reviews, summaries, product descriptions - all rewritten, condensed, or created by large language models trained on scraped data. How do we distinguish actual human sentiment from synthetic copycats? If your agent writes a review on your behalf, is that really your voice? Should it be weighted the same as the one you wrote yourself?
These aren’t edge cases; they're fast becoming the new digital reality bleeding into the real world. And they go to the heart of how trust is built and measured online. For years, verified human feedback has helped us understand what's credible. But when AI begins to intermediate that feedback, intentionally or not, the ground starts to shift.
Trust as infrastructure
In a world where agents speak for us, we have to look at trust as infrastructure, not just as a feature. It's the foundation everything else relies on. The challenge is not just about preventing misinformation or bias, but about aligning AI systems with the messy, nuanced reality of human values and experiences.
Agentic AI, done right, can make ecommerce more efficient, more personalized, even more trustworthy. But that outcome isn’t guaranteed. It depends on the integrity of the data, the transparency of the system, and the willingness of developers, platforms, and regulators to hold these new intermediaries to a higher standard.
Rigorous testing
It's important for companies to rigorously test their agents, validate outputs, and apply techniques like human feedback loops to reduce hallucinations and improve reliability over time, especially because most consumers won't scrutinize every AI-generated response.
In many cases, users will take what the agent says at face value, particularly when the interaction feels seamless or authoritative. That makes it even more critical for businesses to anticipate potential errors and build safeguards into the system, ensuring trust is preserved not just by design, but by default.
Review platforms have a vital role to play in supporting this broader trust ecosystem. We have a collective responsibility to ensure that reviews reflect real customer sentiment and are clear, current and credible. Data like this has clear value for AI agents. When systems can draw from verified reviews or know which businesses have established reputations for transparency and responsiveness, they’re better equipped to deliver trustworthy outcomes to users.
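One way an agent could lean on that verified-review signal is to weight confirmed customer feedback more heavily than unverified submissions when scoring a business. The sketch below is purely illustrative; the weights are assumptions, not any platform's actual formula.

```python
# Illustrative sketch: score a business by weighting verified reviews more
# heavily than unverified ones. The 1.0 / 0.25 weights are assumptions for
# the sake of the example, not a real platform's formula.

def trust_weighted_score(reviews, verified_weight=1.0, unverified_weight=0.25):
    """Each review is a (rating, is_verified) pair; ratings run 1-5.

    Returns the weighted average rating, or None if there are no reviews.
    """
    total = 0.0
    weight_sum = 0.0
    for rating, is_verified in reviews:
        w = verified_weight if is_verified else unverified_weight
        total += w * rating
        weight_sum += w
    return total / weight_sum if weight_sum else None

# Two verified reviews (5 and 1) plus one glowing unverified review:
reviews = [(5, True), (5, False), (1, True)]
print(round(trust_weighted_score(reviews), 2))  # 3.22
```

Because the unverified five-star review carries only a quarter of the weight, the score stays close to what the verified customers actually reported, which is the behavior the paragraph above argues for.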
In the end, the question isn’t just who we trust, but how we maintain that trust when decisions are increasingly automated. The answer lies in thoughtful design, relentless transparency, and a deep respect for the human experiences that power the algorithms. Because in a world where AI buys from AI, it’s still humans who are accountable.
This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro
As AI tools become part of everyday life, most people believe they would be better equipped to spot AI-generated scams, but new research reveals a worrying trend: as people get more familiar with AI, they’re more likely to fall for these scams.
New research finds that the generations most confident in detecting an AI-generated scam are the ones most likely to get duped: 30% of Gen Z have been successfully phished, compared to just 12% of Baby Boomers.
Ironically, the same research found that fear of AI-generated scams decreased by 18% year-over-year, with only 61% of people now expressing worry that someone would use AI to defraud them. During the same period, the number of people who admitted to being successfully duped by these scams increased by 62% overall.
A Proliferation of Scams
Traditional scam attempts rely on mass, generic messages hoping to catch a few victims. Someone receives a message from the "lottery" claiming they've won a prize, or from a fake business offering employment. In exchange for their bank account details, the messages promise money in return. Of course, that was never true, and instead the victim lost money.
With AI, scammers are now getting more personalized and specific. A phishing email may no longer be riddled with grammatical errors or sent from an obviously spoofed account. AI also gives scammers more tools at their disposal.
For example, voice cloning allows scammers to replicate the voice of a friend or family member with just a three-second audio clip. In fact, we're starting to see more people swindled out of money because they believe a ransom demand is coming from a family member, when it's actually from a scammer.
The Trust Breakdown
This trend harms both businesses and consumers. If a scammer were to gain access to a customer's account information, they could drain an account of loyalty points or make purchases using a stolen payment method. The consumer would need to go through the hassle of reporting the fraud, while the business would ultimately need to refund those purchases (which can lead to significant losses).
There's also a long-term impact to this trend: AI-generated scams erode trust in brands and platforms. Imagine a customer receiving an email claiming to be from Amazon or Coinbase support, warning that an unauthorized user is trying to gain access to their account and that they should call support immediately to fix the issue. Without obvious red flags, they may not question its legitimacy until it's too late.
A customer who falls for a convincing deepfake scam doesn't just suffer a financial loss; their confidence in the brand is forever tarnished. They either become hyper-cautious or opt to take their business elsewhere, leading to further revenue loss and damaged reputations.
The reality is that everyone pays the price when scams become more convincing, and if companies fail to take steps to establish trust, they wind up in a vicious cycle.
What's Fueling the Confidence Gap?
To address this confidence gap, it's important to understand why the divide exists in the first place. Digital natives have spent years developing an intuitive sense for spotting "obvious" scams — the poorly written emails or suspicious pop-ups offering a free iPod. This exposure creates a dangerous blind spot: when AI-generated scams perfectly mimic legitimate communication, that same intuition fails.
Consider how the brain processes a typical workday. You're juggling emails, Slack messages, and phone calls, relying on split-second pattern recognition to separate signal from noise. A message from "your bank" looks right, feels familiar, and arrives at a plausible time.
The problem compounds when scammers use AI to perfectly replicate not just logos and language, but entire communication ecosystems. They're not just copying Amazon's email template; they're replicating the timing, context, and behavioral patterns that make legitimate messages feel authentic. When a deepfake voice call sounds exactly like a colleague asking for a quick favor, a pattern-matching brain tends to confirm that interaction as normal.
This explains why the most digitally fluent users are paradoxically the most vulnerable. They've trained themselves to navigate digital environments quickly and confidently. But AI-powered scams exploit that very confidence.
What Tech Leaders Should Do Now
For companies, addressing this overconfidence problem requires a multi-pronged approach:
Inform customers without fear-mongering: Help users understand that AI-powered scams are convincing precisely because they're designed to deceive the most confident, tech-savvy people. The goal isn't to make people stop using AI, but rather to help them maintain appropriate skepticism.
Educate them on deepfake scams: Focus on identifying the key signs of a legitimate versus fraudulent message (sent from an unknown number, a message with false urgency, a suspicious link or PDF attached). Show current examples of deepfakes and AI-generated phishing, rather than just talking about traditional fraud awareness.
Keep communication channels transparent: Establish clear, verified communication channels and educate customers about how your company will and won't contact them. The good news is that many providers, including Google, Apple, and WhatsApp, currently offer or will soon offer branded caller ID services.
This means companies can establish a business profile with these apps, adding another layer of verification. That way, when a verified business contacts a customer, their message will clearly show the brand name and a verified badge. Similarly, most brands now authenticate their outbound email to conform with the DMARC delivery standard and qualify for a branded trust mark to show up next to the subject line.
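A DMARC policy of the kind mentioned above is published as a DNS TXT record at `_dmarc.<domain>`, as a string of semicolon-separated `tag=value` pairs. As a hedged sketch of what receivers check, here is a minimal parser for such a record; the actual DNS lookup is out of scope, and the sample record string is a typical example rather than any specific company's policy.

```python
# Sketch: split a DMARC TXT record such as "v=DMARC1; p=reject; rua=..."
# into its tag=value pairs. Fetching the record from DNS is omitted; the
# sample record below is illustrative, not a real domain's policy.

def parse_dmarc(record: str) -> dict:
    """Parse a DMARC record's semicolon-separated tag=value pairs."""
    tags = {}
    for part in record.split(";"):
        part = part.strip()
        if "=" in part:
            # partition splits on the first '=', so values like
            # "mailto:addr@example.com" survive intact.
            key, _, value = part.partition("=")
            tags[key.strip()] = value.strip()
    return tags

record = "v=DMARC1; p=reject; rua=mailto:dmarc-reports@example.com; pct=100"
tags = parse_dmarc(record)
print(tags["p"])  # reject
```

The `p` (policy) tag is the one that matters most for the point above: a domain publishing `p=reject` is telling receiving mail servers to discard messages that fail authentication, which is what stops spoofed "support" emails from ever reaching the customer.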
Invest in knowledge sharing: If one company is dealing with an influx of scam attempts, other companies are likely facing similar problems. Scammers often collaborate to share tactics and vulnerabilities; companies should do the same.
Many companies fight fraud by using technologies that incorporate insight-sharing “consortiums”—business networks where fraud patterns are shared across companies. By being open about current challenges, companies can better understand the risks and implement the proper safeguards to keep their customers safe.
The Strategic Advantage of Getting This Right
The businesses that will thrive in this environment are those that maintain identity trust — that is, the ability to recognize a user or interaction within a digital environment — while effectively combating increasingly sophisticated threats. Fraud prevention is no longer just about protection from losses, it's a critical part of the customer experience. That's because when customers feel safe, they shop confidently.
By tackling users' AI blind spots while maintaining trust, companies gain a competitive edge. While the AI revolution has introduced incredibly capable tools, it's also created unexpected vulnerabilities. Addressing this challenge requires more than just different tools. It demands a fundamental rethinking of how we maintain trust when seeing is no longer enough to believe.
Artificial Intelligence (AI) is one of the most talked-about technologies of our time. It dominates headlines, fuels boardroom ambition, and drives product roadmaps across every industry. From generative AI chatbots to multi-modal systems and autonomous agents, the sheer velocity of advancement is staggering. But while the pace of innovation is accelerating, it has also created a growing disconnect: everyone wants AI, but far fewer know what to actually do with it.
This gap between excitement and effective execution is fast becoming a defining challenge of the AI era. The technology is racing ahead, but organizational readiness is lagging. Many businesses know they need to act but are unclear on how to deploy AI in ways that are safe, strategic, and genuinely transformative.
To bridge this gap, education is critical. And we don't just mean educating developers and data scientists; senior leadership also needs a foundational understanding of AI's capabilities and limitations. They must grasp where it can create value, what it takes to scale safely, and how to prepare the wider organization for what's to come. Without this knowledge, AI risks becoming another overhyped tool that fails to deliver meaningful returns.
Where AI is already delivering value
Despite these challenges, AI is already making a tangible impact in focused, high-value areas. These use cases might not generate the loudest headlines, but they offer a glimpse of what's possible when strategy and execution are aligned.
In customer service, AI is proving to be a powerful support tool. For example, it can generate real-time summaries and recommendations for call center agents, improving both the accuracy and speed of responses. AI-driven sentiment analysis is helping agents better understand customer mood and intent, leading to more empathetic and efficient interactions and a better overall customer experience.
Even more promising is the rise of agentic AI. This technology goes beyond supporting decisions; it can make them. It allows AI systems to reason, troubleshoot, and take action with minimal human input. In practical terms, that means handling common customer queries end-to-end, freeing up human agents for more complex cases.
AI is also boosting operational efficiency. It automates repetitive tasks such as document management, form filling, and data extraction. In sectors like insurance or healthcare, where case management involves large volumes of structured and unstructured data, AI can drastically cut processing times while improving consistency.
These use cases may seem behind the scenes, but they matter. They represent practical, measurable improvements to core operations. They reduce costs, enhance experiences, and give staff more time to focus on higher-value work. That’s real value, not just buzz.
The roadblocks to real impact
But let's not pretend it's all smooth sailing. For every success story, there are countless stalled pilots and unrealized ambitions. So, what's holding businesses back?
First, data sensitivity is a major hurdle, especially in regulated industries like finance and healthcare. Questions about where data is stored, how it’s processed, and who can access it are under constant scrutiny. Compliance isn’t optional, and many AI deployments struggle to meet evolving privacy standards.
Security is another growing concern. As generative models become more sophisticated, so do the risks. Prompt injections, model poisoning, and adversarial attacks are no longer hypothetical; they're real-world threats that demand serious governance.
Technical limitations also play a role. Hallucinations, where AI generates plausible-sounding but incorrect outputs, remain a significant risk. In high-stakes settings like legal advice or medical triage, these errors can be costly or even dangerous. Many models still exhibit cultural or linguistic biases embedded in their training data; this erodes trust and limits wider adoption.
Then there's the infrastructure challenge: training and running large models is resource-intensive, requiring robust compute power, strong data governance, and an architecture capable of scaling. For many organizations, especially smaller ones, the investment can feel out of reach.
All of this contributes to a reality where AI is often deployed in silos or as experiments, rather than integrated at scale. Without a broader strategy and framework, these efforts struggle to drive sustained business value.
Why platform thinking matters
Against this backdrop, we're seeing the emergence of platform-based approaches as a more sustainable model. Rather than building every AI capability from scratch, organizations are turning to purpose-built platforms that are secure, scalable, and designed with sector-specific needs in mind.
These platforms provide a structured environment where AI can be developed, tested, and deployed safely. They offer features like built-in compliance controls, explainability tools, and integration with existing systems. Crucially, they shift the conversation from isolated tools to integrated ecosystems.
That shift matters: it gives teams more confidence to innovate and leaders more visibility into where AI is making an impact. It also helps balance the tension between innovation and governance, a line that's becoming increasingly important to walk.
What comes next: Less hype, more strategy
As AI maturity grows and attention shifts to even more advanced ideas, like artificial general intelligence and fully autonomous agents, businesses must keep their feet on the ground.
The winners won’t be those who rush the fastest, but those who build the most solid foundations.
That means adopting AI not as a silver bullet, but as a strategic asset. The focus should be on embedding AI into core workflows, upskilling teams, and designing governance models that support responsible use. It's about building explainable, auditable systems. It's about connecting AI initiatives to clear business goals and measuring what matters.
To do this well, organizations must invest in cultural readiness as much as technical capability. That includes fostering cross-functional collaboration, engaging stakeholders early, and creating a shared language around AI value. It means setting the right expectations and learning from early missteps. This may not always be flashy, but it’s what drives real progress.
The promise of AI is enormous. But the path to that promise runs through thoughtful, grounded, and strategic implementation. The businesses that get this right will be those that stop chasing the hype and start building what works.
Everyone wants AI. But only those who know what to do with it will unlock its full potential.
- Will arrive in January 2026
- Teaser trailer released in August 2025
- Production began in June 2025
- Main cast set to return
- New recurring characters revealed
- Season 2 will time jump to 10 months ahead
- Hopes for future seasons
The Pitt season 2 is coming in January 2026, only a year after the popular HBO Max show premiered on the streamer. The medical drama saw ER's Noah Wyle as the dynamic Dr. Michael 'Robby' Robinavitch taking charge of an incredibly stressful day at the Pittsburgh Trauma Medical Hospital.
And traumatic it most certainly was, culminating in a rather dramatic finale that, fortunately, viewers won't have to wait too long to see resolved. But, in true hospital fashion, as one intense shift ends, another begins (though with a time jump, which I'll get into more below) as the medical staff begin another day with even more drama. Here's everything we know so far, from release date and confirmed cast to plot synopsis and more.
Full spoilers for The Pitt season 1 to follow.
The Pitt season 2: is there a release date?
The Pitt season 2 release date has been confirmed – and it's January 2026. Revealed by Max CEO Casey Bloys in conversation with Vulture back in March, he said: "The second season will premiere in January of 2026, a year later. This model of more episodes cuts down on the gap between seasons."
With season 1, we were treated to an epic 15 episodes' worth of emergency room drama, and it appears season 2 will follow suit. Bloys added: "What I love about something like The Pitt is, I can get 15 episodes in a year. That's a really great addition to what we're already doing on the platform. And I'd like to do more shows in this model."
After a February 2025 renewal, the show headed into production on season 2 in June amid official news from HBO Max that the series had stayed among the top three of the streamer's most-watched titles globally.
The Pitt season 2 trailer
The Pitt season 2 got its first official teaser trailer in August, and it reveals more high-octane medical drama unfolding in the emergency room as doctors struggle with an overwhelming rush of patients in dire need of help.
But it did make us say: hang on, hasn't The Pitt season 2's first trailer spoiled a major season 1 cliffhanger? In the first five seconds, Dana can be seen back at work, standing behind the desk. That's surprising news considering the season 1 finale saw her seriously questioning whether she could keep doing the job. Still, she's back and I'm not mad about it; quite the opposite.
The Pitt season 2 teaser trailer is also great confirmation for other cast members, alongside Dana, returning for the next installment.
The Pitt season 2 confirmed cast
The main cast will return for The Pitt season 2 (Image credit: HBO Max)
Spoilers follow for The Pitt season 1.
Thanks to the teaser trailer, here's The Pitt season 2 confirmed cast we know so far:
There's one character that won't be returning for The Pitt season 2, and that's Tracey Ifeachor as Dr. Heather Collins, as confirmed by Deadline. While the reasons behind her exit aren't clear, Ifeachor posted on her official Instagram to say: "It was an absolute privilege to play Dr. Heather Collins in such a groundbreaking season and piece."
We also know about some new characters joining The Pitt season 2. Lawrence Robinson will play Brian Hancock, "a sweet, charming and kind-hearted patient who turns a soccer injury into a possible meet-cute with one of the doctors" (as per Deadline).
Sepideh Moafi also joins as a series regular playing an attending physician, as well as Charles Baker, Irene Choi, Laëtitia Hollard and Lucas Iverson in recurring roles, as exclusively revealed by Deadline.
Finally, in another reveal by Deadline, Zack Morris is also joining as Jackson Davis, "a patient brought to the ED after an uncontrollable outburst in the college library."
The Pitt season 2 story speculation
The Pitt season 2 picks up on Langdon's first day back (Image credit: HBO Max)
Full spoilers follow for The Pitt season 1.
For The Pitt season 2, the cast will pick up in the emergency room 10 months after the intense shift that unfolded in season 1.
This time jump was revealed during Deadline's Contenders TV panel in April and it was further explained by the creative team that season 2 will take place over the Fourth of July weekend for another 15 hours and 15 episodes of medical emergencies.
And when it comes to the reason for this time jump, the show's creator R. Scott Gemmill revealed to TVLine that it has a lot to do with Dr. Langdon's recovery.
The season 1 finale saw Dr. Robby tell Langdon that if he wants to return to Pittsburgh Trauma Medical Center, then he has to check himself into a 30-day inpatient rehab. Of course, that's not 10 months. But, recovery isn't linear.
Gemmill said: "Thirty days is probably the minimum he would have to do. You can do 60, 90... and part of [the time jump] is driven by when he can shoot in Pittsburgh."
He added: "Nine, basically 10 months later, gives a lot of room for us to have developed a few stories in the interim and catch up with everyone. And with it being Langdon's first day back, we get to catch up as he catches up with all those people."
And like season 1, the next season will follow the same 15-hour schedule running from 7am to 10pm and all the intense medical situations that can bring in, especially over the Fourth of July weekend.
While the season 1 finale left Dana's return unclear, she's back (Image credit: HBO Max)
The recovery wasn't just for Langdon, though, with Dr. Robby having to address his own mental health issues. Speaking to TVLine in April, Gemmill said: "Getting himself mentally healthy again is part of his journey."
With such stressful jobs, the pressure was unsurprisingly getting to the doctors and none more so than Dana Evans who we last saw packing up her things in the season 1 finale and telling Dr. Robby she was thinking about leaving the ER for good.
Fortunately, we know she didn't commit to this, appearing in the first official teaser trailer very much still part of the team (despite a stern look pointed towards Dr. Robby).
And with new characters joining for season 2, there's plenty of new faces – both doctors and patients – that I'm sure will bring their own personal dramas (and medical cases) to The Pitt.
What they won't be doing in the 15 hours that unfold on our screens, though, is ever leaving the ER. Gemmill explained: "The reality is that we don't really leave our set. We don't leave the ER. We did a few things at the very end where we saw people going home and stuff.
"But beyond that, I don't expect us to go anywhere beyond the hospital and the ambulance bay until the last episodes of next season, and maybe we'll see a couple other parts of the hospital."
The real-life medical landscape is reflected in season 2 (Image credit: HBO Max)
And although they're not stepping outside of the hospital, it doesn't mean they can't address real-time and real-life concerns that affect medical care in the US.
Speaking to Variety, executive producer John Wells explained that this includes President Trump's 'Big Beautiful Bill', outlining a 12% cut to Medicaid spending: "The Medicaid changes are going to have a significant impact, and you don't have to take a political position to discuss what the impact is actually going to be."
Gemmill added: "We take out platform very seriously. I think one of the things when you can reach 10 million people – and this was true back in the day on 'ER' as well – is with that amount of people listening, you have to be responsible for what you put out there."
Will The Pitt return for season 3?
Could The Pitt become an annual drop for HBO Max? (Image credit: Max)
With The Pitt season 2 landing on HBO Max in January, there's no news yet of a season 3... and beyond. That doesn't necessarily mean we'll have to wait until January for news of more, though, given season 2 was treated to an early renewal.
But, for now, I don't have much to report other than Gemmill joking with Deadline that: "If there’s a season 12, we’ll do a musical. Right now, we kind of want to stick to what was working for us, but we’re still learning. It’s a process."
While season 12 sounds crazy to talk about now (and a musical even crazier), ER did run for 15 seasons. So, maybe it's not all that wild of an idea after all.
For more Max-focused coverage, read our guides on the best Max shows, best Max movies, The Last of Us season 2, and Peacemaker season 2.
Japan is preparing its next national supercomputer, FugakuNEXT, through a collaboration between Fujitsu, Nvidia and Riken.
The system is planned for operation around 2030 and aims to blend simulation and artificial intelligence into one tightly integrated platform.
For the first time in a Japanese flagship project, GPUs will be used as accelerators. Nvidia will (unsurprisingly) design the GPU infrastructure, Fujitsu will handle CPUs and system integration, and Riken will be involved in the software and algorithm work.
Feynman GPU
The result is expected to be an "AI-HPC platform" designed for science, industry, and AI-driven discovery.
The performance targets for the supercomputer are certainly ambitious. FugakuNEXT is designed to deliver more than 600 EFLOPS of FP8 AI performance, which would make it the most powerful AI supercomputer yet announced.
The system is also expected to achieve up to a hundredfold increase in application performance compared with Fugaku, while staying within roughly the same 40MW power budget.
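Some back-of-envelope arithmetic puts those two figures together. This sketch simply divides the quoted FP8 target by the quoted power budget; it is a rough calculation from the article's own numbers, not an official efficiency specification.

```python
# Rough arithmetic from the figures quoted above: 600 EFLOPS of FP8
# performance within roughly the same 40 MW power budget as Fugaku.
target_eflops = 600   # quoted FP8 AI performance target
power_mw = 40         # quoted power budget, roughly Fugaku's

flops = target_eflops * 1e18   # EFLOPS -> FLOPS
watts = power_mw * 1e6         # MW -> W
flops_per_watt = flops / watts
print(f"{flops_per_watt:.1e} FP8 FLOPS per watt")  # 1.5e+13
```

That works out to about 15 TFLOPS per watt at FP8, which gives a sense of why the project can't rely on hardware alone and is also betting on the algorithmic techniques described below.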
Nvidia’s long-term roadmap points to the Feynman GPU architecture (named after theoretical physicist Richard Feynman) arriving near 2028, so it could well play a role in powering FugakuNEXT.
Fujitsu is developing a successor to its MONAKA CPU for the project, tentatively named MONAKA-X, with more cores, extended SIMD capabilities, and Arm’s matrix computation engine for AI inference.
Coupled with Nvidia’s accelerators, the system is expected to run large simulations alongside demanding AI workloads.
Hardware alone won’t deliver the target gains, so the project will also lean on innovations such as surrogate models, mixed-precision arithmetic, and physics-informed neural networks to accelerate performance while preserving accuracy.
Makoto Gonokami, president of Riken, said, “It is a great honor for Riken to collaborate with Fujitsu and Nvidia in advancing the development of FugakuNEXT. Since ancient times, humankind has built civilizations and advanced societies through the science of computing. Today, the emergence of AI, advanced semiconductors, and quantum computers is bringing about a discontinuous transformation in computational science.”
Ian Buck, vice president at Nvidia, added, “FugakuNEXT will deliver zettascale performance with application speeds nearly 100 times faster – within the same energy footprint as its predecessor – accelerating research, boosting industrial competitiveness, and driving progress for people in Japan and around the world.”
(Image credit: Riken)
Apple has confirmed that its next event is taking place on September 9, and all signs point to a big update for the Apple Watch line.
We believe, based on several months of leaks and rumors, that Apple will debut not one, not two, but three new Apple Watches. Currently, just three Apple Watches are available to buy from the company: the Apple Watch Series 10, Apple Watch Ultra 2 and the Apple Watch SE (2022), with older models consigned to third-party sellers.
Rumors of a new trio of watches suggest the entire line is getting an upgrade. Here are the three new devices we believe will be announced at the Cupertino 'Awe dropping' event, and you can bet we'll be hard at work updating our guides to the best Apple Watches and best smartwatches.
Whether you've been paying attention to the leaks and rumors, or you're just catching up now, here's everything you need to know about the Apple Watches we reckon are coming on September 9.
1. Apple Watch Ultra 3
(Image credit: Future)
The Apple Watch Ultra 2 got a small upgrade last year in a new titanium black colorway, and it remains the gold standard for heart rate accuracy and versatility, having recently been tested against a chest strap monitor.
Upgrades that we’re expecting from the Apple Watch Ultra 3 include satellite connectivity, as in the upcoming Google Pixel Watch 4. This feature would allow users to communicate from the watch without a phone in case of emergency, even if they're not using a data plan to connect to the internet.
If you do happen to be using a data plan with your Apple Watch, we’re expecting 5G connectivity for a serious boost to its navigation, communication and music streaming capabilities.
We’re also hoping for a new, more powerful chipset, and possibly high-blood-pressure detection. The Apple Watch Ultra and Ultra 2 are virtually identical in terms of their design, and we’re not expecting any radical changes to the chassis and protruding Action button.
2. Apple Watch Series 11
(Image credit: Future)
The Apple Watch Series 11 is the next mainline iteration of the Apple Watch.
Last year, the Series 10 got a big wraparound screen redesign, a slimmer body and a new chipset, so we’re not expecting any big design changes here, especially as there will likely be two other watches getting most of the attention. Another new chip is likely.
We know that, alongside the rest of the range, it’s going to be getting all the new software smarts from watchOS 26, including the AI-powered Workout Buddy feature and redesigned Workout app. It’s possible we’ll get a much-anticipated blood-pressure detection feature, but from a hardware perspective, the Series 11 is likely to be similar to the 10.
3. Apple Watch SE 3
(Image credit: Future)
Every couple of years, Apple combines elements from some of its older models with a cheaper-to-make chassis to give us a new entry in the SE series.
We labelled the SE 2 the best cheap Apple Watch you can buy, and the SE 3 is likely to provide the same great experience in a more affordable package. It’s unlikely the SE 3 will get the Series 10’s wraparound screen; instead, it will probably use an older Series 9-style display to make use of cheaper, discontinued parts.
Expect modern, AI-powered watchOS 26 software inside a model designed to cost around $250 / £250 / AU$500.
For the past week, I’ve been testing the new Oakley Meta smart glasses – and while I love running in them, my fiancée (and running partner) wishes I’d stop wearing them.
In case you’ve missed it, the ongoing collaboration between Meta and EssilorLuxottica has spawned seven new smart glasses – one limited-edition design and six regular – that incorporate useful technology into Oakley’s HSTN specs.
Just like you’ll find in Meta’s smart Ray-Bans, these Oakleys boast a 12MP camera for first-person shots, open ear speakers for music, and a Meta AI assistant that can answer your questions and perform helpful tasks (provided they’re connected to your phone and the internet).
That’s not to say they’re identical, however. Some hardware has been upgraded slightly – the camera records higher-quality video, and battery life is said to be longer – but the design is the biggest change.
(Image credit: Oakley / Meta)
And this is why I love running in the HSTN smart glasses. The open ear speakers are handy for keeping me energized with music while I push myself, and I’ve found the HSTN frame is much better at hugging my face than the Wayfarers I have – meaning it doesn’t jostle or slip as much on my jogs.
They also boast Oakley’s 24K PRIZM lenses. These golden-tinted sunglasses aren’t just polarized to reduce harsh rays; they also offer improved contrast to your vision, which I’ve found in the 24K’s case makes it easier to spot terrain changes and grooves before I roll an ankle.
The Ruby PRIZM lenses are meant to be an even better running companion, though I will admit that a downside of these picks is that they’re only suitable for bright conditions. For general use, I stand by my belief that transition lenses are superior as they can morph between clear and shaded based on the sun’s intensity.
(Image credit: Oakley / Meta)
So why, with all these successes, does my partner despise them? Well, she doesn’t think they suit me. It’s not the design itself, but the color of the frames, which, for the pair I’m testing, are white. Given my very pale complexion, she jokes that it’s hard to tell where the glasses end and my head begins.
I’m not sure I agree. I think the Oakley HSTN look rad, but if you agree these smart glasses aren’t a good fashion fit for me, then I’m not annoyed – I think this is actually a good thing.
That’s because while they are a gadget, they’re also a clothing accessory. While you can find designs and colors that suit everyone, distinct and personal fashion choices require designs like these HSTNs that maybe don’t work for everyone, but really suit the people they do work for.
This is one of the big reasons I’m excited to see Android XR partnering with brands like Gentle Monster and other fashion-first brands – as I’m hopeful we’ll continue to see inventive designs get the ‘smart’ treatment.
(Image credit: Oakley / Meta)
Based on my experience, I can definitely recommend the Oakley smart glasses just as easily as I recommended the Meta Ray-Ban glasses before them.
My only advice would be to go and try them on first. Firstly, because the different PRIZM lenses will suit different sports from a practical perspective, but also to make sure you like how you look in them.
It’s not something we’re used to thinking about with tech, but wearables aren’t just redefining tech, they’re redefining fashion in equal measure – and you don’t want this accessory to wind up like those other fashion faux pas you regret buying.
SK Hynix has confirmed it has started mass production of its new 321-layer QLC NAND flash memory, making it the first in the industry to cross the 300-layer threshold with QLC technology.
The company completed development of the chip earlier in 2025, and says it plans to launch commercial products in the first half of 2026, once customer validation is finished.
The chip features 2Tb capacity per die, double that of previous solutions.
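To put that die capacity in context (my own arithmetic, not figures from SK Hynix): a 2Tb (terabit) die holds 256GB, so common drive capacities map onto die counts like this:

```python
# A 2Tb (terabit) die is 2 / 8 = 0.25 TB, i.e. 256 GB of raw capacity.
TB_PER_DIE = 2 / 8

for drive_tb in (1, 2, 4, 8):
    dies = int(drive_tb / TB_PER_DIE)
    print(f"{drive_tb} TB drive ≈ {dies} dies")
```

An 8TB drive would need 32 of these dies – which happens to match the 32-die-per-package stacking SK Hynix mentions for its ultra-capacity plans.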
Power efficiency improvements
To address the slower performance that often comes with higher-density QLC NAND, SK Hynix expanded the number of planes within the chip from four to six.
This change allows for greater parallel processing, which improves read and write speeds while keeping power use low.
The company says data transfer speeds are twice those of its prior QLC offerings, with write speeds up to 56% faster and read performance improved by 18%.
Power efficiency during write operations is also up by more than 23%, something that will matter in large data environments where energy costs are closely monitored.
Although the long-term aim is to use the technology in enterprise SSDs for data centers and ultra-high-capacity storage aimed at AI servers, the company says PC SSDs will be the first products to ship with the 321-layer chips.
That means consumers may see benefits before enterprise customers, although the initial focus will not necessarily be on low-cost, high-capacity drives.
"With the start of mass production, we have significantly strengthened our high-capacity product portfolio and secured cost competitiveness," said Jeong Woopyo, Head of NAND Development.
"We will make a major leap forward as a full-stack AI memory provider, in line with the explosive growth in AI demand and high-performance requirements in the data center market."
SK Hynix also plans to use its stacking technology, which allows up to 32 dies in one package, in future ultra-capacity solutions. It expects this to be especially important in AI-driven storage markets where both density and efficiency are key selling points.
While the arrival of this NAND marks a big step toward larger, more affordable storage, it is unlikely that cheap 8TB consumer SSDs will arrive any time soon, due to high manufacturing costs, packaging complexity, and validation cycles.
Beats is no stranger to teasing forthcoming hardware – think earbuds or speakers – on its social channels, and earlier today, the Apple-owned brand did just that. The team is teasing the Powerbeats Fit, which looks to be the next generation of the popular Beats Fit Pro earbuds, and simultaneously a rebranding.
Shown off in a fresh hue of orange on athletes Saquon Barkley, Justin Jefferson, and Jayden Daniels, these earbuds are promised to “Fit Every Move.” That’s likely a nod to the in-ear design of these, which use a wing tip to fit snugly and securely in the ear, unlike the Powerbeats Pro 2, which wrap around the ear.
The Beats Fit Pro first launched way back in November 2021 and has remained on the market since, picking up several new colors, including a partnership with Kim Kardashian. These earbuds still fill a nice spot within the Beats lineup, but compared to the Powerbeats Pro 2, there are certainly a few upgrades I hope we’ll see when the successor drops as the Powerbeats Fit.
The teaser concludes with a promised launch for Fall 2025, which could occur in mid-to-late September, October, or November of this year. With that in mind, here are three things we hope the Powerbeats Fit will offer.
The arrival of heart-rate tracking
(Image credit: Beats)
Considering the Powerbeats Pro 2 introduced the heart-rate tracking function, and the AirPods Pro 3 are rumored to offer the capability as well, I hope we see the feature arrive in the smaller, lighter form factor of the Powerbeats Fit.
Yes, the actual tracking is a bit limited, and if you’re in the Apple ecosystem with an Apple Watch, that wearable will override the earbuds. Even so, the earbuds would offer tracking ability when both are in your ears for select workout apps, as well as on Android via the companion Beats app. It would bolster the feature set here a bit as we’d assume the Powerbeats Fit will feature active noise cancellation and a transparent mode like the Beats Fit Pro.
To power the arrival of the heart-rate tracking sensor, we’d expect to see a jump in the silicon powering these earbuds as well. Currently, the Beats Fit Pro features the Apple-made H1 Chip, but the Powerbeats Fit would hopefully step things into more modern territory with the likes of the H2 chip, the same one that powers the Powerbeats Pro 2.
A step up in durability
(Image credit: Beats)
The Beats Fit Pro currently offers IPX4 sweat and water resistance, which means they can survive light splashes. That’s the same degree of durability the Powerbeats Pro 2 offer, but considering Beats is teasing these with professional athletes – and many Beats owners like to use their earbuds or headphones during workouts, runs, or general training – an upgrade to at least IP55 or IPX7 would be great to see.
Considering the rating on the Powerbeats Pro 2, however, this one might be less likely – especially as it seems Beats is keeping the existing design here.
A longer runtime
Beats Fit Pro currently offers six hours of playback with noise cancellation turned on and seven hours with that mode off. You get a few recharges in the case, which Beats says offers 24 hours of battery life.
I’d like to see a step up here, at least closer to the excellent runtime of the Powerbeats Pro 2 – those earbuds offer 10 hours of playback and 45 hours when you factor in recharges from the case. That’s a fantastic number, and while the Powerbeats Fit look to be a bit smaller than these, the newer chip and maybe some improvements in battery tech could help to make this a reality.
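A quick bit of arithmetic on those quoted figures (my own math, not from a Beats spec sheet) shows how much of each headline number comes from the case:

```python
# Playback figures quoted in the article; the recharge counts are derived.
def case_recharges(bud_hours, total_hours):
    """Full bud recharges the case must supply to hit the quoted total."""
    return (total_hours - bud_hours) / bud_hours

print(case_recharges(6, 24))    # Beats Fit Pro (ANC on)
print(case_recharges(10, 45))   # Powerbeats Pro 2
```

Both cases hold roughly three to three-and-a-half full recharges, so the Powerbeats Pro 2's bigger total comes mostly from longer-lasting buds rather than a bigger case.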
Similar to the transition from Powerbeats Pro to Powerbeats Pro 2, we’ll see if the design team at Beats was able to slim down the case size here. Fingers crossed that it sticks with a USB-C port.
The good news is that, considering Beats posted the teaser today, August 28, 2025, we likely only have a few weeks to go. Considering Beats rarely makes appearances during Apple events, it’s unlikely we’ll learn more at the September 9, 2025, event. However, Beats will likely share more in the weeks after that and officially introduce the Powerbeats Fit.
Let’s just hope the price stays competitive, as the Beats Fit Pro currently has an MSRP of $199 / £199 / AU$299.
A prisoner at New Jersey State Prison has publicly voiced frustration at being forced to rely on floppy disks for critical legal work.
The US state's prison system restricts inmates to floppy disks, each with a maximum capacity of 1.44MB. Each prisoner is allowed 20 of them, a limit that barely covers the needs of complex legal correspondence.
Writing for the Prison Journalism Project, Jorge Luis Alvarado said, “Inside New Jersey State Prison, it’s like 1985, where we rely on out-of-date word processors, electric typewriters, and floppy disks that are going extinct in the free world.”
Outdated tools in modern times
Alvarado explains that even a single legal brief can exceed this size, requiring multiple disks to store one document.
Such a process becomes cumbersome, and with the added risk of corruption, the format introduces real uncertainty into how files are preserved.
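The numbers make the squeeze concrete (my own arithmetic, using a hypothetical 5MB brief as the example):

```python
import math

DISK_MB = 1.44       # capacity of a 3.5" high-density floppy
ALLOWANCE = 20       # disks each prisoner is permitted

def disks_needed(file_mb):
    """Disks required to hold a file, assuming it can be split across disks."""
    return math.ceil(file_mb / DISK_MB)

print(disks_needed(5))                 # a hypothetical 5 MB brief spans 4 disks
print(round(ALLOWANCE * DISK_MB, 2))   # total permitted storage: 28.8 MB
```

Less than 30MB of total storage, split into 1.44MB chunks, for a caseload of legal documents – a single modern scanned PDF could blow through the entire allowance.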
In addition, since major companies like Sony stopped manufacturing floppies about 15 years ago, their scarcity only adds to the impracticality of the rule.
The reliance on floppy media seems especially arbitrary given that the remaining disks are nearing the end of their usable life, and that flash drives became widely adopted more than two decades ago.
In the early 2000s, USB drives quickly eclipsed floppies, offering both speed and durability.
Today, they are inexpensive, compact, and reliable, with capacity far surpassing anything the floppy era could provide.
Even consumer SSD options now span into the terabyte range, with the largest SSD models rivaling enterprise storage.
Devices once labeled the fastest SSD can manage transfers that dwarf anything possible with legacy media.
However, authorities argue that the ban on flash drives is a matter of security, suggesting they could be misused within prison environments.
While this position explains the reluctance to modernize, it leaves prisoners disadvantaged when dealing with legal matters where technology should serve as a bridge, not a barrier.
Alvarado describes a process where lawyers must copy digital files onto flash drives, only to have them transferred back to floppy disks through a single library computer.
Delays are inevitable, with access often taking days at a time.
Some researchers estimate that between four and six percent of those incarcerated in the United States may be innocent.
Therefore, even if a fraction of these individuals face barriers to appeals due to outdated technology, the issue extends far beyond mere inconvenience.
Via Tom's Hardware
A new joint cybersecurity advisory from the National Security Agency (NSA) and other agencies, including CISA, the UK’s NCSC, Canada’s CSIS, Japan’s NPA and many more, looks to expose advanced persistent threat (APT) actors believed to be sponsored by the Chinese government.
According to the advisory, Chinese firms have been providing products and services to China’s Ministry of State Security and the military - which in turn, it is claimed, props up hacking groups.
These threat actors target infrastructure like telecommunications, government, military, transport, and energy agencies - specifically in a global hacking campaign linked to the notorious Salt Typhoon group.
Supplying components
“The data stolen through this activity against foreign telecommunications and Internet service providers (ISPs), as well as intrusions in the lodging and transportation sectors, ultimately can provide Chinese intelligence services with the capability to identify and track their targets’ communications and movements around the world," the advisory warns.
Some of the firms named in the advisory, like Sichuan Juxinhe Network Technology Co. Ltd, have already been sanctioned for their ties to the group.
Other named companies include Beijing Huanyu Tianqiong Information Technology Co., Ltd., and Sichuan Zhixin Ruijie Network Technology Co., Ltd, all of which are thought to be linked.
The report also outlines specific threat hunting guidance and mitigations against these groups, particularly in quickly patching devices, monitoring for unauthorized activity, and tightening device configuration.
Earlier in 2025, Salt Typhoon was discovered carrying out a cyber espionage campaign that breached multiple communications firms, with hackers lingering inside US company networks for months.
The group was observed abusing vulnerabilities in Microsoft Exchange Servers, which allowed them to breach networks and exfiltrate data. A fix for this flaw has been available for years, but research suggests that nearly 91% of the 30,000 affected instances remain unpatched - highlighting the importance of deploying effective patch management software.
China has always strenuously denied any ties to this group, and to any other cyber-espionage campaigns.
At the recent Flash Memory Summit, a new name from New Zealand surfaced in a bid to make waves in the enterprise storage space.
Novodisq presented its Novoblade system, a platform built to combine dense storage, compute acceleration, and network capacity in a compact design.
The Novoblade modules are designed as blade servers, each offering 576TB of raw storage built on flash drives. The drives themselves are based on E2 form factor SSD units with capacities reaching 144TB per device.
How Novoblade is structured
The company says a 2U enclosure can hold up to 20 modules, which equates to 11.52PB of raw capacity in a single shelf.
Scaling this configuration across an entire 42U rack, Novodisq projects that storage can rise to 230PB.
Alongside the storage figures, Novodisq promotes Novoblade as a hyperconverged design that integrates compute resources directly into each blade.
These include ARM64 cores, FPGA resources, and optional AI or machine learning engines, with networking supported by 200Gbps or 400Gbps Ethernet.
The company positions this as a platform that can replace conventional NAS arrays, with up to 95% lower energy consumption. Such claims, however, are difficult to validate without detailed independent benchmarks.
While the theoretical capacity appears high, the price of such a system raises serious questions.
The company has not announced official figures, but estimates can be made from existing hardware, as a single 122.88TB SSD currently (August 2025) costs close to $14,000.
Using that as a reference, and accounting for Novoblade’s proprietary 144TB SSDs, a single blade with four drives could already exceed $60,000 before considering added compute and networking.
With 20 blades in a 2U enclosure, the total could approach $1.2 million. Extending this to a full 42U rack with 230PB of raw storage means costs would rise well beyond $2 million.
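That estimate can be reproduced with straightforward arithmetic (all figures are the article's assumptions extrapolated from current retail pricing, not vendor quotes):

```python
# Cost sketch from the article's reference point: ~$14,000 for a
# 122.88 TB SSD (August 2025). Everything below is an extrapolation.
ref_price, ref_tb = 14_000, 122.88
per_tb = ref_price / ref_tb        # ≈ $114 per TB of QLC flash

drive = per_tb * 144               # one proprietary 144 TB E2 drive
blade = drive * 4                  # 4 drives = 576 TB raw per blade
shelf = blade * 20                 # 20 blades per 2U enclosure

print(f"${drive:,.0f} per drive, ${blade:,.0f} per blade, ${shelf:,.0f} per 2U shelf")
```

A blade in the mid-$60,000s and a shelf north of a million dollars, before any compute or networking, is broadly consistent with the article's ballpark figures.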
This positions Novoblade as an extremely dense solution, but one that only highly specialized organizations could justify financially.
On paper, these numbers suggest one of the densest deployments yet described, but practical use and performance remain untested.
Novodisq describes the Novoblade as both a storage server and a converged compute platform.
It can expose block, file, and object interfaces, or integrate into distributed systems such as Ceph or Lustre.
At the moment, major players in the storage field continue to focus on balancing capacity with performance.
Therefore, it remains uncertain whether Novodisq can provide not only the largest or fastest SSD arrangements but also sustainable pricing and support.
Nvidia has released the Jetson AGX Thor developer kit, calling it the next step toward robotics systems that can function in real time.
The system, built on the Blackwell GPU line, is framed as a platform for “physical AI” and advanced robotic functions across manufacturing, logistics, healthcare, farming, retail, and transport.
Nvidia says it can deliver up to 7.5 times more AI compute and over three times the energy efficiency of its Jetson Orin line, which has been in wide use since 2022.
Offers supercomputer-level capacity
Nvidia went on to describe Jetson Thor as “the ultimate supercomputer to drive the age of physical AI and general robotics.”
“We’ve built Jetson Thor for the millions of developers working on robotic systems that interact with and increasingly shape the physical world,” said Jensen Huang, founder and CEO of Nvidia.
“With unmatched performance and energy efficiency, and the ability to run multiple generative AI models at the edge, Jetson Thor is the ultimate supercomputer to drive the age of physical AI and general robotics.”
With a quoted figure of 2,070 FP4 teraflops in a 130-watt envelope, it is positioned as powerful enough to run multiple generative models at once.
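For perspective (my own arithmetic, not an Nvidia figure), that quoted spec works out to roughly 16 FP4 teraflops per watt:

```python
# Efficiency implied by the quoted Jetson Thor spec (an extrapolation).
fp4_tflops = 2_070   # FP4 teraflops
power_w = 130        # power envelope in watts

print(round(fp4_tflops / power_w, 1))  # FP4 TFLOPS per watt
```

That per-watt density, rather than raw throughput alone, is what makes running several generative models concurrently on a battery-powered robot plausible.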
It supports vision-language-action models like Isaac GR00T N1.5, along with other LLM systems.
The device also integrates 128GB of memory, which is expected to make it capable of handling larger AI workflows at the edge.
Several robotics players are already listed as early adopters, including Agility Robotics, Amazon Robotics, Boston Dynamics, Caterpillar, Hexagon, and Medtronic.
Meta has also been named as an early partner, while companies such as John Deere, OpenAI, and Physical Intelligence are said to be testing the system.
“Nvidia Jetson Thor offers the computational horsepower and energy efficiency necessary to develop and scale the next generation of AI-powered robots that can operate safely and effectively in dynamic, real-world environments, transforming how we move and manage goods globally,” said Tye Brady, chief technologist at Amazon Robotics.
Nvidia notes more than two million developers already use its robotics stack, with over 7,000 customers having deployed Jetson Orin hardware in edge AI projects.
Jetson Thor runs on the Nvidia Jetson software platform, which is designed to support multiple AI tools at once.
The package integrates with Nvidia Isaac for simulation, Metropolis for vision AI, and Holoscan for real-time sensor processing.
This arrangement is intended to allow one system-on-module to support many AI models and workflows, rather than requiring several separate chips.
The developer kit is available now at $3,499 and the production systems, including carrier boards, will be distributed worldwide through its partners.
The bug that recently emerged in Windows 11, which is reportedly breaking some SSDs, is being investigated by Microsoft and its partners - and now we've heard back from one of the parties involved.
This is Phison, which manufactures SSD controllers used across a wide range of drives from various manufacturers, and is involved in this controversy because some reports suggest that SSDs using these controllers were more likely to be affected by the bug.
Phison has now shared the results from its extensive testing pertaining to this matter, as Neowin reports, issuing the following statement: "Phison dedicated over 4,500 cumulative testing hours to the drives reported as potentially impacted and conducted more than 2,200 test cycles. We were unable to reproduce the reported issue, and no partners or customers have reported that the issue affected their drives at this time."
So, Phison feels it's in the clear, what with a whole lot of testing having turned up nothing, and no reports coming to the company directly from its customers, either. Of course, reports from individual consumers are going to go directly to the SSD maker (not those responsible for the controller), but when Phison says "partners or customers," it is talking about those drive manufacturers (and others, too, no doubt Microsoft included).
What hasn't helped Phison's cause here is a fake document that did the rounds online just after the bug came to light in Windows 11's August update. This purported to contain a list of affected Phison controllers, but was completely fabricated as the company quickly made clear.
(Image credit: Shutterstock)
Analysis: Microsoft's findings are still to come
Although Phison has conducted extensive testing, this can't be regarded as a definitive conclusion. Microsoft's investigation into this SSD-breaking bug in Windows 11 is still being carried out, and until we see the result of that, there remains doubt as to exactly what's going on here.
Reports of SSD failures remain scattered, so it must be noted that this seems to be a rare issue. At any rate, I'm hoping Microsoft will make its findings known sooner rather than later, and clear this matter up - as it's only becoming more confusing with this latest instalment of the saga.
Phison also tacked some advice onto its statement about best practices to "support high-performance storage devices" undergoing extended workloads, such as shifting large files - the kind of prolonged write operations that apparently triggered the Windows 11 bug. Phison observes that a "proper heatsink or thermal pad" will help maintain optimal temperatures and ensure the drive doesn't get too hot (or throttle as a result).
Note that imparting this advice isn't directly related to the bug - meaning Phison isn't saying you should be using a heatsink to avoid coming off the rails with this Windows 11 glitch. This is just general advice aimed at all high-end SSD owners, letting them know that if they are running intense workloads over long durations, using extra cooling is advised.
Mind you, if your SSD doesn't have a heatsink already, adding one is a somewhat fiddly affair, especially for the less tech-savvy (although they are less likely to be running a high-performance solid-state drive, admittedly).
IBM and AMD have announced plans to “build the future of computing” by collaborating on new architecture to blend quantum systems with high-performance hardware in a bid to solve some of the world's most difficult problems.
The partnership will combine IBM’s expertise in building quantum computers and related software with AMD’s background in processors, graphics, and AI accelerators in a step toward quantum-centric supercomputing.
The companies are looking at ways in which to integrate AMD CPUs, GPUs, and FPGAs with IBM’s quantum computers, with the ultimate goal to accelerate emerging algorithms that neither quantum nor classical systems can handle on their own.
Pushing past the limits
"Quantum computing will simulate the natural world and represent information in an entirely new way," said Arvind Krishna, Chairman and CEO, IBM.
"By exploring how quantum computers from IBM and the advanced high-performance compute technologies of AMD can work together, we will build a powerful hybrid model that pushes past the limits of traditional computing."
The two tech giants will work together to build open-source platforms that can scale and support research in fields such as drug development, materials science, and supply chain optimization.
Lisa Su, Chair and CEO of AMD, also emphasized the importance of the partnership, saying, "High-performance computing is the foundation for solving the world's most important challenges. As we partner with IBM to explore the convergence of high-performance computing and quantum technologies, we see tremendous opportunities to accelerate discovery and innovation."
AMD has previously worked on some of the world’s fastest supercomputers, including Frontier and El Capitan.
This hybrid approach is also expected to support IBM’s roadmap toward fault-tolerant quantum computing, a milestone the company has said it hopes to reach before the end of the decade.
IBM has already begun similar work with other partners including Riken in Japan, as well as institutions like Cleveland Clinic and Lockheed Martin.
An initial demonstration is planned for later this year and will show how IBM quantum computers can work alongside AMD technology to deliver hybrid quantum-classical workflows.
The partnership will support open-source ecosystems, such as Qiskit, in a bid to encourage the development of algorithms for quantum-centric supercomputing.
Cybercriminals are trying to deliver backdoor malware to US-based organizations by tricking them into signing fake non-disclosure agreements (NDAs), experts have warned.
A new report from security researchers Check Point outlined how in the campaign, the miscreants pose as a US-based company, looking for partners, suppliers, and similar.
Often, they buy abandoned or dormant domains with legitimate business histories to appear authentic. After that, they reach out to potential victims, not via email (as is standard practice) but through their “Contact Us” forms or other communication channels provided on the website.
Dropping MixShell
When the victims respond to the inquiry, it's usually via email, which opens the door to delivering the malware.
However, the attackers don’t do it immediately. Instead, they build rapport with the victims, going back and forth for weeks until, at one point, they ask their victims to sign an NDA sent as an attached archive.
The archive contains a couple of documents, including clean PDF and DOCX files to throw the victims off, and a malicious .lnk file that triggers a PowerShell-based loader.
This loader ultimately deploys a backdoor called MixShell, a custom in-memory implant featuring DNS-based command and control (C2) and enhanced persistence mechanisms.
Check Point did not discuss the number of potential victims, but it did say that they are in the dozens, varying in size, geography, and industries.
The majority (around 80%) are located in the United States, with Singapore, Japan, and Switzerland also accounting for a notable number of victims. The companies are mostly in industrial manufacturing, hardware & semiconductors, consumer goods & services, and biotech & pharma.
“This distribution suggests that the attacker seeks entry points across wealthy operational and supply chain-critical industries instead of focusing on a specific vertical,” Check Point argues.
The researchers couldn’t confidently attribute the campaign to any known threat actor, but said that there is evidence pointing to the TransferLoader campaign, and a cybercriminal cluster tracked as UNK_GreenSec.
Via The Record
Luxury electronics brand Loewe has teamed up with luxury timepiece creator Jacob & Co to create two sets of headphones so expensive you'll need to give them a bodyguard.
The Loewe x Jacob & Co. over-ears are said to have "reimagined headphones as objets d’art." There are two versions: the Noir Rainbow, whose ear cups feature a 14K rose gold circle with 15.97 carats of multi-colored sapphires; and the Ice Diamond, which is "radiant" with a 14K white gold ring and 12.47 carats of white diamonds.
Whichever pair you choose you're making a statement, and that statement is "I clearly don't pay enough tax". Because the cheaper Rainbow pair is €99,000 (about $115,235 / £85,440 / AU$176,945) and the Ice Diamond pair is €119,000 (about $138,500 / £102,700 / AU$212,690).
The Ice Diamond model is "radiant with 12.47 carats of white diamonds" (Image credit: Loewe)
Loewe diamond headphones: features and availability
If you happen to have enough cash for a six-figure set of headphones you'd better move fast: there will only be five pairs of each model.
I suspect the would-be buyers couldn't care less about the specs, but whichever pair you go for you're getting hi-res audio "with expert tuning", adaptive ANC, integrated AI "for voice assistant and real-time translation" and up to 65 hours of battery life.
It's easy to go all Class War here and suggest that spotting such headphones in the wild is a great way to recognize the people who'll be first against the wall when the revolution comes (the launch is taking place on Loewe's luxury yacht, with the orcas).
But underneath all the gems there's what could be a very credible rival to the likes of the AirPods Max and other high-end headphone options, and I suspect that considerably more affordable versions of these headphones will arrive in due course.
After over a year of radio silence, BioShock creator Ken Levine has finally emerged to provide an update on his next game, Judas.
In Ghost Story Games' first developer log, Levine said that the studio is focusing all its efforts on finishing the game and has decided to communicate more directly with fans, with more frequent updates than before.
For this first update, Levine highlighted Villainy, a central feature of Judas: a choice-driven system that determines which of three characters becomes the game's villain: Tom, Nefertiti, or Hope.
"In Judas, your actions will attract members of the Big 3 to you as friends. But ignore one of them enough, and they become the VILLAIN," Levine explained. "From there, they will get access to a new suite of powers to subvert your actions and goals."
Villainy is just one example of how the Big 3 can retaliate, and the "more dangerous and character-specific stuff" will be revealed at a later date.
Levine also touched on the game's relationship system and once again compared it to Middle-earth: Shadow of Mordor's reactive Nemesis system, explaining that the Big 3 will observe the player and form opinions about how you approach everything from combat to hacking and crafting, as well as how you interact with the other two characters.
"In Judas, you're going to get to know these characters intimately. We want losing one of them to feel like losing a friend," he said. "We want to play with that dynamic, and we want that choice to be super hard. The Big 3 are all going to be competing for your favor and attention.
"They can bribe you, save you in battle, talk s**t about the other characters, and share with you their darkest secrets. But eventually, you've got to decide who you trust and who you don't."
Judas still doesn't have a release date, because Ghost Story Games is "not quite ready to finalize that," but the game is expected to launch on PC, PS5, Xbox Series X, and Xbox Series S.
In a first for Apple Music, the music streaming service is offering free access to its six live-streamed radio stations in a new partnership with TuneIn – a free online audio streaming platform that gives listeners access to radio, podcasts, sports, and more.
Apple Music has partnered with TuneIn to extend the reach of its live radio shows to the free audio streamer’s 75 million monthly listeners, The Wall Street Journal reports, and you can access six Apple Music Radio stations outside of the Apple Music app for free right now. They are:
Apple appears to be making another strategic move to entice new subscribers, or win back those who may have switched to competing music streaming services – most notably Spotify. Just last week Spotify unveiled its rival to Apple Music’s AutoMix, and it announced its new Messages feature just a few days ago.
While Spotify offers an ad-supported tier, Apple Music doesn’t, and therefore lacks other means of attracting new subscribers beyond free trials. The decision to expand its radio station access enables it to reach millions of potential new listeners, and if you’re tempted to make the switch, these are the Apple Music Radio stations I’d try out first.
1. Apple Music 1
(Image credit: Future)
This is arguably Apple Music’s main radio station, which airs daily music shows from hosts such as Rebecca Judd, Matt Wilkinson, and of course, Zane Lowe. It’s a hot spot for both the latest music releases and for pop culture conversation, and often features guest hosting sessions from some of the biggest artists in the world.
2. Apple Music Hits
(Image credit: Future)
Similar to Apple Music 1, Apple Music Hits also has dedicated slots hosted by both broadcasters and artists, but its main aim is to bring you the best hits from the last 20 years through radio segments highlighting specific genres and music of the ‘80s, ‘90s, and ‘00s. It also has curated shows featuring today’s hits, but not to the extent of Apple Music 1.
3. Apple Music Chill
(Image credit: Future)
Apple Music Chill is exactly what the name suggests, serving up low-tempo, relaxing tracks which Apple describes as “an escape, a refuge, a sanctuary in sound”. It features laid-back artists and producers, and a variety of instrumental music including dinner party and coffee shop mixes, piano chill-outs and spa music.
Someone is selling almost two billion Discord messages and other data, allegedly scraped from the platform, experts have warned.
Security researchers at Cybernews spotted an ad for the archive on an underground hacking forum.
The data, most likely scraped from the platform, includes 1.8 billion Discord messages, 35 million users, 207 million voice sessions, and 6,000 Discord servers, and can be obtained for a fee.
A Spy.Pet copycat?
Discord is a communication platform that lets people chat via text, voice, or video, often in servers organized around communities, games, or interests. It’s popular for gaming, social groups, and professional communities alike, and many servers on the platform are public, meaning anyone can join and read the contents, including chat messages, member names, and more.
This also means that much of the data being sold by the miscreants could be public. Still, while the content is technically visible, harvesting it en masse still violates the platform’s Terms of Service, and using it for commercial purposes, or personal data collection, could run afoul of privacy laws like the GDPR or CCPA.
Whether the data is indeed public can only be determined through a detailed analysis, which no one had done at press time. In any case, Discord will likely shut the operation down, as it did with a previous service that tried the same thing, called Spy.Pet.
In late April 2024, a website that offered billions of Discord chat logs for sale was taken offline by the chat app provider. Discord accounts associated with the service were banned, and the company confirmed the service breached its ToS:
"Scraping our services and self-botting are violations of our Terms of Service and Community Guidelines,” the company spokesperson said in a statement at the time. “In addition to banning the affiliated accounts, we are considering appropriate legal action."