TechRadar News

All the latest content from the TechRadar team

Microsoft's new Windows tool will let you access your work PC, even if it's been hit in a cyberattack

Tue, 08/12/2025 - 05:06
  • Windows 365 Reserve includes 10 days of Cloud PC access
  • It's designed to plug the gaps during hardware, software or cybersecurity-related outages
  • Affected users can access their work by logging in from a browser or the Windows app

Microsoft has revealed an initial launch of Windows 365 Reserve – a new service which gives users temporary, dedicated Cloud PC access when their primary device is unavailable.

The company says Windows 365 Reserve is designed to maintain business continuity during any type of outage, be it from a cyberattack such as ransomware, a hardware failure, software issues or loss or theft.

In a blog post, Microsoft Senior Product Manager Logan Silliman explained that companies already have to deal with "halt[ed] productivity, delay[ed] deliverables, and strain[ed] IT teams," and said the new offering could lift a huge weight off them during these times of stress.

Windows 365 Reserve is available for some users to try

Microsoft will give users up to 10 days of Cloud PC access per year, which can be split across incidents or used up in one go.

"With this solution, organizations can proactively establish protections that reduce both financial and operational impacts when disruptions arise," Silliman added.

Promising the usual suite of Microsoft 365 apps, existing Microsoft Intune policies and secure access from any device, Windows 365 Reserve could vastly improve a worker's return to productivity.

It also buys the IT team undisturbed time to remediate whatever the issue is without the worker effectively being offline.

Microsoft noted the feature was developed after customers expressed concerns about challenges preparing for disruptions.

End users affected by any type of outage can regain access to their work by logging into their Windows 365 Reserve via a web browser or the Windows app.

The limited public preview launch comes less than two months after Microsoft first lifted the wraps off the concept.

It remains unclear whether the service will come at an additional cost to businesses, and how any pricing model would work. TechRadar Pro did ask Microsoft to confirm this, but we did not receive an immediate response.

Categories: Technology

'Don't give me hope': Marvel star's cryptic Instagram post sparks frenzied reaction from MCU fans over possible appearance in Avengers: Doomsday

Tue, 08/12/2025 - 04:38
  • Ryan Reynolds has hinted that he'll appear in one or both of the next two Avengers movies
  • The Deadpool star sent fans into a frenzy with a cryptic Instagram post
  • The image and caption feature call-backs to two of the most profitable Marvel films

Another day, another Avengers: Doomsday cast rumor – but, this time, it's a Marvel actor who's whipped fans into a frenzy over a possible appearance in the film.

Taking to Instagram overnight, Deadpool star Ryan Reynolds sparked a frenzied reaction from fans about his potential inclusion in the superhero flick.

A post shared by Ryan Reynolds (@vancityreynolds)


Ordinarily, a post like this wouldn't be much to write home about. The image, which features a red, rebel-style version of the Avengers logo on top of the superteam's official emblem, might be viewed as nothing more than a call-back to Deadpool and Wolverine. That's the only Marvel Cinematic Universe (MCU) movie that launched in theaters last year, and it ended up making over $1 billion globally.

It's the caption accompanying said image that's excited MCU fans, though. Referencing a key line of dialog uttered by Clint Barton/Hawkeye in 2019's Avengers: Endgame, Reynolds wrote: "Don't do that. Don't give me hope".

The implication here, of course, is that Reynolds is suggesting The Merc With a Mouth could finally get his wish to join Earth's Mightiest Heroes to combat an otherworldly threat. Deadpool says as much in Deadpool and Wolverine's first act when he interviews for a place on the super-group's roster.

Reynolds wasn't part of Doomsday's initial 27-strong cast that was announced via a four-hour livestream in March. Less than 24 hours later, though, Marvel insisted "there's always room for more", thereby indicating that more cast additions might be made in the months ahead. If you're interested, here's a list of 17 other Marvel heroes I'd like to see in Avengers: Doomsday.

But I digress. Reynolds' latest social media post insinuates Deadpool will appear in a future Avengers movie, but I'm not sure he'll show up in Avengers 5. I think he'd serve a better purpose by being a big part of Avengers: Secret Wars instead.

Deadpool has already traversed the Marvel multiverse multiple times (Image credit: Marvel Studios)

Hear me out first. As I outlined in my Deadpool and Wolverine ending explained piece, the movie ends with the titular pair residing on Earth-10005. That's the parallel universe – one of many that exists alongside Earth-616, aka the MCU – that the bulk of the MCU Phase 5 film is set in.

Furthermore, Joe and Anthony Russo, who returned to the MCU to helm Doomsday and its sequel, told me that they're "drawing inspiration" from both of Marvel's 'Secret Wars' comic book series. I won't spoil the events of either literary work here – you can learn more about how they may influence Doomsday and its follow-up in the aforementioned linked-to article and my Fantastic Four: First Steps ending explained piece. The latter article reveals how the first Marvel Phase 6 movie might set up Doomsday's plot, so it's also worth reading.

Anyway, considering what's likely to happen in Doomsday (seriously, read the two articles linked above), it makes more sense for Deadpool to meet The Avengers in Secret Wars rather than shoehorn him into its predecessor. Doomsday is already going to be a busy film with so many characters in it. As I pointed out above, more heroes could be part of proceedings, so Marvel might be best served delaying Deadpool's team-up with The Avengers until Secret Wars to ensure the emotional pay-off – for the character and fans alike – is worth the wait.

But what do you think? Has Reynolds all but confirmed Deadpool will appear in either or both Avengers movies? Which one would you prefer to see him in? Let me know in the comments. Then, check out my dedicated guide on Avengers: Doomsday for the latest news and rumors on the highly anticipated flick.

Categories: Technology

GitHub CEO resigns - is this the latest sign of its Microsoft absorption?

Tue, 08/12/2025 - 04:30
  • Thomas Dohmke resigns as GitHub CEO, effective by the end of 2025
  • GitHub is getting closer to Microsoft as it aligns with CoreAI business
  • Microsoft CEO says "internal organizational boundaries are meaningless" anyway

GitHub CEO Thomas Dohmke has announced he is resigning as CEO of the company as Microsoft begins to bring GitHub closer to its CoreAI team.

Following the move, Microsoft will not appoint a new GitHub CEO, and the company will no longer have a single leader; GitHub will instead report more directly into Microsoft's CoreAI division.

After a four-year stint, Dohmke will continue to serve as CEO until the end of 2025; he has alluded to plans to found a new startup.

GitHub CEO resigns, no new CEO in sight

CoreAI, led by former Meta exec Jay Parikh, is Microsoft's new division for building AI platforms and tools.

"GitHub and its leadership team will continue its mission as part of Microsoft’s CoreAI organization," Dohmke wrote.

The departing CEO also noted "pride in everything we’ve built as a remote-first organization" – it was recently revealed that Microsoft could be looking to increase its in-office working days, and it's unclear whether Dohmke's comment is a veiled dig at this.

With GitHub set to become more closely aligned with Microsoft's CoreAI, it's reasonable to speculate that the developer platform's workers could be affected by any upcoming changes.

Speaking about the scale of GitHub, Dohmke mentioned that the platform now houses over one billion repos and forks, more than 150 million developers, and more recently, over 20 million Copilot users.

"By launching this new age of developer AI, we’ve made it possible for anyone – no matter what language they speak at home or how fluent they are in programming – to take their spark of creativity and transform it into something real," he added.

When Satya Nadella launched CoreAI, he explained that besides bringing together "Dev Div, AI Platform and some key teams from the Office of the CTO (AI Supercomputer, AI Agentic Runtimes, and Engineering Thrive)," it would also "build out GitHub Copilot" – an early clue that the popular developer platform would be losing some of its independence.

Nadella also noted: "We must remember that our internal organizational boundaries are meaningless to both our customers and to our competitors."

Categories: Technology

iOS 26 beta 6 brings more surprise upgrades to your iPhone – and I'm already using my favorite one

Tue, 08/12/2025 - 04:21
  • iOS 26 beta 6 is out for developers
  • There are changes to Liquid Glass and the Camera app
  • There are also 7 new ringtones

Apple continues to push out beta updates for its iOS 26 software ahead of a full launch later this year, and the latest beta 6 version for developers brings with it a number of interesting tweaks – including one I'm particularly keen to try out.

As reported by TechCrunch and others, iOS 26 is snappier than ever, with app launching and switching noticeably faster – a sure sign that Apple is continuing to optimize the software before it gets rolled out to millions of iPhones.

There are also more tweaks to Liquid Glass here (via MacRumors), with more transparency on the lock screen, and a more 3D look to the lock-screen clock. Apple also seems to have done more work to improve text legibility with the Liquid Glass effect.

Apple has also brought back the previous swipe direction for the modes on the Camera screen, so it's very much as you were with that – a couple of betas ago it reversed the swipe direction for some unknown reason, which messed up the muscle memory of the majority of users.

Added ringtones

"iOS 26 beta 6 adds 6 new ringtones! All 6 are variants of “Reflection”" pic.twitter.com/BN3mWXm2t5 – August 11, 2025

There's also a new and improved onboarding process for users here, which will help explain all the changes when iOS 26 rolls out to the masses (most likely in September, with the iPhone 17 series). Do note though that this is the developer beta, and you won't see these changes yet if you're in the public beta program.

What I'm most excited about, however, are the seven new ringtones Apple has added, giving you even more choice for incoming calls. As well as some neat variations on the default Reflection ringtone, there's also a brand-new one called Little Bird.

I've already given it a listen, and it's a jaunty number that mixes synth and whistling sounds to interesting effect. I also like the new Reflection takes, which sound familiar, but which each have a fresh new sound layered on top.

I may be opening myself up to ridicule by getting excited about new ringtones, but these are sounds I hear every day, and new ones are always welcome – it's actually been a couple of years now since Apple treated us to any new variations.

Categories: Technology

This fake VPN could have been spying on you all this time

Tue, 08/12/2025 - 04:02
  • The malicious group VexTrio Viper developed and shared a host of fake apps via legit app stores, new research reveals
  • Malicious applications include VPNs, ad-blockers, RAM cleaners, and even online dating services
  • VexTrio Viper has employed traffic distribution systems (TDSs) to spread malware and other online scams since at least 2015

Whether you download your VPN app through Google Play or Apple's App Store, there's still a chance it could be a malicious app developed by VexTrio Viper.

In an extensive report, researchers at Infoblox Threat Intel revealed how the fraudulent adtech group published a range of applications on official app stores – from virtual private networks (VPNs) and ad-blockers to RAM cleaners and even online dating services.

Thought to be active since 2015, VexTrio is a complex criminal enterprise that involves several companies and employs traffic distribution systems (TDSs) to spread malware and other online scams.

At least seven security apps impacted

"They released apps under several developer names, including HolaCode, LocoMind, Hugmi, Klover Group, and AlphaScale Media. [...] Available in the Google Play and Apple stores, these have been downloaded millions of times in aggregate," Infoblox explained to The Hacker News.

Specifically, at least seven applications purporting to offer security tools were developed by LocoMind, which in 2024 claimed over 500,000 downloads and 50,000 active users for its apps.

These include various VPN services, such as Fast VPN - Super Proxy, and other utility applications, like RAM cleaners.

Once users have installed these applications on their devices, they are bombarded with intrusive ads and prompted to sign up for deceptive subscriptions.

(Image credit: APKPure)

The team at Infoblox Threat Intel has tracked VexTrio's malicious activities since 2022, publishing various reports throughout the years.

Among these, in June 2025, researchers disclosed a criminal web between WordPress hackers and a traffic distribution system (TDS) operated by the VexTrio group.

In 2024, they also unveiled VexTrio's massive malicious affiliate program that worked like a food delivery service for criminals.

"In total, the VexTrio enterprise includes nearly a hundred companies and brands. The scope of their activities includes malicious apps and large-scale spamming operations, and as we published a few months ago, they have a special relationship with numerous website hackers," the researchers note.

How to stay safe

This story is a stark reminder that it isn't enough for an application to be on an official app store to be safe. You should be even more careful when it comes to a security tool, as cybercriminals are notorious for taking advantage of unprotected devices.

For instance, in April, an investigation found at least 20 free VPN apps with undisclosed Chinese ownership lurking in Apple's official app store in the US. At least five of these were linked with a Shanghai-based firm believed to have ties with the Chinese military.

While the best VPN services boost your online anonymity and security by encrypting your internet traffic and spoofing your IP address, malicious apps pose risks to your privacy.

As a rule of thumb, you should only download a reliable service with a strong no-log VPN policy and a history of independent third-party audits.

If you aren't willing to pay for a premium service just yet, I recommend checking out Proton VPN and Privado VPN, as they are currently the best free VPNs on the market, according to TechRadar's reviewers.

That said, our testing confirmed NordVPN as the best all-arounder right now, thanks to great security/privacy features and impeccable performance. Even better, perhaps, you may still be in time to grab TechRadar's exclusive deal, which expires on August 12, 2025.

NordVPN: The best VPN for most people
Sign up to one of NordVPN's 2-year plans today to claim TechRadar's exclusive deal and get:
✅ Up to 76% OFF
✅ Up to $50 Amazon Gift card
✅ 4 months free protection (TechRadar exclusive)

There's a 30-day money-back guarantee, so if it isn't right, you can cancel and get a refund. View Deal

Categories: Technology

Welcome to the era of empathic Artificial Intelligence

Tue, 08/12/2025 - 03:47

Imagine a health plan member interacting with their insurer’s virtual assistant, typing, “I just lost my mom and feel overwhelmed.” A conventional chatbot might respond with a perfunctory “I’m sorry to hear that” and send a list of FAQs. This might be why 59% of chatbot users before 2020 felt that “the technologies have misunderstood the nuances of human dialogue.”

In contrast, an AI agent can pause, offer empathetic condolences, gently guide the member to relevant resources, and even help schedule an appointment with their doctor. This empathy, paired with personalization, drives better outcomes.

When people feel understood, they’re more likely to engage, follow through, and trust the system guiding them. In regulated industries that handle sensitive topics, simple task automation often fails because users abandon engagements that feel rigid, incompetent, or lacking in understanding of the individual’s circumstances.

AI agents can listen, understand, and respond with compassion. This combination of contextual awareness and sentiment‑driven response is more than just a nice‑to‑have add-on—it’s foundational for building trust, maintaining engagement, and ensuring members navigating difficult moments get the personalized support they need.

Beyond Automation: Why Empathy Matters in Complex Conversations

Traditional automation excels at straightforward, rule‑based tasks but struggles when conversations turn sensitive. AI agents, by contrast, can detect emotional cues—analyzing tone, punctuation, word choice, conversation history, and more—and deliver supportive, context‑appropriate guidance.

This shift from transactional to relational interactions matters in regulated industries, where people may need help navigating housing assistance, substance-use treatment, or reproductive health concerns.

AI agents that are context-aware and emotionally intelligent can support these conversations by remaining neutral, non‑judgmental, and attuned to the user’s needs.

They also offer a level of accuracy and consistency that’s hard to match—helping ensure members receive timely, personalized guidance and reliable access to resources, which could lead to better, more trusted outcomes.

The Technology Under the Hood

Recent advances in large language models (LLMs) and transformer architectures (GPT‑style models) have been pivotal to enabling more natural, emotionally aware conversations between AI agents and users. Unlike early sentiment analysis tools that only classified text as positive or negative, modern LLMs predict word sequences across entire dialogues, effectively learning the subtleties of human expression.

Consider a scenario where a user types, “I just got laid off and need to talk to someone about my coverage.” An early-generation chatbot might respond with “I can help you with your benefits,” ignoring the user’s distress.

Today’s emotionally intelligent AI agent first acknowledges the emotional weight: “I’m sorry to hear that—losing a job can be really tough.” It then transitions into assistance: “Let’s review your coverage options together, and I can help you schedule a call if you'd like to speak with someone directly."
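
The acknowledge-then-assist pattern described above can be sketched in a few lines. This is a purely illustrative toy, not any vendor's implementation: the keyword list, the 0.3 threshold, and the canned replies are all invented for the example, and a production system would use a trained sentiment model rather than keyword matching.

```python
# Toy sketch of "acknowledge emotion first, then assist". All word lists,
# weights, and thresholds are invented for illustration only.
DISTRESS_WORDS = {"lost", "overwhelmed", "scared", "grief", "alone", "laid"}

def distress_score(message: str) -> float:
    """Crude 0..1 estimate of emotional distress from word choice and punctuation."""
    words = {w.strip(".,!?").lower() for w in message.split()}
    hits = len(words & DISTRESS_WORDS)
    # Repeated punctuation ("!!", "??") often signals heightened emotion.
    punct_boost = 0.2 if ("!!" in message or "??" in message) else 0.0
    return min(1.0, hits * 0.3 + punct_boost)

def respond(message: str) -> str:
    """Acknowledge the emotional weight first, then transition into assistance."""
    if distress_score(message) >= 0.3:
        return ("I'm so sorry you're going through this. "
                "Let's review your options together, and I can help you "
                "schedule a call if you'd like to speak with someone directly.")
    return "Sure - here is the information you asked for."
```

A real deployment would replace the keyword heuristic with an LLM-based classifier, but the control flow – score the sentiment, acknowledge before assisting – is the point.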

These advances bring two key strengths. First, contextual awareness means AI agents can track conversation history—remembering what a user mentioned in an earlier exchange and following up appropriately.

Second, built‑in sentiment sensitivity allows these models to move beyond simple positive versus negative tagging. By learning emotional patterns from real‑world conversations, these AI agents can recognize shifts in tone and tailor responses to match the user’s emotional state.

Ethically responsible online platforms embed a robust framework of guardrails to ensure safe, compliant, and trustworthy AI interactions. In regulated environments, this includes proactive content filtering, privacy protections, and strict boundaries that prevent AI from offering unauthorized advice.

Sensitive topics are handled with predefined responses and escalated to human professionals when needed. These safeguards mitigate risk, reinforce user trust, and ensure automation remains accountable, ethical, and aligned with regulatory standards.

Navigating Challenges in Regulated Environments

For people to trust AI in regulated sectors, AI must do more than sound empathetic. It must be transparent, respect user boundaries, and know when to escalate to live experts. Robust safety layers mitigate risk and reinforce trust.

Empathy Subjectivity

Tone, cultural norms, and even punctuation can shift perception. Robust testing across demographics, languages, and use cases is critical. When agents detect confusion or frustration, escalation paths to live agents must be seamless, ensuring swift resolution and access to the appropriate level of human support when automated responses may fall short.

Regulatory Compliance and Transparency

Industries under strict oversight cannot allow hallucinations or unauthorized advice. Platforms must enforce transparent disclosures—ensuring virtual agents identify themselves as non-human—and embed compliance‑driven guardrails that block unapproved recommendations. Redirects to human experts should be fully logged, auditable, and aligned with applicable frameworks.

Guardrail Management

Guardrails must filter hate speech or explicit content while distinguishing between abusive language and expressions of frustration. When users use mild profanity to convey emotional distress, AI agents should recognize the intent without mirroring the language—responding appropriately and remaining within company guidelines and industry regulations.
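
The frustration-versus-abuse distinction above can be illustrated with a toy filter. The word lists and the adjacency rule are hypothetical placeholders, not a real moderation API; an actual guardrail layer would combine trained toxicity models with policy review.

```python
# Toy guardrail sketch: distinguish profanity used to vent frustration from
# profanity directed at a person. Word lists and rules are invented placeholders.
MILD_PROFANITY = {"damn", "hell", "crap"}
ABUSE_TARGETS = {"you", "your"}  # second-person pronouns suggest directed abuse

def classify_message(message: str) -> str:
    """Return 'clean', 'frustrated', or 'abusive'."""
    words = [w.strip(".,!?").lower() for w in message.split()]
    if not any(w in MILD_PROFANITY for w in words):
        return "clean"
    # Profanity adjacent to a second-person pronoun is treated as directed
    # abuse; otherwise it is read as venting, and the agent can respond to
    # the underlying frustration without mirroring the language.
    for i, w in enumerate(words):
        if w in MILD_PROFANITY:
            if any(n in ABUSE_TARGETS for n in words[max(0, i - 1):i + 2]):
                return "abusive"
    return "frustrated"
```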

Also, crisis‑intervention messaging—responding to instances of self‑harm, domestic violence, or substance abuse—must be flexible enough for organizations to tailor responses to their communities, connect people with local resources, and deliver support that is both empathetic and compliant with regulatory standards.

Empathy as a Competitive Advantage

As regulated industries embrace AI agents, the conversation is shifting from evaluating their potential to implementing them at scale. Tomorrow’s leaders won’t just pilot emotion‑aware agents but embed empathy into every customer journey, from onboarding to crisis support.

By committing to this ongoing evolution, businesses can turn compliance requirements into opportunities for deeper connection and redefine what it means to serve customers in complex, regulated environments.

Regulated AI must engineer empathy in every interaction. When systems understand the emotional context (not just data points), they become partners rather than tools. But without vertical specialization and real-time guardrails, even the most well-intentioned AI agents can misstep.

The future belongs to agentic, emotionally intelligent platforms that can adapt on the fly, safeguard compliance, and lead with compassion when it matters most. Empathy, when operationalized safely, becomes more than a UX goal—it becomes a business advantage.

We list the best enterprise messaging platform.

This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro

Categories: Technology

People are fighting about whether Dyson vacuums are trash, but the arguments don't stack up – here's my take as a vacuum expert

Tue, 08/12/2025 - 03:44

Vacuum cleaners divide opinion more than you might expect, and the brand that people seem to feel most strongly about is Dyson. Behind every diehard Dyson fan there are 10 more people ready to eagerly proclaim that they're the worst vacuums in the world.

At the weekend, designer Mike Smith proclaimed on X that Dyson vacuums were "not for serious vacuumers" and the ensuing thread went viral, with over 1,000 people piling in to air their vacuum views.

My hot take is that Dyson vacuums are not for serious vacuumers. Battery is garbage, filter is garbage. Canister too small. Absolute joke of a cleaning tool. – August 10, 2025

I manage the vacuum cleaner content for TechRadar, which includes reviewing vacs from many different brands and putting together our official best vacuum cleaner ranking. All of that means I spend far more time than the average person thinking about vacuum cleaners.

I'm neither wildly pro- nor anti-Dyson, and this discussion didn't sway me any further in either direction. What it did do is make me even more confident in my long-held belief that what most people actually have a problem with is not Dyson vacuums, but cordless stick vacuums in general.

Cordless stick vacuums are not the same as traditional upright vacuums or canister vacs. In some ways, they're worse. Providing strong suction requires a lot of power, and the bigger the battery the heavier the vacuum – so brands are constantly trying to balance whether to provide customers with longer runtimes or a lighter build.

A bigger dust cup means a vacuum that's bulkier and heavier, so there's another trade-off there in terms of how often you have to empty it. They also seem to be an inherently less robust type of cleaner – cordless stick vacs are expected to have a far shorter overall lifespan than other styles of vacuum.

(Image credit: Future)

In short, if you choose a cordless stick vacuum, you should expect limited runtimes on higher suction modes, canisters that need emptying regularly, and for it not to last forever. For those compromises, you get something you don't need to plug into the wall, and which you can easily use to vacuum up the stairs – or even on the ceiling – if you want to.

Of course, some cordless vacs perform much better than others, but broadly speaking you should expect those pros and cons to be true whatever model or brand you go for. Dyson stick vacs might not be for "serious" vacuuming, but boy are they good for convenient, comfortable vacuuming.

(Of course, the other element when it comes to Dyson is the price. I get into this more in my article exploring if Dyson vacuums are worth it, and I've also written about my experience of Shark vs Dyson vacuums, if you're interested in that comparison specifically.)

In the thread, the name that crops up again and again from the opposing chorus is Miele. This brand is synonymous with canister vacuums, so it's not a direct comparison. One of the very best vacuums I've used in terms of outright suction power remains the 25-plus-year-old upright that used to belong to my Nana and now lives in my parents' house. But it weighs a ton and takes up a load of space, so when it comes to cleaning my own flat, I'd reach for a Dyson (or similar) every time.

Categories: Technology

I am an AI expert and here's why synthetic threats demand synthetic resilience

Tue, 08/12/2025 - 02:51

Artificial Intelligence (AI) is rapidly reshaping the landscape of fraud prevention, creating new opportunities for defense as well as new avenues for deception.

Across industries, AI has become a double-edged sword. On one hand, it enables more sophisticated fraud detection, but on the other, it is being weaponized by threat actors to exploit controls, create synthetic identities and launch hyper-realistic attacks.

Fraud prevention is vital in sectors handling high volumes of sensitive transactions and digital identities. In financial services, for example, it's not just about protecting capital - regulatory compliance and customer trust are at stake.

Similar cybersecurity pressures are growing in telecoms and tech industries like SaaS, ecommerce and cloud infrastructure, where threats like SIM swapping, API abuse and synthetic users can cause serious disruption.

Fraud has already shifted from a risk to a core business challenge - with 58 per cent of key decision-makers in large UK businesses now viewing it as a ‘serious threat’, according to a survey conducted in 2024.

The rise of synthetic threats

Synthetic fraud refers to attacks that leverage fabricated data, AI-generated content or manipulated digital identities. These aren’t new concepts, but the capability and accessibility of generative AI tools have dramatically lowered the barrier to entry.

A major threat is the creation of synthetic identities which are combinations of real and fictitious information used to open accounts, bypass Know-Your-Customer (KYC) checks or access services.

Deepfakes are also being used to impersonate executives during video calls or in phishing attempts. One recent example involved attackers using AI to mimic a CEO’s voice and authorize a fraudulent transfer. These tactics are difficult to detect in fast-moving digital environments without advanced, real-time verification methods.

Data silos only exacerbate the problem. In many tech organizations, different departments rely on disconnected tools or platforms. One team may use AI for authentication while another still relies on legacy systems, and it is these blind spots which are easily exploited by AI-driven fraud.

AI as a defense

While AI enables fraud, it also offers powerful tools for defense if implemented strategically. At its best, AI can process vast volumes of data in real time, detect suspicious patterns and adapt as threats evolve. But this depends on effective integration, governance and oversight.

One common weakness lies in fragmented systems. Fraud prevention efforts often operate in silos across compliance, cybersecurity and customer teams. To build true resilience, organizations must align AI strategies across departments. Shared data lakes, or secure APIs, can enable integrated models with a holistic view of user behavior.

Synthetic data, often associated with fraud, can also play a role in defense. Organizations can use anonymized, realistic data to simulate rare fraud scenarios and train models without compromising customer privacy. This approach helps test defenses against edge cases not found in historical data.
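
As a concrete (and entirely hypothetical) illustration of that idea, a team might generate labeled synthetic transactions that include rare fraud patterns – such as bursts of tiny card-testing charges – absent from historical data. The field names, amounts, and timing distributions below are invented for the example.

```python
import random

# Hypothetical synthetic-data generator for fraud-model training. Fields,
# amounts, and timing distributions are invented for illustration only.
def synthetic_transaction(fraudulent: bool, rng: random.Random) -> dict:
    if fraudulent:
        # Rare edge case: small "card-testing" charges in quick succession.
        amount = round(rng.uniform(0.5, 3.0), 2)
        gap_seconds = rng.uniform(1, 10)
    else:
        amount = round(rng.uniform(5, 500), 2)
        gap_seconds = rng.uniform(3600, 86400)
    return {"amount": amount, "gap_seconds": gap_seconds, "label": fraudulent}

def synthetic_dataset(n: int, fraud_rate: float = 0.05, seed: int = 0) -> list[dict]:
    """Build a reproducible labeled dataset with a controllable fraud rate."""
    rng = random.Random(seed)
    return [synthetic_transaction(rng.random() < fraud_rate, rng) for _ in range(n)]
```

Because the generator is seeded, the same edge cases can be replayed against every model revision – with no real customer data involved.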

Fraud systems must also be adaptive. Static rules and rarely updated models can’t keep pace with AI-powered fraud - real-time, continuously learning systems are now essential. Many companies are adopting behavioral biometrics, where AI monitors how users interact with devices, such as typing rhythm or mouse movement, to detect anomalies, even when credentials appear valid.
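
A minimal sketch of the behavioral-biometrics idea, assuming keystroke timings are already being collected: compare a session's average inter-key interval against the user's historical baseline with a z-score. The 3-sigma threshold and the single statistic are illustrative assumptions; real products model far richer signals (mouse movement, touch pressure, navigation patterns).

```python
from statistics import mean, stdev

# Illustrative anomaly check on keystroke rhythm (seconds between keypresses).
# The 3-sigma threshold is an assumption for this sketch, not a product setting.
def is_anomalous(baseline_intervals: list[float],
                 session_intervals: list[float],
                 z_threshold: float = 3.0) -> bool:
    mu, sigma = mean(baseline_intervals), stdev(baseline_intervals)
    if sigma == 0:
        return False  # no recorded variation; cannot judge
    z = abs(mean(session_intervals) - mu) / sigma
    return z > z_threshold
```

Even when credentials appear valid, a session typed at a radically different rhythm would be flagged for step-up verification.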

Explainability is another cornerstone of responsible AI use: it is essential to understand why a system has flagged or blocked activity. Explainable AI (XAI) frameworks help make decisions transparent, supporting trust and regulatory compliance and ensuring AI is not just effective, but also accountable.

Industry collaboration

AI-enhanced fraud doesn’t respect organizational boundaries, and as a result, cross-industry collaboration is becoming increasingly important. While sectors like financial services have long benefited from information-sharing frameworks like ISACs, similar initiatives are emerging in the broader tech ecosystem.

Cloud providers are beginning to share indicators of compromised credentials or coordinated malicious activity with clients. SaaS and cybersecurity vendors are also forming consortiums and joint research initiatives to accelerate detection and improve response times across the board.

Despite its power, AI is not a silver bullet and organizations which rely solely on automation risk missing subtle or novel fraud techniques. Effective fraud strategies should include regular model audits, scenario testing and red-teaming exercises (where ethical hackers conduct simulated cyberattacks on an organization to test cybersecurity effectiveness).

Human analysts bring domain knowledge and judgement that can refine model performance. Training teams to work alongside AI is key to building synthetic resilience, combining human insight with machine speed and scale.

Resilience is a system, not a feature

As AI transforms both the tools of fraud and the methods of prevention, organizations must redefine resilience. It’s no longer about isolated tools, but about creating a connected, adaptive, and explainable defense ecosystem.

For many organizations, that means integrating AI across business units, embracing synthetic data, prioritizing explainability, and embedding continuous improvement into fraud models. While financial services may have pioneered many of these practices, the broader tech industry now faces the same level of sophistication in fraud, and must respond accordingly.

In this new era, synthetic resilience is not a static end goal but a capability to be constantly cultivated. Those who succeed will not only defend their businesses more effectively but help define the future of secure, AI-enabled digital trust.

We list the best identity management solutions.

This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro

Categories: Technology

The evolution of smart data capture

Tue, 08/12/2025 - 01:53

The landscape of smart data capture software is undergoing a significant transformation, with advancements that can help businesses build long-term resilience against disruptions like trade tariffs, labor shortages, and volatile demand.

No longer confined to handheld computers and mobile devices, the technology is embracing a new batch of hybrid data capture methods that include fixed cameras, drones, and wearables.

If you aren’t familiar with smart data capture, it’s the ability to capture data intelligently from barcodes, text, IDs, and objects. It enables real-time decision-making, engagement, and workflow automation at scale across industries such as retail, supply chain, logistics, travel, and healthcare.
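Even at its simplest, capturing a barcode "intelligently" includes validating what was read. As an illustrative example (not tied to any product or vendor mentioned here), an EAN-13 barcode carries a check digit that can be verified in a few lines of Python:

```python
def ean13_valid(code: str) -> bool:
    """Validate an EAN-13 barcode: digits at even positions weigh 1, odd
    positions weigh 3, and the weighted sum must be divisible by 10."""
    if len(code) != 13 or not code.isdigit():
        return False
    digits = [int(c) for c in code]
    checksum = sum(d * (3 if i % 2 else 1) for i, d in enumerate(digits))
    return checksum % 10 == 0

print(ean13_valid("4006381333931"))  # True: a well-known valid example code
print(ean13_valid("4006381333932"))  # False: last digit corrupted
```

Checks like this let a scanner reject a misread instantly instead of feeding bad data into downstream systems.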

The advancements it’s currently experiencing are more than technological novelties; they are redefining how businesses operate, driving ROI, enhancing customer experience, and streamlining operational workflows. Let’s explore how:

More than just smartphones

Traditionally, smart data capture relied heavily on smartphones and handheld computers, devices that both captured data and facilitated user action. With advancements in technology, the device landscape is expanding. Wearables like smart glasses and headsets, fixed cameras, drones, and even robots are becoming more commonplace, each with its own value.

This diversification leads to the distinction of devices that purely ‘capture’ data versus those that can ‘act’ on it too. For example, stationary cameras or drones capture data from the real world and then feed it into a system of record to be aggregated with other data.

Other devices — often mobile or wearable — can capture data and empower users to act on that information instantly, such as a store associate who scans a shelf and can instantly be informed of a pricing error on a particular item. Depending on factors such as the frequency of data collected, these devices can allow enterprises to tailor a data capture strategy to their needs.

Practical innovations with real ROI

In a market saturated with emerging technologies, it's easy to get caught up in the hype of the next big thing. However, not all innovations are ready for prime time, and many fail to deliver a tangible return on investment, especially at scale. The key for businesses is to focus on practical, easy-to-implement solutions that enhance workflows rather than disrupt them by leveraging existing technologies and IT infrastructure.

An illustrative example of this evolution is the increasing use of fixed cameras in conjunction with mobile devices for shelf auditing and monitoring in retail environments. Retailers are deploying mobile devices and fixed cameras to monitor shelves in near real-time and identify out-of-stock items, pricing errors, and planogram discrepancies, freeing up store associates’ time and increasing revenue — game-changing capabilities in the current volatile trade environment, which triggers frequent price changes and inventory challenges.

This hybrid shelf management approach allows businesses to scale operations no matter the store format: retailers can easily pilot the solution using their existing mobile devices with minimal upfront investment and assess all the expected ROI and benefits before committing to full-scale implementation.

The combination also enables further operational efficiency, with fixed cameras providing continuous and fully automated shelf monitoring in high-footfall areas, while mobile devices can handle lower-frequency monitoring in less-frequented aisles.

This is how a leading European grocery chain increased revenue by 2% in just six months — an enormous uplift in a tight-margin vertical like grocery.

Multi-device and multi-signal systems

An important aspect of this data capture evolution is the seamless integration of all these various devices and technologies. User interfaces are being developed to facilitate multi-device interactions, ensuring that data captured by one system can be acted upon through another.

For example, fixed cameras might continuously monitor inventory levels, with alerts to replenish specific low-stock items sent directly to a worker's wearable device for immediate and hands-free action.

And speaking of hands-free operation: gesture recognition and voice input are also becoming increasingly important, especially for wearable devices lacking traditional touchscreens. Advancing these technologies would enable workers to interact with items naturally and efficiently.

Adaptive user interfaces also play a vital role, ensuring consistent experiences across different devices and form factors. Whether using a smartphone, tablet, or digital eyewear, the user interface should adapt to provide the necessary functionality without a steep learning curve; otherwise, it may negatively impact the adoption rate of the data capture solution.

Recognizing the benefits, a large US grocer implemented a pre-built adaptive UI to bring top-performing scanning capabilities to its existing apps across 100 stores in just 90 days.

The co-pilot system

As the volume of data increases, so does the potential for information overload. In some cases, systems can generate thousands of alerts daily, overwhelming staff and hindering productivity. To combat this, businesses are adopting so-called co-pilot systems — a combination of devices and advanced smart data capture that can guide workers to prioritize ROI-optimizing tasks.

This combination leverages machine learning to analyze sales numbers, inventory levels, and other critical metrics, providing frontline workers with actionable insights. By focusing on high-priority tasks, employees can work more efficiently without sifting through endless lists of alerts.
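In spirit, such a co-pilot reduces to scoring and ranking tasks. Below is a deliberately simplified Python sketch; the fields and the expected-impact formula are invented stand-ins for what would, in practice, be a trained model over sales and inventory data.

```python
def prioritize(alerts, top_n=5):
    """Rank restock alerts by a simple expected-revenue-impact score so
    frontline staff see the highest-ROI tasks first (weights illustrative)."""
    def score(alert):
        # impact grows with how much the gap costs and how long it has persisted
        return alert["est_lost_sales_per_hour"] * alert["hours_outstanding"]
    return sorted(alerts, key=score, reverse=True)[:top_n]

alerts = [
    {"item": "milk",    "est_lost_sales_per_hour": 12.0, "hours_outstanding": 3},
    {"item": "candles", "est_lost_sales_per_hour": 0.5,  "hours_outstanding": 10},
    {"item": "bread",   "est_lost_sales_per_hour": 8.0,  "hours_outstanding": 6},
]
print([a["item"] for a in prioritize(alerts)])  # bread (48) and milk (36) outrank candles (5)
```

The ranking, not the raw alert stream, is what reaches the worker's device.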

Preparing for the future

As the smart data capture landscape continues to evolve and disruption becomes the “new normal”, businesses must ensure their technology stacks are flexible, adaptable, and scalable.

Supporting various devices, integrating multiple data signals, and providing clear task prioritization are essential for staying competitive in an increasingly complex, changeable and data-driven market.

By embracing hybrid smart data capture device strategies, businesses can optimize processes, enhance user experiences, and make informed decisions based on real-time data.

The convergence of mobile devices, fixed cameras, wearables, drones, and advanced user interfaces represents not just an evolution in technology but a revolution in how businesses operate. And in a world where data is king, those who capture it effectively — and act on it intelligently — will lock in higher margins today and lead the way tomorrow.

We've listed the best ERP software.

This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro

Categories: Technology

I learned all about cheese with Gemini's Guided Learning feature, and it was so easy, I’m thinking of making my own cheese

Mon, 08/11/2025 - 22:00

Google Gemini introduced a new feature aimed at education called Guided Learning this month. The idea is to teach you something through question-centered conversation instead of a lecture.

When you ask it to teach you something, it breaks the topic down and starts asking you questions about it. Based on your answers, it explains more details and asks another question. The feature provides visuals, quizzes, and even embeds YouTube videos to help you absorb knowledge.

As a test, I asked Gemini's Socratic tutor to teach me all about cheese. It started by asking me about what I think is in cheese, clarifying my somewhat vague answer with more details, and then asking if I knew how those ingredients become cheese. Soon, I was in a full-blown cheese seminar. For every answer I gave, Gemini came back with more details or, in a gentle way, told me I was wrong.

The AI then got into cheese history. It framed the history as a story of traveling herders, clay pots, ancient salt, and Egyptian tombs with cheese residue. It showed a visual timeline and said, “Which of these surprises you most?” I said the tombs did, and it said, “Right? They found cheese in a tomb and it had survived.” Which is horrifying and also makes me respect cheese on a deeper level.

In about 15 minutes, I knew all about curds and whey, the history of a few regional cheese traditions, and even how to pick out the best examples of different cheeses. I could see photos in some cases and a video tour of a cellar full of expensive wheels of cheese in France. The AI quizzed me when I asked it to make sure I was getting it, and I scored a ten out of ten.

(Image credit: Gemini screenshots)

Cheesemonger AI

It didn’t feel like studying, exactly. More like falling into a conversation where the other person knows everything about dairy and is excited to bring you along for the ride. After learning about casein micelles, starter cultures, and cutting the curd, Gemini asked me if I wanted to learn how to make cheese.

I said sure, and it guided me through the process of making ricotta, including pictures to help show what it should look like at each step.

(Image credit: Gemini screenshots)

By the time I was done with that part of the conversation, I felt like I’d taken a mini‑course in cheesemaking. I'm not sure I am ready to fill an entire cheeseboard or age a wheel of gruyère in my basement.

Still, I think making ricotta or maybe paneer would be a fun activity in the next few weeks. And I can show off a mild, wobbly ball of dairy pride thanks to learning from questioning, and, as it were, being guided to an education.

You might also like
Categories: Technology

Baffled by ChatGPT and Copilot? It might not be your fault - report flags the key skills needed to get the most out of AI

Mon, 08/11/2025 - 20:03
  • Report claims AI adoption depends on critical human abilities
  • Ethics, adaptability, and audience-specific communication all named
  • The skills gap in AI workplaces is as much human as it is technical

As AI tools become more and more embedded in our everyday work, new research claims the challenge of not getting the best out of them may not lie solely with the technology.

A report from Multiverse has identified thirteen core human skillsets which could determine whether companies fully realize AI’s potential.

The study warns that without deliberate attention to these capabilities, investment in AI writer systems, LLM applications, and other AI tools could fall short of expectations.

Critical thinking under pressure

The Multiverse study draws from observation of AI users at varying experience levels, from beginners to experts, employing methods such as the Think Aloud Protocol Analysis.

Participants verbalised their thought processes while using AI to complete real-world tasks.

From this, researchers built a framework grouping the identified skills into four categories: cognitive skills, responsible AI skills, self-management, and communication skills.

Among the cognitive abilities, analytical reasoning, creativity, and systems thinking were found to be essential for evaluating AI outputs, pushing innovation, and predicting AI responses.

Responsible AI skills included ethics, such as spotting bias in outputs, and cultural sensitivity to address geographic or social context gaps.

Self-management covered adaptability, curiosity, detail orientation, and determination, traits that influence how people refine their AI interactions.

Communication skills included tailoring AI-generated outputs for audience expectations, engaging empathetically with AI as a thought partner, and exchanging feedback to improve performance.

Reports from academic institutions, including MIT, have raised concerns that reliance on generative AI can reduce critical thinking, a phenomenon linked to “cognitive offloading.”

This is the process where people delegate mental effort to machines, risking erosion of analytical habits.

While AI tools can process vast amounts of information at speed, the research suggests they cannot replace the nuanced reasoning and ethical judgement that humans contribute.

The Multiverse researchers note that companies focusing solely on technical training may overlook the “soft skills” required for effective collaboration with AI.

Leaders may assume their AI tool investments address a technology gap when in reality, they face a combined human-technology challenge.

The study refrains from claiming that AI inevitably weakens human cognition; instead, it argues the nature of cognitive work is shifting, with less emphasis on memorising facts and more on knowing how to access, interpret, and verify information.

You might also like
Categories: Technology

One of my favorite iPhone features arrives on the Mac with Tahoe – and I can’t stop using it

Mon, 08/11/2025 - 19:00

While the new ‘Liquid Glass’ look and a way more powerful Spotlight might be the leading features of macOS Tahoe 26, I’ve found that bringing over a much-loved iPhone feature has proven to be the highlight after weeks of testing.

Live Activities steal the show on the iPhone, thanks to their glanceability and effortless way of highlighting key info, whether it’s from a first or third-party app. Some of my favorites are:

  • Flighty displays flight tracking details in real-time, for myself, family, or friends
  • Airlines like United show my seat, a countdown for boarding, or even baggage claim
  • Rideshare apps tell you what kind of car your driver is arriving in
  • Apple Sports displays your favorite teams' live scores in real-time with the game

Now, all of this is arriving on the Mac – right at the top navigation bar, near the right-hand side. They appear when your iPhone is nearby, signed into the same Apple Account, and mirror the same Live Activities you’d see on your phone. It’s a simple but powerful addition.

Considering Apple brought iPhone Mirroring to the Mac in 2024, this 2025 follow-up isn’t surprising. But it’s exactly the kind of small feature that makes a big difference. I’ve loved being able to check a score, track a flight, or see my live position on a plane – without fishing for my phone.

(Image credit: Future/Jacob Krol)

I’ve used it plenty at my desk, but to me, it truly shines in Economy class. If you’ve ever tried balancing an iPhone and a MacBook Pro – or even a MacBook Air – on a tray table, you know the awkward overlap. I usually end up propping the iPhone against my screen, hanging it off the palm rest, or just tossing it in my lap. With Live Activities on the Mac, I can stick to one device and keep the tray table clutter-free.

With notifications already syncing and iPhone Mirroring having arrived last year, Live Activities were the missing piece. On macOS Tahoe, they sit neatly collapsed in the menu bar, just like the Dynamic Island on the iPhone, and you can click one to expand the full Live Activity. Another click quickly opens the app on your iPhone via the Mirroring app – it all works together pretty seamlessly.

(Image credit: Future/Jacob Krol)

You can also easily dismiss them, as I have found they automatically expand for major updates, saving screen real estate on your Mac. If you already have a Live Activity that you really enjoy on your iPhone, there’s really no extra work needed from the developer, as these will automatically repeat.

All in all, it’s a small but super helpful tool that really excels in cramped spaces. So, if you’ve ever struggled with the same balancing act as I have with a tray table, your iPhone, and a MacBook, know that relief is on the way.

It's arriving in the Fall (September or October) with the release of macOS Tahoe 26. If you want it sooner, the public beta of macOS Tahoe 26 is out now, but you'll need to be okay with some bugs and slowdowns.

You might also like
Categories: Technology

Brave or foolhardy? Huawei takes the fight to Nvidia CUDA by making its Ascend AI GPU software open source

Mon, 08/11/2025 - 17:42
  • Huawei makes its CANN AI GPU toolkit open source to challenge Nvidia’s proprietary CUDA platform
  • CUDA’s near 20-year dominance has locked developers into Nvidia’s hardware ecosystem exclusively
  • CANN provides multi-layer programming interfaces for AI applications on Huawei’s Ascend AI GPUs

Huawei has announced plans to make its CANN software toolkit for Ascend AI GPUs open source, a move aimed squarely at challenging Nvidia’s long-standing CUDA dominance.

CUDA, often described as a closed-off “moat” or “swamp,” has for years been viewed by some as a barrier for developers seeking cross-platform compatibility.

Its tight integration with Nvidia hardware has locked developers into a single vendor ecosystem for nearly two decades, with all efforts to bring CUDA functionality to other GPU architectures through translation layers blocked by the company.

Opening up CANN to developers

CANN, short for Compute Architecture for Neural Networks, is Huawei’s heterogeneous computing framework designed to help developers create AI applications for its Ascend AI GPUs.

The architecture offers multiple programming layers, giving developers options for building both high-level and performance-intensive applications.

In many ways, it is Huawei’s equivalent to CUDA, but the decision to open its source code signals an intent to grow an alternative ecosystem without the restrictions of a proprietary model.

Huawei has reportedly already begun discussions with major Chinese AI players, universities, research institutions, and business partners about contributing to an open-sourced Ascend development community.

This outreach could help accelerate the creation of optimized tools, libraries, and AI frameworks for Huawei’s GPUs, potentially making them more attractive to developers who currently rely on Nvidia hardware.

Huawei’s AI hardware performance has been improving steadily, with claims that certain Ascend chips can outperform Nvidia processors under specific conditions.

Reports such as CloudMatrix 384’s benchmark results against Nvidia running DeepSeek R1 suggest that Huawei’s performance trajectory is closing the gap.

However, raw performance alone will not guarantee developer migration without equivalent software stability and support.

While open-sourcing CANN could be exciting for developers, its ecosystem is still in its early stages and nowhere near as mature as CUDA, which has been refined for nearly 20 years.

Even with open-source status, adoption may depend on how well CANN supports existing AI frameworks, particularly for emerging workloads in large language models (LLM) and AI writer tools.

Huawei’s decision could have broader implications beyond developer convenience, as open-sourcing CANN aligns with China’s broader push for technological self-sufficiency in AI computing, reducing dependence on Western chipmakers.

In the current environment, where U.S. restrictions target Huawei’s hardware exports, building a robust domestic software stack for AI tools becomes as critical as improving chip performance.

If Huawei can successfully foster a vibrant open-source community around CANN, it could present the first serious alternative to CUDA in years.

Still, the challenge lies not just in code availability, but in building trust, documentation, and compatibility at the scale Nvidia has achieved.

Via Tom's Hardware

You might also like
Categories: Technology

4 things we learned from OpenAI’s GPT-5 Reddit AMA

Mon, 08/11/2025 - 17:00

OpenAI CEO Sam Altman and several other researchers and engineers came to Reddit the day after debuting the powerful new GPT-5 AI model for the time-honored tradition of an Ask Me Anything thread.

Though the discussion ranged over all kinds of technical and product elements, there were a few topics that stood out as particularly important to posters based on the frequency and passion with which they were discussed. Here are a few of the most notable things we learned from the OpenAI AMA.

Pining for GPT-4o

The biggest recurring theme in the AMA was a mournful wail from users who loved GPT-4o and felt personally attacked by its removal. That's not an exaggeration, as one user posted, “BRING BACK 4o GPT-5 is wearing the skin of my dead friend.” To which Altman replied, “what an…evocative image. ok we hear you on 4o, working on something now.”

This wasn’t just one isolated request, either. Another post asked to keep both GPT-4o and GPT-4.1 alongside GPT-5, arguing that the older models had distinct personalities and creative rhythms. Altman admitted they were “looking into this now.”

Most requests were a little more subdued, with one poster asking, “Why are we getting rid of the variants and 4o when we all have unique communication styles? Please bring them back!”

Altman’s answer was brief but direct in conceding the point. He wrote, “ok, we hear you all on 4o; thanks for the time to give us the feedback (and the passion!). we are going to bring it back for plus users, and will watch usage to determine how long to support it."

It is interesting that so many heavy users prefer the style of the older model over its objectively more capable successors.

Filtering history

Another big topic was ChatGPT's safety filter, both in its current form and before GPT-5, which many complained was overzealous. One user described being flagged for discussing historical topics: a response about Gauguin was flagged and deleted because the artist was a "sex pest," and the user's own clarifying question was itself flagged.

Altman’s answer was a mixture of agreement and reality check. “Yeah, we will continue to improve this,” he said. “It is a legit hard thing; the lines are often really quite blurry sometimes.” He stressed that OpenAI wants to allow “very wide latitude” and admitted the boundary between unsafe and safe content is far from perfect, adding that "people should of course not get banned for learning."

New tier

Another questioner zeroed in on a gap in OpenAI’s subscription model: "Are you guys planning to add another plan for solo power users that are not pros? 20$ plan offers too little for some, and the $200 tier is overkill."

Altman’s answer was succinct, simply saying, “Yes we will do something here.” No details, just a confirmation that the idea’s on the table. That brevity could mean anything from 'next week' to merely 'the discussion starts now.' But the pricing gap is a big deal for power users who find themselves constrained by the Plus tier but can’t justify enterprise pricing. If OpenAI does create an intermediate tier, it could reshape how dedicated individual users engage with the platform.

The future

At the end of the AMA, Altman shared some new information about the current and future state of ChatGPT and GPT-5. He started by admitting to some issues with the release, writing that "we expected some bumpiness as we roll out so many things at once. But it was a little more bumpy than we hoped for!"

That bumpiness ended up making GPT-5 seem less impressive than it should have.

"GPT-5 will seem smarter starting today," Altman wrote. "Yesterday, we had a sev [severity, meaning system issue] and the autoswitcher was out of commission for a chunk of the day, and the result was GPT-5 seemed way dumber."

He also promised more access for ChatGPT Plus users, with double the rate limits, as well as the upcoming return of GPT-4o, at least for those same subscribers. The AMA did paint a clearer picture of what OpenAI is willing to change in response to public pressure.

The return of GPT-4o for Plus users at least acknowledges that raw capability isn’t the only metric that matters. If users are this vocal about keeping an older model alive, future releases of GPT-5 and beyond may be designed with more deliberate flavors built in beyond just the personality types promised for GPT-5.

You might also like
Categories: Technology

MacBook screens can be broken with a simple greeting card, viral TikTok video warns – and Apple has explained the reason why

Mon, 08/11/2025 - 17:00
  • A TikTok user damaged their MacBook display in an unexpected way
  • The issue was caused by a piece of card placed under the lid
  • Even something as innocuous as this can break a laptop screen

For many MacBook owners, it’s a nightmare come true: you open the lid of your pricey laptop and switch it on, only to find the display is a mess, with black bars and glitchy colors everywhere you look. The screen has been ruined, and it’s going to cost a whole lot to put it right.

Worryingly, it’s actually a lot easier to experience this than you might expect: just one seemingly innocuous action can cause hundreds of dollars of damage.

That’s something TikTok user classicheidi found out the hard way. In a video uploaded to the social media platform, classicheidi explained that they had placed a piece of card on the keyboard of their MacBook Air, then closed the lid.

When they opened it again a while later, the screen was ruined.

A costly mistake

This is an unfortunate incident, but there’s a reason it happened. It’s not because the displays of Apple’s laptops (or those of any other manufacturer, for that matter) are weak or poorly made. But while they should certainly be treated with care, there’s another issue at play.

It’s what Apple describes in a support document as the “tight tolerances” of its laptops. Apple’s MacBooks are made to be as thin as possible, which means the gap between the keyboard and display is very small when the lid is closed.

Anything placed in that gap – even something as modest as a piece of card – can be pushed up against the display, with the resulting pressure leading to serious damage.

For that reason, Apple warns that “leaving any material on your display, keyboard, or palm rest might interfere with the display when it’s closed and cause damage to your display.” If you have a camera cover, a palm rest cover, or a keyboard cover, Apple says you should remove it before closing your laptop’s lid to avoid this kind of scenario – unfortunately, it’s something we've seen before.

If you want to sidestep the kind of outcome classicheidi suffered, it’s important to ensure there’s nothing between your laptop’s keyboard and screen when you close it. If there is, you might open it up to “the biggest jump scare of the century,” in classicheidi’s words.

You might also like
Categories: Technology

Fake TikTok shops found spreading malware to unsuspecting victims - here's how to stay safe

Mon, 08/11/2025 - 16:04
  • Fraudulent TikTok Shops driving victims into fake portals designed to steal cryptocurrency and data
  • Scammers mimic trusted seller profiles and lure shoppers with unrealistic discounts across popular platforms
  • SparkKitty malware secretly collects sensitive data from devices, enabling long-term unauthorized surveillance and control

Cybercriminals are now making use of TikTok Shops to spread malware and steal funds from unsuspecting young users of the platform.

The campaign, revealed by security experts at CTM360, mimics the profile of legitimate ecommerce sellers to build its credibility, often using AI-generated content.

In addition to TikTok, these fake shops can also be found on Facebook, where their modus operandi is to advertise massive price cuts to lure potential victims.

Exploiting brand trust for profit

The main aim of these malicious actors is not only to defraud users, mostly of cryptocurrency, but also to deliver malicious software and steal login details.

At the moment, TikTok Wholesale and Mall pages have been linked to over 10,000 such fraudulent URLs.

These URLs, which look like official platforms, offer “buy links” that redirect visitors to a criminal phishing portal.

Once users click the link and enter the portal, they will be made to pay a deposit into an online wallet or purchase a product – the online wallet is fake and the product does not exist.

Some operations take the deception further by posing as an affiliate management service, pushing malicious apps disguised as tools for sellers.

More than 5,000 app download sources have been uncovered, many using embedded links and QR codes to bypass traditional scrutiny.

One identified threat, known as SparkKitty, is capable of harvesting data from both Android and iOS devices.

It can enable long-term access to compromised devices, creating ongoing risk even after the initial infection.

The malware is often delivered through these fake affiliate applications, turning what appears to be a legitimate opportunity into a direct path for account takeover and identity theft.

Because cryptocurrency transactions are irreversible, victims have little recourse once funds are transferred.

A common thread in the campaign is the use of pressure tactics, with countdown timers or limited-time discounts designed to force quick decisions.

These tactics, while common in legitimate marketing, make it harder for users to pause and assess the authenticity of an offer.

Domain checks reveal many of the scam sites using inexpensive extensions such as .top, .shop, or .icu, which can be purchased and deployed rapidly.
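As a rough illustration of such a domain check, a watchlist of those extensions can be applied to any link before it is clicked. The URLs below are made up, and a real defense would rely on reputation feeds and full-domain analysis rather than the TLD alone:

```python
from urllib.parse import urlparse

# extensions noted in the CTM360 findings as cheap and rapidly deployable
SUSPECT_TLDS = {"top", "shop", "icu"}

def suspicious_tld(url: str) -> bool:
    """Flag URLs whose top-level domain is on the cheap-extension watchlist."""
    host = urlparse(url).hostname or ""
    return host.rsplit(".", 1)[-1].lower() in SUSPECT_TLDS

print(suspicious_tld("https://tiktok-wholesale-deals.top/buy"))  # True
print(suspicious_tld("https://www.tiktok.com/shop"))             # False
```

A flag here is a signal to slow down and verify, not proof of fraud; plenty of legitimate sites also use these extensions.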

How to stay safe
  • Make sure you check the website address carefully before entering your payment information. Every detail of the website should match the legitimate domain.
  • Ensure that you use secure HTTPS encryption
  • If the price cut feels too huge, follow your gut and stay away.
  • Do not allow a countdown timer to pressure you into making payment; this pressure is a common tactic my malicious actors
  • Always insist on the standard payment methods and avoid direct wire transfers or cryptocurrency, as these are harder to trace and often used in scams.
  • Install and maintain a trusted security suite that combines robust antivirus protection with real-time browsing safeguards to block malicious websites.
  • Configure your firewall to actively monitor and filter network traffic, preventing unauthorized access and blocking suspicious connections before they reach your device.
  • Pay close attention to alerts from reputable security programs, which can detect and warn you about known phishing sites or fraudulent activities in real time.
  • Remain cautious even when shopping on professional-looking platforms, as well-designed storefronts can still conceal sophisticated attempts at theft.
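A couple of the checks above can even be automated. Here's a minimal sketch in Python of a URL pre-check covering the HTTPS and suspicious-TLD advice; the `looks_suspicious` helper and its TLD list are illustrative, not an exhaustive scam detector:

```python
from urllib.parse import urlparse

# TLDs the researchers flagged as common among the scam sites
SUSPICIOUS_TLDS = {"top", "shop", "icu"}

def looks_suspicious(url: str) -> bool:
    """Return True if the URL fails basic safety checks:
    no HTTPS, or a TLD commonly used by throwaway scam domains."""
    parsed = urlparse(url)
    if parsed.scheme != "https":
        return True  # no secure connection
    tld = parsed.hostname.rsplit(".", 1)[-1] if parsed.hostname else ""
    return tld.lower() in SUSPICIOUS_TLDS

print(looks_suspicious("http://deals.example.top/checkout"))  # no HTTPS
print(looks_suspicious("https://bargains.example.icu"))       # flagged TLD
print(looks_suspicious("https://www.example.com"))            # passes
```

A check like this is only a first filter, of course: plenty of scam sites use HTTPS and mainstream domains, so the other precautions above still apply.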
Categories: Technology

Roblox is sharing its AI tool to fight toxic game chats – here’s why that matters for kids

Mon, 08/11/2025 - 16:00

Online game chats are notorious for vulgar, offensive, and even criminal behavior. Even if toxic interactions are only a tiny percentage, across many millions of hours of chat they accumulate into a real problem for players and video game companies, especially when kids are involved. Roblox has a lot of experience dealing with that aspect of gaming, and has used AI to build Sentinel, a whole system for enforcing safety rules among its more than 100 million mostly young daily users. Now it's open-sourcing Sentinel, offering its capacity for identifying grooming and other dangerous behavior in chat before it escalates, for free, to any platform.

This isn’t just a profanity filter that gets triggered when someone types a curse word. Roblox has always had that. Sentinel is built to watch patterns over time. It can track how conversations evolve, looking for subtle signs that someone is trying to build trust with a kid in potentially problematic ways. For instance, it might flag a long conversation where an adult-sounding player is just a little too interested in a kid’s personal life.
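Roblox hasn't published Sentinel's internals here, but the core idea – scoring patterns across a rolling window of messages rather than reacting to single keywords – can be sketched roughly like this. The risk phrases, window size, and threshold below are purely illustrative, not Sentinel's actual signals:

```python
from collections import deque

# Illustrative phrases that, accumulating across a conversation,
# may indicate trust-building rather than a one-off remark.
RISK_PHRASES = ["how old are you", "don't tell your parents", "keep this secret"]

class ConversationMonitor:
    """Flags a chat when risk signals accumulate over a rolling
    window of messages, instead of tripping on a single keyword."""
    def __init__(self, window: int = 50, threshold: int = 3):
        self.recent = deque(maxlen=window)  # rolling message window
        self.threshold = threshold

    def observe(self, message: str) -> bool:
        self.recent.append(message.lower())
        hits = sum(
            1 for msg in self.recent
            for phrase in RISK_PHRASES if phrase in msg
        )
        return hits >= self.threshold  # escalate to human review

monitor = ConversationMonitor()
chat = [
    "nice build!",
    "how old are you?",
    "cool, don't tell your parents we talked",
    "keep this secret ok?",
]
flags = [monitor.observe(m) for m in chat]
print(flags)  # [False, False, False, True] – trips only once signals accumulate
```

Note how no single message triggers the flag; it's the pattern across the conversation that does, which is the difference between this approach and a plain profanity filter.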

Sentinel helped Roblox moderators file about 1,200 reports to the National Center for Missing and Exploited Children in just the first half of this year. As someone who grew up in the Wild West of early internet chatrooms, where “moderation” usually meant suspecting that people who used correct spelling and grammar were adults, I can’t overstate how much of a leap forward that feels.

Open-sourcing Sentinel means any game or online platform, whether as big as Minecraft or as small as an underground indie hit, can adapt Sentinel and use it to make their own communities safer. It’s an unusually generous move, albeit one with obvious public relations and potential long-term commercial benefits for the company.

For kids (and their adult guardians), the benefits are obvious. If more games start running Sentinel-style checks, the odds of predators slipping through the cracks go down. Parents get another invisible safety net they didn’t have to set up themselves. And the kids get to focus on playing rather than navigating the online equivalent of a dark alley.

For video games as a whole, it’s a chance to raise the baseline of safety. Imagine if every major game, from the biggest esports titles to the smallest cozy simulators, had access to the same kind of early-warning system. It wouldn’t eliminate the problem, but it could make bad behavior a lot harder to hide.

AI for online safety

Of course, nothing with “AI” in the description is without its complications. The most obvious one is privacy. This kind of tool works by scanning what people are saying to each other, in real time, looking for red flags. Roblox says it uses one-minute snapshots of chat and keeps a human review process for anything flagged. But you can’t really get around the fact that this is surveillance, even if it’s well-intentioned. And when you open-source a tool like this, you’re not just giving the good guys a copy; you also make it easier for bad actors to see how you're stopping them and come up with ways around the system.

Then there’s the problem of language itself. People change how they talk all the time, especially online. Slang shifts, in-jokes mutate, and new apps create new shorthand. A system trained to catch grooming attempts in 2024 might miss the ones happening in 2026. Roblox updates Sentinel regularly, both with AI training and human review, but smaller platforms might not have the resources to keep up with what's happening in their chats.

And while no sane person is against stopping child predators or jerks deliberately trying to upset children, AI tools like this can be abused. If certain political talk, controversial opinions, or simply complaints about the game are added to the filter list, there's little players can do about it. Roblox and any companies using Sentinel will need to be transparent, not just with the code, but also with how it's being deployed and what the data it collects will be used for.

It's also important to consider the context of Roblox's decision. The company is facing lawsuits over what's happened with children using the platform. One lawsuit alleges a 13‑year‑old was trafficked after meeting a predator on the platform. Sentinel isn't perfect, and companies using it could still face legal problems. Ideally, it would serve as a component of online safety setups that include things like better user education and parental controls. AI can't replace all safety programs.

Despite the very real problems of deploying AI to help with online safety, I think open-sourcing Sentinel is one of the rare cases where the upside of using AI is both immediate and tangible. I’ve written enough about algorithms making people angry, confused, or broke to appreciate when one is actually pointed toward making people safer. And making it open-source can help make more online spaces safer.

I don’t think Sentinel will stop every predator, and I don’t think it should be a replacement for good parenting, better human moderation, and educating kids about how to be safe when playing online. But as a subtle extra line of defense, Sentinel has a part to play in building better online experiences for kids.

Categories: Technology

I’ll upgrade my M1 MacBook Pro for the first time in years if this rumor is true – and it might be the last MacBook I buy this decade

Mon, 08/11/2025 - 16:00

How often do you upgrade your MacBook? I’m willing to bet it’s not very often, and certainly not every year. If so, that’s great news for you, but perhaps not so pleasing for Apple, which would rather you stumped up for one of the best MacBooks as often as possible. Yet is there really a reason to upgrade if your laptop does everything you need for years at a time?

Take me, for example. I’ve had a MacBook Pro with M1 Pro chip since 2022, and it’s served me superbly well in that time. It handles all my work without a hitch and gives me strong gaming performance for the titles I play. Even Cyberpunk 2077 performs impressively well if I turn frame generation on, and I’m happy to do that since it boosts the frame rates from my integrated laptop chip – which is several generations out of date – up to the mid-70s.

That all means that over the past few years, I’ve looked at advances in the MacBook Pro and decided to take a pass. New chips have been the only major changes of note, and with no big design adjustments or feature improvements to tempt me – and my M1 Pro chip performing so consistently – there’s been no need to rock the boat.

However, I’m starting to get the feeling that this situation is not going to last. Judging by the latest rumors, things could change in a big way in the next year or two, and it might be harder than ever for me to resist the lure of a new MacBook Pro. The good news, though, is that this step up could last me well into the next decade.

The OLED revolution

(Image credit: Apple)

That idea centers on Apple's M6 chip, which is expected to land in the MacBook Pro in late 2026 or early 2027. According to Bloomberg journalist Mark Gurman's latest Power On newsletter, that model will come with an OLED display as well as the new chip.

There, Gurman says that the upcoming M6 MacBook Pro “represents enough of a change to finally move the needle” in his opinion, bringing with it a new chip, an improved screen, plus a thinner, redesigned chassis for the first time in several years.

Gurman is not the only person who could be swayed by this upcoming Mac: it’s the kind of upgrade that might convince me to loosen the purse strings as well. After all, by the time the M6 model launches, my M1 Pro laptop will be five generations out of date and might start showing its age a little more. It’s still going strong for now, but that won’t be the case forever.

But the bigger change will be the OLED display. This has been rumored for years, but Apple’s obsessive perfectionism has meant we still haven’t seen it in action. When it finally arrives, though, Apple’s gaming gains could finally be married up with the kind of visual output they deserve. The question of whether MacBooks are actually gaming machines has been discussed much over the last few years, but adding an OLED display into the mix would surely settle the question in Apple’s favor once and for all.

What does the future hold?

(Image credit: Future)

But the fact that it would take an upgrade as momentous as this to convince me to get a new MacBook raises another question: what happens after the M6 MacBook Pro has been and gone?

MacBook upgrades aren’t usually as feature-packed as the one we’re expecting when the M6 chip and OLED display come around. The M4 MacBook Pro, for example, offered a new chip, added Center Stage to the front-facing camera, brought Thunderbolt 5 connectivity to the M4 Pro and M4 Max chips, added a nano-texture coating option for the display… and not a whole lot else. Those changes are fine, but they’re not groundbreaking.

Apple has, in some ways, created a problem for itself: its chips are now so performant that they can last for generations, dissuading people from upgrading. Contrast that to the bad old Intel Mac days, when the chips were so underpowered that many people felt forced into expensive annual upgrades, and it’s clear that Apple users are in a better spot than ever.

These days, Apple silicon chips have a lot more longevity, which means it’s harder for Apple to persuade its users to buy new MacBooks on the regular. My hope, at least, is this means Apple will bring more significant new features in the coming years in a bid to tempt upgraders.

But even if it doesn’t, just having a chip that lasts years without faltering is a win for Apple fans, and my M1 Pro is a testament to that. If I upgrade to the M6 MacBook Pro and its OLED display, I’m hoping the improvements it brings last me half a decade or more, just as my long-serving M1 Pro chip has done before it.

Categories: Technology

This Meta prototype is a seriously upgraded Meta Quest 3 – and you can try it for yourself

Mon, 08/11/2025 - 15:00
  • Meta has two new VR headsets you can try
  • They're prototypes that aren't usually accessible to the public
  • You'll have to attend SIGGRAPH 2025 to give them a whirl

Every so often, Meta will showcase some of its prototype VR headsets – models which aren’t for public release like its fully fledged Meta Quest 3, but which let its researchers test attributes pushed far beyond current commercial headset limits. Like the Starburst headset, which offered a peak brightness of 20,000 nits.

Tiramisu and Boba 3 – two more of its prototypes – are more concerned with offering “retinal resolution” and an extremely wide field of view than with boasting incredible brightness, but as with Starburst, Meta is giving folks the chance to demo these usually lab-exclusive headsets.

That is, if you happen to be attending SIGGRAPH 2025 in Vancouver.

(Image credit: Meta)

I’ve been to SIGGRAPH previously, and it’s full of futuristic XR tech and demos that companies like Meta and its Reality Labs have been cooking up.

Though usually the prototypes look just like Tiramisu – that is to say, a little impractical.

Tiramisu does at least seem to be a headset you can wear normally, even if it does look like a Meta Quest 2 that has been comically stretched – Starburst, for example, had to be suspended from a metal frame as it was far too heavy to wear.

But Tiramisu doesn’t look like the most practical model. The trade-off is that Meta can outfit the headset with µOLED displays and other tech like custom lenses to deliver high contrast and resolution – 3x and 3.6x, respectively, what the Meta Quest 3 offers.

As a result, Tiramisu is the closest Meta has got to passing the “visual Turing test”: virtual visuals that are indistinguishable from real ones.

(Image credit: Meta)

Boba 3, on the other hand, looks like a headset you could buy tomorrow, and the way Meta talks about it, it does feel like something inspired by it could arrive at some point in the future.

That’s because it looks surprisingly compact – apparently it weighs just 660g, a little less than a Quest 3 with Elite Strap at 698g. It also has a 4K by 4K resolution, and – the reason this headset is special – it boasts a horizontal field of view of 180° and a vertical field of view of 120°.

That’s significantly more than the 110° and 96°, respectively, offered by the Meta Quest 3, and while the Quest 3 covers about 46% of a person’s field of view, Boba 3 captures about 90%.

The only issue is Boba 3 does require a “top-of-the-line GPU and PC system”, according to Display Systems Research Optical Scientist Yang Zhao. That’s because it needs to fill in the extra space the larger field of view creates, leading to higher compute requirements.
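A rough back-of-the-envelope calculation shows why the compute bill climbs. Assuming the quoted 4K-by-4K figure is per eye, and taking the commonly cited Quest 3 per-eye panel resolution of 2064 × 2208 (neither figure comes from Meta's announcement, so treat both as assumptions):

```python
# Per-eye pixel counts: Quest 3 vs the Boba 3 prototype
quest3_pixels = 2064 * 2208  # commonly cited Quest 3 per-eye resolution
boba3_pixels = 4096 * 4096   # "4K by 4K", assumed per eye

# Every frame, the GPU must shade roughly this many times more pixels
ratio = boba3_pixels / quest3_pixels
print(f"{ratio:.1f}x more pixels per eye per frame")  # roughly 3.7x
```

And that simplification ignores lens distortion and foveated rendering, both of which change the real workload – but it gives a sense of why a top-of-the-line GPU is needed.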

Though Zhao did note that Boba 3 is “something that we wanted to send out into the world as soon as possible”, and it does resemble goggles in a way – the design direction Meta’s next headset is said to be taking.

So we’ll have to keep our eyes peeled to see what Meta launches next, but while only a few lucky folks will get to try Boba 3 at SIGGRAPH, I’m hoping many more of us will get to experience the next-gen VR headsets it inspires.

Categories: Technology

MRI scans, X-rays and more leaked online in major breach - over a million healthcare devices affected, here's what we know

Mon, 08/11/2025 - 14:27
  • Modat found more than 1.2 million misconfigured devices leaking info
  • This includes MRI scans, X-rays, and other sensitive files, together with patient contact data
  • The healthcare industry needs a proactive approach to cybersecurity, researchers warn

Researchers have warned that over a million misconfigured, internet-connected healthcare devices are currently leaking the data they generate online - putting millions of people at risk of identity theft, phishing, wire fraud, and more.

Modat recently scanned the internet in search of misconfigured, non-password-protected devices and their data, and by using the tag ‘HEALTHCARE’ it found more than 1.2 million devices that were generating, and leaking, confidential medical images - including MRI scans, X-rays, and even blood work - from hospitals all over the world.

“Examples of data being leaked in this way include brain scans and X-rays, stored alongside protected health information and personally identifiable information of the patient, potentially representing both a breach of patient’s confidentiality and privacy,” the researchers explained.

Weak passwords and other woes

In some cases, the researchers found the information unlocked and available to anyone who knows where to look - and in other cases, the data was protected with passwords so weak and predictable that it posed no challenge to break in and grab it.

“In the worst-case scenario, leaked sensitive medical information could leave unsuspecting victims open to fraud or even blackmail over a confidential medical condition,” they added.

In theory, a threat actor could learn of a patient’s condition before the patient does. Armed with names and contact details, they could reach out to the patient and threaten to release the information to friends and family unless they pay a ransom.

Alternatively, they could impersonate the doctor or the hospital and send phishing emails inviting the victim to “view sensitive files” which would just redirect them to download malware or share login credentials.

The majority of the misconfigured devices are located in the United States (174K+), with South Africa a close second (172K+). Australia (111K+), Brazil (82K+), and Germany (81K+) round out the top five.

For Modat, a proactive security culture “beats a reactive response”.

“This research reinforces the urgent need for comprehensive asset visibility, robust vulnerability management, and a proactive approach to securing every internet-connected device in healthcare environments, ensuring that sensitive patient data remains protected from unauthorized access and potential exploitation," commented Errol Weiss, Chief Security Officer at Health-ISAC.

Categories: Technology
