Imagine a health plan member interacting with their insurer’s virtual assistant, typing, “I just lost my mom and feel overwhelmed.” A conventional chatbot might respond with a perfunctory “I’m sorry to hear that” and send a list of FAQs. This may be why, in research conducted before 2020, 59% of chatbot users felt that “the technologies have misunderstood the nuances of human dialogue.”
In contrast, an AI agent can pause, offer empathetic condolences, gently guide the member to relevant resources, and even help schedule an appointment with their doctor. This empathy, paired with personalization, drives better outcomes.
When people feel understood, they’re more likely to engage, follow through, and trust the system guiding them. In regulated industries that handle sensitive topics, simple task automation often fails because users abandon engagements that feel rigid, incompetent, or blind to their individual circumstances.
AI agents can listen, understand, and respond with compassion. This combination of contextual awareness and sentiment‑driven response is more than just a nice‑to‑have add-on—it’s foundational for building trust, maintaining engagement, and ensuring members navigating difficult moments get the personalized support they need.
Beyond Automation: Why Empathy Matters in Complex Conversations
Traditional automation excels at straightforward, rule‑based tasks but struggles when conversations turn sensitive. AI agents, by contrast, can detect emotional cues—analyzing tone, punctuation, word choice, conversation history, and more—and deliver supportive, context‑appropriate guidance.
This shift from transactional to relational interactions matters in regulated industries, where people may need help navigating housing assistance, substance-use treatment, or reproductive health concerns.
AI agents that are context-aware and emotionally intelligent can support these conversations by remaining neutral, non‑judgmental, and attuned to the user’s needs.
They also offer a level of accuracy and consistency that’s hard to match—helping ensure members receive timely, personalized guidance and reliable access to resources, which could lead to better, more trusted outcomes.
The Technology Under the Hood
Recent advances in large language models (LLMs) and transformer architectures (GPT‑style models) have been pivotal to enabling more natural, emotionally aware conversations between AI agents and users. Unlike early sentiment analysis tools that only classified text as positive or negative, modern LLMs predict word sequences across entire dialogues, effectively learning the subtleties of human expression.
Consider a scenario where a user types, “I just got laid off and need to talk to someone about my coverage.” An early-generation chatbot might respond with “I can help you with your benefits,” ignoring the user’s distress.
Today’s emotionally intelligent AI agent first acknowledges the emotional weight: “I’m sorry to hear that—losing a job can be really tough.” It then transitions into assistance: “Let’s review your coverage options together, and I can help you schedule a call if you'd like to speak with someone directly."
These advances bring two key strengths. First, contextual awareness means AI agents can track conversation history—remembering what a user mentioned in an earlier exchange and following up appropriately.
Second, built‑in sentiment sensitivity allows these models to move beyond simple positive versus negative tagging. By learning emotional patterns from real‑world conversations, these AI agents can recognize shifts in tone and tailor responses to match the user’s emotional state.
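To make the idea concrete, here is a minimal sketch of acknowledgment-before-assistance routing. The cue lexicon, reply templates, and function names are illustrative assumptions, not any vendor's actual implementation; a production system would use a trained sentiment model rather than keyword matching.

```python
# Minimal sketch of acknowledgment-before-assistance routing.
# Cue words and reply wording are illustrative placeholders only.

DISTRESS_CUES = {"lost", "died", "laid off", "overwhelmed", "scared"}

def detect_distress(message: str) -> bool:
    """Crude lexical check standing in for a trained sentiment model."""
    text = message.lower()
    return any(cue in text for cue in DISTRESS_CUES)

def respond(message: str, history: list[str]) -> str:
    history.append(message)  # retained so follow-ups can reference earlier turns
    if detect_distress(message):
        return ("I'm sorry to hear that -- that sounds really difficult. "
                "When you're ready, I can walk you through your coverage options.")
    return "Sure -- let's review your coverage options together."

history: list[str] = []
print(respond("I just got laid off and need to talk to someone about my coverage.", history))
```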
Ethically responsible online platforms embed a robust framework of guardrails to ensure safe, compliant, and trustworthy AI interactions. In regulated environments, this includes proactive content filtering, privacy protections, and strict boundaries that prevent AI from offering unauthorized advice.
Sensitive topics are handled with predefined responses and escalated to human professionals when needed. These safeguards mitigate risk, reinforce user trust, and ensure automation remains accountable, ethical, and aligned with regulatory standards.
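As a rough illustration, the sketch below shows how such a guardrail layer might route sensitive topics to pre-approved responses and flag them for human escalation. The topic lists, keywords, and wording are hypothetical examples, not a real platform's rules.

```python
# Sketch of a guardrail layer: sensitive topics get a pre-approved
# response plus a human-escalation flag. All content here is hypothetical.

PREDEFINED_RESPONSES = {
    "self_harm": "You're not alone. I can connect you with a trained counselor right now.",
    "medical_advice": "I can't advise on treatment, but I can help you reach a licensed clinician.",
}

SENSITIVE_KEYWORDS = {
    "self_harm": ("hurt myself", "end my life"),
    "medical_advice": ("what dosage", "should i take"),
}

def guardrail(message: str) -> tuple[str | None, bool]:
    text = message.lower()
    for topic, keywords in SENSITIVE_KEYWORDS.items():
        if any(k in text for k in keywords):
            # Serve the approved script and signal a handoff to a human professional.
            return PREDEFINED_RESPONSES[topic], True
    return None, False  # no guardrail hit; normal dialogue continues

reply, escalate = guardrail("What dosage should I take for this?")
print(reply, "| escalate:", escalate)
```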
Navigating Challenges in Regulated Environments
For people to trust AI in regulated sectors, AI must do more than sound empathetic. It must be transparent, respect user boundaries, and know when to escalate to live experts. Robust safety layers mitigate risk and reinforce trust.
Empathy Subjectivity
Tone, cultural norms, and even punctuation can shift perception. Robust testing across demographics, languages, and use cases is critical. When agents detect confusion or frustration, escalation paths to live agents must be seamless, ensuring swift resolution and access to the appropriate level of human support when automated responses may fall short.
Regulatory Compliance and Transparency
Industries under strict oversight cannot allow hallucinations or unauthorized advice. Platforms must enforce transparent disclosures—ensuring virtual agents identify themselves as non-human—and embed compliance‑driven guardrails that block unapproved recommendations. Redirects to human experts should be fully logged, auditable, and aligned with applicable frameworks.
Guardrail Management
Guardrails must filter hate speech or explicit content while distinguishing between abusive language and expressions of frustration. When users resort to mild profanity to convey emotional distress, AI agents should recognize the intent without mirroring the language—responding appropriately and remaining within company guidelines and industry regulations.
Also, crisis‑intervention messaging—responding to instances of self‑harm, domestic violence, or substance abuse—must be flexible enough for organizations to tailor responses to their communities, connect people with local resources, and deliver support that is both empathetic and compliant with regulatory standards.
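A minimal sketch of this kind of triage, assuming illustrative wordlists and a simple audit log; in practice both would be organization-specific and the crisis resources tailored to the community being served.

```python
import json
from datetime import datetime, timezone

# Toy triage: distinguish frustration (mild profanity) from a crisis signal,
# and log every escalation so redirects stay auditable. Wordlists, resource
# names, and the log format are invented for illustration.

MILD_PROFANITY = {"damn", "hell"}
CRISIS_PHRASES = {"hurt myself": "local_crisis_line", "can't go on": "local_crisis_line"}

def triage(message: str, audit_log: list[dict]) -> str:
    text = message.lower()
    for phrase, resource in CRISIS_PHRASES.items():
        if phrase in text:
            audit_log.append({
                "ts": datetime.now(timezone.utc).isoformat(),
                "action": "escalate_crisis",
                "resource": resource,  # tailored per community and regulator
            })
            return "escalate"
    if any(word in MILD_PROFANITY for word in text.split()):
        return "frustrated"  # acknowledge the feeling, don't mirror the language
    return "normal"

log: list[dict] = []
print(triage("I can't go on like this", log))
print(json.dumps(log, indent=2))
```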
Empathy as a Competitive Advantage
As regulated industries embrace AI agents, the conversation is shifting from evaluating their potential to implementing them at scale. Tomorrow’s leaders won’t just pilot emotion‑aware agents but embed empathy into every customer journey, from onboarding to crisis support.
By committing to this ongoing evolution, businesses can turn compliance requirements into opportunities for deeper connection and redefine what it means to serve customers in complex, regulated environments.
Regulated AI must engineer empathy in every interaction. When systems understand the emotional context (not just data points), they become partners rather than tools. But without vertical specialization and real-time guardrails, even the most well-intentioned AI agents can misstep.
The future belongs to agentic, emotionally intelligent platforms that can adapt on the fly, safeguard compliance, and lead with compassion when it matters most. Empathy, when operationalized safely, becomes more than a UX goal—it becomes a business advantage.
This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro
Vacuum cleaners divide opinion more than you might expect, and the brand that people seem to feel most strongly about is Dyson. Behind every diehard Dyson fan there are 10 more people ready to eagerly proclaim that they're the worst vacuums in the world.
At the weekend, designer Mike Smith declared on X that Dyson vacuums were "not for serious vacuumers" and the ensuing thread went viral, with over 1,000 people piling in to air their vacuum views.
"My hot take is that Dyson vacuums are not for serious vacuumers. Battery is garbage, filter is garbage. Canister too small. Absolute joke of a cleaning tool." (August 10, 2025)
I manage the vacuum cleaner content for TechRadar, which includes reviewing vacs from many different brands and putting together our official best vacuum cleaner ranking. All of that means I spend far more time than the average person thinking about vacuum cleaners.
I'm neither wildly pro- nor anti-Dyson, and this discussion didn't sway me any further in either direction. What it did do is make me even more confident in my long-held belief that what most people actually have a problem with is not Dyson vacuums, but cordless stick vacuums in general.
Cordless stick vacuums are not the same as traditional upright vacuums or canister vacs. In some ways, they're worse. Providing strong suction requires a lot of power, and the bigger the battery the heavier the vacuum – so brands are constantly trying to balance whether to provide customers with longer runtimes or a lighter build.
A bigger dust cup means a vacuum that's bulkier and heavier, so there's another trade-off there in terms of how often you have to empty it. They also seem to be an inherently less robust type of cleaner – cordless stick vacs are expected to have a far shorter overall lifespan than other styles of vacuum.
In short, if you choose a cordless stick vacuum, you should expect limited runtimes on higher suction modes, canisters that need emptying regularly, and for it not to last forever. For those compromises, you get something you don't need to plug into the wall, and which you can easily use to vacuum up the stairs – or even on the ceiling – if you want to.
Of course, some cordless vacs perform much better than others, but broadly speaking you should expect those pros and cons to be true whatever model or brand you go for. Dyson stick vacs might not be for "serious" vacuuming, but boy are they good for convenient, comfortable vacuuming.
(Of course, the other element when it comes to Dyson is the price. I get into this more in my article exploring whether Dyson vacuums are worth it, and I've also written about my experience of Shark vs Dyson vacuums, if you're interested in that comparison specifically.)
In the thread, the name that crops up again and again from the opposing chorus is Miele. That brand is synonymous with canister vacuums, so it's not a direct comparison. One of the very best vacuums I've used in terms of outright suction power remains the 25+ year-old upright that used to belong to my Nana and now lives in my parents' house. But it weighs a ton and takes up a load of space, so when it comes to cleaning my own flat, I'd reach for a Dyson (or similar) every time.
Artificial Intelligence (AI) is rapidly reshaping the landscape of fraud prevention, creating new opportunities for defense as well as new avenues for deception.
Across industries, AI has become a double-edged sword. On one hand, it enables more sophisticated fraud detection, but on the other, it is being weaponized by threat actors to exploit controls, create synthetic identities and launch hyper-realistic attacks.
Fraud prevention is vital in sectors handling high volumes of sensitive transactions and digital identities. In financial services, for example, it's not just about protecting capital: regulatory compliance and customer trust are at stake.
Similar cybersecurity pressures are growing in telecoms and tech industries like SaaS, ecommerce and cloud infrastructure, where threats like SIM swapping, API abuse and synthetic users can cause serious disruption.
Fraud has already shifted from a risk to a core business challenge: 58 per cent of key decision-makers in large UK businesses now view it as a ‘serious threat’, according to a survey conducted in 2024.
The rise of synthetic threats
Synthetic fraud refers to attacks that leverage fabricated data, AI-generated content or manipulated digital identities. These aren’t new concepts, but the capability and accessibility of generative AI tools have dramatically lowered the barrier to entry.
A major threat is the creation of synthetic identities: combinations of real and fictitious information used to open accounts, bypass Know-Your-Customer (KYC) checks or access services.
Deepfakes are also being used to impersonate executives during video calls or in phishing attempts. One recent example involved attackers using AI to mimic a CEO’s voice and authorize a fraudulent transfer. These tactics are difficult to detect in fast-moving digital environments without advanced, real-time verification methods.
Data silos only exacerbate the problem. In many tech organizations, different departments rely on disconnected tools or platforms. One team may use AI for authentication while another still relies on legacy systems, and it is these blind spots which are easily exploited by AI-driven fraud.
AI as a defense
While AI enables fraud, it also offers powerful tools for defense if implemented strategically. At its best, AI can process vast volumes of data in real time, detect suspicious patterns and adapt as threats evolve. But this depends on effective integration, governance and oversight.
One common weakness lies in fragmented systems. Fraud prevention efforts often operate in silos across compliance, cybersecurity and customer teams. To build true resilience, organizations must align AI strategies across departments. Shared data lakes, or secure APIs, can enable integrated models with a holistic view of user behavior.
Synthetic data, often associated with fraud, can also play a role in defense. Organizations can use anonymized, realistic data to simulate rare fraud scenarios and train models without compromising customer privacy. This approach helps test defenses against edge cases not found in historical data.
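As a simple illustration, the sketch below generates synthetic transactions mimicking a rare "card-testing" fraud pattern for model training. The field names, distributions, and label scheme are invented for this example, not taken from any real fraud dataset.

```python
import random

# Sketch: fabricate anonymized transactions that mimic rapid, small
# "card-testing" charges, a rare pattern worth augmenting training data with.

def synthetic_card_testing(n_events: int, start_ts: float = 0.0) -> list[dict]:
    events, ts = [], start_ts
    for _ in range(n_events):
        ts += random.uniform(1, 10)  # seconds apart: unusually rapid activity
        events.append({
            "ts": ts,
            "amount": round(random.uniform(0.5, 2.0), 2),  # tiny probe amounts
            "merchant_id": f"m{random.randint(1, 5)}",     # few repeated merchants
            "label": "fraud",
        })
    return events

train_batch = synthetic_card_testing(100)
print(train_batch[0])
```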
Fraud systems must also be adaptive. Static rules and rarely updated models can’t keep pace with AI-powered fraud - real-time, continuously learning systems are now essential. Many companies are adopting behavioral biometrics, where AI monitors how users interact with devices, such as typing rhythm or mouse movement, to detect anomalies, even when credentials appear valid.
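A toy version of the idea, assuming keystroke intervals as the only feature and a simple z-score threshold; real deployments use far richer behavioral models spanning mouse dynamics, device posture, and more.

```python
from statistics import mean, stdev

# Sketch of behavioral-biometric anomaly scoring: compare a session's
# keystroke cadence against the user's enrolled baseline via a z-score.

def keystroke_anomaly(baseline_intervals: list[float],
                      session_intervals: list[float],
                      threshold: float = 3.0) -> bool:
    mu, sigma = mean(baseline_intervals), stdev(baseline_intervals)
    z = abs(mean(session_intervals) - mu) / sigma if sigma else 0.0
    return z > threshold  # True -> step-up checks even if credentials look valid

baseline = [0.18, 0.22, 0.20, 0.19, 0.21, 0.23]  # seconds between keystrokes
suspect = [0.05, 0.06, 0.05, 0.06]               # machine-like cadence
print(keystroke_anomaly(baseline, suspect))
```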
Explainability is another cornerstone of responsible AI use: it is essential to understand why a system has flagged or blocked activity. Explainable AI (XAI) frameworks help make decisions transparent, supporting trust and regulatory compliance and ensuring AI is not just effective, but also accountable.
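In the simplest case, explainability can mean surfacing per-feature contributions to a risk score. The sketch below does this for an invented linear model; the weights and feature names are placeholders, not output from a real XAI framework.

```python
# Toy explainable scoring: a linear model whose per-feature contributions
# can be shown to an analyst or auditor when activity is flagged.

WEIGHTS = {"new_device": 1.2, "foreign_ip": 0.8, "velocity": 2.0}

def score_with_explanation(features: dict[str, float]):
    contributions = {k: WEIGHTS[k] * v for k, v in features.items()}
    total = sum(contributions.values())
    return total, sorted(contributions.items(), key=lambda kv: -kv[1])

total, reasons = score_with_explanation({"new_device": 1, "foreign_ip": 0, "velocity": 3})
print(f"risk={total:.1f}")
for feature, contrib in reasons:
    print(f"  {feature}: +{contrib:.1f}")  # human-readable rationale per feature
```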
Industry collaboration
AI-enhanced fraud doesn’t respect organizational boundaries, and as a result, cross-industry collaboration is becoming increasingly important. While sectors like financial services have long benefited from information-sharing frameworks like ISACs (Information Sharing and Analysis Centers), similar initiatives are emerging in the broader tech ecosystem.
Cloud providers are beginning to share indicators of compromised credentials or coordinated malicious activity with clients. SaaS and cybersecurity vendors are also forming consortiums and joint research initiatives to accelerate detection and improve response times across the board.
Despite its power, AI is not a silver bullet and organizations which rely solely on automation risk missing subtle or novel fraud techniques. Effective fraud strategies should include regular model audits, scenario testing and red-teaming exercises (where ethical hackers conduct simulated cyberattacks on an organization to test cybersecurity effectiveness).
Human analysts bring domain knowledge and judgement that can refine model performance. Training teams to work alongside AI is key to building synthetic resilience, combining human insight with machine speed and scale.
Resilience is a system, not a feature
As AI transforms both the tools of fraud and the methods of prevention, organizations must redefine resilience. It’s no longer about isolated tools, but about creating a connected, adaptive, and explainable defense ecosystem.
For many organizations, that means integrating AI across business units, embracing synthetic data, prioritizing explainability, and embedding continuous improvement into fraud models. While financial services may have pioneered many of these practices, the broader tech industry now faces the same level of sophistication in fraud, and must respond accordingly.
In this new era, synthetic resilience is not a static end goal but a capability to be constantly cultivated. Those who succeed will not only defend their businesses more effectively but help define the future of secure, AI-enabled digital trust.
The landscape of smart data capture software is undergoing a significant transformation, with advancements that can help businesses build long-term resilience against disruptions like trade tariffs, labor shortages, and volatile demand.
No longer confined to handheld computers and mobile devices, the technology is embracing a new batch of hybrid data capture methods that include fixed cameras, drones, and wearables.
For those unfamiliar, smart data capture is the ability to capture data intelligently from barcodes, text, IDs, and objects. It enables real-time decision-making, engagement, and workflow automation at scale across industries such as retail, supply chain, logistics, travel, and healthcare.
These advancements are more than technological novelties; they are redefining how businesses operate, driving ROI, enhancing customer experience, and streamlining operational workflows. Let’s explore how:
More than just smartphones
Traditionally, smart data capture relied heavily on smartphones and handheld computers, devices that both captured data and facilitated user action. With advancements in technology, the device landscape is expanding. Wearables like smart glasses and headsets, fixed cameras, drones, and even robots are becoming more commonplace, each with its own value.
This diversification leads to the distinction of devices that purely ‘capture’ data versus those that can ‘act’ on it too. For example, stationary cameras or drones capture data from the real world and then feed it into a system of record to be aggregated with other data.
Other devices — often mobile or wearable — can capture data and empower users to act on that information instantly, such as a store associate who scans a shelf and can instantly be informed of a pricing error on a particular item. Depending on factors such as the frequency of data collected, these devices can allow enterprises to tailor a data capture strategy to their needs.
Practical innovations with real ROI
In a market saturated with emerging technologies, it's easy to get caught up in the hype of the next big thing. However, not all innovations are ready for prime time, and many fail to deliver a tangible return on investment, especially at scale. The key for businesses is to focus on practical, easy-to-implement solutions that enhance workflows rather than disrupt them by leveraging existing technologies and IT infrastructure.
An illustrative example of this evolution is the increasing use of fixed cameras in conjunction with mobile devices for shelf auditing and monitoring in retail environments. Retailers are deploying mobile devices and fixed cameras to monitor shelves in near real-time and identify out-of-stock items, pricing errors, and planogram discrepancies, freeing up store associates’ time and increasing revenue — game-changing capabilities in the current volatile trade environment, which triggers frequent price changes and inventory challenges.
This hybrid shelf management approach allows businesses to scale operations no matter the store format: retailers can easily pilot the solution using their existing mobile devices with minimal upfront investment and assess all the expected ROI and benefits before committing to full-scale implementation.
The combination also enables further operational efficiency, with fixed cameras providing continuous and fully automated shelf monitoring in high-footfall areas, while mobile devices can handle lower-frequency monitoring in less-frequented aisles.
This is how a leading European grocery chain increased revenue by 2% in just six months — an enormous uplift in a tight-margin vertical like grocery.
Multi-device and multi-signal systems
An important aspect of this data capture evolution is the seamless integration of all these various devices and technologies. User interfaces are being developed to facilitate multi-device interactions, ensuring that data captured by one system can be acted upon through another.
For example, fixed cameras might continuously monitor inventory levels, with alerts to replenish specific low-stock items sent directly to a worker's wearable device for immediate and hands-free action.
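A minimal sketch of that capture-to-action handoff, with a hypothetical event schema and a print statement standing in for a push to the worker's device over a real transport such as MQTT or WebSockets:

```python
import queue

# Sketch: a fixed camera publishes low-stock events; a dispatcher routes
# each one to an associate's wearable for hands-free action. Device IDs
# and the event schema are invented for illustration.

events: queue.Queue = queue.Queue()

def camera_detects(sku: str, shelf: str) -> None:
    events.put({"type": "low_stock", "sku": sku, "shelf": shelf})

def dispatch_to_wearable(worker_id: str) -> None:
    while not events.empty():
        event = events.get()
        # In production this would push over MQTT/WebSocket to the device.
        print(f"[{worker_id}'s headset] Restock {event['sku']} on shelf {event['shelf']}")

camera_detects("SKU-4821", "A3")
dispatch_to_wearable("worker-07")
```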
And speaking of hands-free operation: gesture recognition and voice input are also becoming increasingly important, especially for wearable devices lacking traditional touchscreens. Advancing these technologies would enable workers to interact with items naturally and efficiently.
Adaptive user interfaces also play a vital role, ensuring consistent experiences across different devices and form factors. Whether using a smartphone, tablet, or digital eyewear, the user interface should adapt to provide the necessary functionality without a steep learning curve; otherwise, it may negatively impact the adoption rate of the data capture solution.
Recognizing the benefits, a large US grocer implemented a pre-built adaptive UI to roll out top-performing scanning capabilities in its existing apps across 100 stores in just 90 days.
The co-pilot system
As the volume of data increases, so does the potential for information overload. In some cases, systems can generate thousands of alerts daily, overwhelming staff and hindering productivity. To combat this, businesses are adopting so-called co-pilot systems — a combination of devices and advanced smart data capture that can guide workers to prioritize ROI-optimizing tasks.
This combination leverages machine learning to analyze sales numbers, inventory levels, and other critical metrics, providing frontline workers with actionable insights. By focusing on high-priority tasks, employees can work more efficiently without sifting through endless lists of alerts.
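As a toy example, the sketch below ranks alerts by estimated revenue at risk so the highest-ROI tasks surface first. The scoring weights and record fields are invented for illustration; a real co-pilot would learn these from sales and inventory data.

```python
# Sketch of co-pilot task ranking: score each alert by expected revenue
# impact so associates see the highest-value work first, not a raw list.

def priority(task: dict) -> float:
    # Revenue at risk, discounted for lower-impact alert types.
    weight = 1.0 if task["kind"] == "out_of_stock" else 0.4
    return task["price"] * task["daily_sales"] * weight

alerts = [
    {"kind": "price_error",  "sku": "A1", "price": 3.99,  "daily_sales": 50},
    {"kind": "out_of_stock", "sku": "B2", "price": 12.49, "daily_sales": 8},
    {"kind": "out_of_stock", "sku": "C3", "price": 1.99,  "daily_sales": 120},
]

for task in sorted(alerts, key=priority, reverse=True):
    print(f"{priority(task):7.1f}  {task['kind']:13s} {task['sku']}")
```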
Preparing for the future
As the smart data capture landscape continues to evolve and disruption becomes the “new normal”, businesses must ensure their technology stacks are flexible, adaptable, and scalable.
Supporting various devices, integrating multiple data signals, and providing clear task prioritization are essential for staying competitive in an increasingly complex, changeable and data-driven market.
By embracing hybrid smart data capture device strategies, businesses can optimize processes, enhance user experiences, and make informed decisions based on real-time data.
The convergence of mobile devices, fixed cameras, wearables, drones, and advanced user interfaces represents not just an evolution in technology but a revolution in how businesses operate. And in a world where data is king, those who capture it effectively — and act on it intelligently — will lock in higher margins today and lead the way tomorrow.
This month, Google Gemini introduced Guided Learning, a new feature aimed at education. The idea is to teach you something through question-centered conversation instead of a lecture.
When you ask it to teach you something, it breaks the topic down and starts asking you questions about it. Based on your answers, it explains more details and asks another question. The feature provides visuals, quizzes, and even embeds YouTube videos to help you absorb knowledge.
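Google hasn't published how Guided Learning works under the hood, but the interaction pattern resembles a simple tutor loop. Here is a hypothetical sketch, with ask_model() standing in for a call to an LLM API:

```python
# Hypothetical sketch of a guided-learning loop. ask_model() is a stand-in
# for an LLM API call; Guided Learning's actual implementation is not
# public, so everything here is illustrative.

def ask_model(student_answer: str) -> str:
    """Placeholder LLM call: clarify the answer, then pose one follow-up."""
    return f"Good thinking on {student_answer!r}. What do you think happens next?"

def guided_learning(topic: str, student_answers: list[str]) -> None:
    question = f"Before we begin: what do you already know about {topic}?"
    for answer in student_answers:
        print("Tutor:", question)
        print("You:  ", answer)
        # Each answer is folded into the next prompt, so the tutor can
        # correct misconceptions and keep deepening the questioning.
        question = ask_model(answer)

guided_learning("cheese", ["milk, salt, and some kind of bacteria?", "the milk curdles"])
```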
As a test, I asked Gemini's Socratic tutor to teach me all about cheese. It started by asking me what I thought was in cheese, clarifying my somewhat vague answer with more details, and then asking if I knew how those ingredients become cheese. Soon, I was in a full-blown cheese seminar. For every answer I gave, Gemini came back with more details or, in a gentle way, told me I was wrong.
The AI then got into cheese history. It framed the history as a story of traveling herders, clay pots, ancient salt, and Egyptian tombs with cheese residue. It showed a visual timeline and said, “Which of these surprises you most?” I said the tombs did, and it said, “Right? They found cheese in a tomb and it had survived.” Which is horrifying and also makes me respect cheese on a deeper level.
In about 15 minutes, I knew all about curds and whey, the history of a few regional cheese traditions, and even how to pick out the best examples of different cheeses. I could see photos in some cases and a video tour of a cellar full of expensive wheels of cheese in France. The AI quizzed me when I asked it to make sure I was getting it, and I scored a ten out of ten.
Cheesemonger AI
It didn’t feel like studying, exactly. More like falling into a conversation where the other person knows everything about dairy and is excited to bring you along for the ride. After learning about casein micelles, starter cultures, and cutting the curd, Gemini asked me if I wanted to learn how to make cheese.
I said sure, and it guided me through the process of making ricotta, including pictures to help show what it should look like at each step.
By the time I was done with that part of the conversation, I felt like I’d taken a mini‑course in cheesemaking. I'm not sure I am ready to fill an entire cheeseboard or age a wheel of gruyère in my basement.
Still, I think making ricotta or maybe paneer would be a fun activity in the next few weeks. And I can show off a mild, wobbly ball of dairy pride thanks to learning through questioning and, as it were, being guided to an education.
As AI tools become more and more embedded in our everyday work, new research suggests that the reason we don't get the best out of them may not lie solely with the technology.
A report from Multiverse has identified thirteen core human skillsets which could determine whether companies fully realize AI’s potential.
The study warns that without deliberate attention to these capabilities, investment in AI writer systems, LLM applications, and other AI tools could fall short of expectations.
Critical thinking under pressure
The Multiverse study draws from observation of AI users at varying experience levels, from beginners to experts, employing methods such as the Think Aloud Protocol Analysis.
Participants verbalised their thought processes while using AI to complete real-world tasks.
From this, researchers built a framework grouping the identified skills into four categories: cognitive skills, responsible AI skills, self-management, and communication skills.
Among the cognitive abilities, analytical reasoning, creativity, and systems thinking were found to be essential for evaluating AI outputs, pushing innovation, and predicting AI responses.
Responsible AI skills included ethics, such as spotting bias in outputs, and cultural sensitivity to address geographic or social context gaps.
Self-management covered adaptability, curiosity, detail orientation, and determination, traits that influence how people refine their AI interactions.
Communication skills included tailoring AI-generated outputs for audience expectations, engaging empathetically with AI as a thought partner, and exchanging feedback to improve performance.
Reports from academic institutions, including MIT, have raised concerns that reliance on generative AI can reduce critical thinking, a phenomenon linked to “cognitive offloading.”
This is the process where people delegate mental effort to machines, risking erosion of analytical habits.
While AI tools can process vast amounts of information at speed, the research suggests they cannot replace the nuanced reasoning and ethical judgement that humans contribute.
The Multiverse researchers note that companies focusing solely on technical training may overlook the “soft skills” required for effective collaboration with AI.
Leaders may assume their AI tool investments address a technology gap when in reality, they face a combined human-technology challenge.
The study refrains from claiming that AI inevitably weakens human cognition; instead, it argues that the nature of cognitive work is shifting, with less emphasis on memorising facts and more on knowing how to access, interpret, and verify information.
While the new ‘Liquid Glass’ look and a way more powerful Spotlight might be the leading features of macOS Tahoe 26, I’ve found that bringing over a much-loved iPhone feature has proven to be the highlight after weeks of testing.
Live Activities steal the show on the iPhone, thanks to their glanceability and effortless way of highlighting key info, whether it’s from a first- or third-party app. Live sports scores and flight tracking are some of my favorites.
Now, all of this is arriving on the Mac – right in the menu bar, near the right-hand side. Live Activities appear when your iPhone is nearby and signed into the same Apple Account, and they mirror the same Live Activities you’d see on your phone. It’s a simple but powerful addition.
Considering Apple brought iPhone Mirroring to the Mac in 2024, this 2025 follow-up isn’t surprising. But it’s exactly the kind of small feature that makes a big difference. I’ve loved being able to check a score, track a flight, or see my live position on a plane – without fishing for my phone.
I’ve used it plenty at my desk, but to me, it truly shines in Economy class. If you’ve ever tried balancing an iPhone and a MacBook Pro – or even a MacBook Air – on a tray table, you know the awkward overlap. I usually end up propping the iPhone against my screen, hanging it off the palm rest, or just tossing it in my lap. With Live Activities on the Mac, I can stick to one device and keep the tray table clutter-free.
Considering notifications already sync and iPhone Mirroring arrived last year, Live Activities were ultimately the missing piece. On macOS Tahoe, they sit neatly collapsed in the menu bar, just like in the Dynamic Island on iPhone, and you can click on one to expand it and see the full Live Activity. Another click quickly opens the app on your iPhone via the Mirroring app – it all works together pretty seamlessly.
You can also easily dismiss them to save screen real estate on your Mac, though I have found they automatically expand for major updates. If you already have a Live Activity that you really enjoy on your iPhone, there’s no extra work needed from the developer, as it will automatically carry over.
All in all, it’s a small but super helpful tool that really excels in cramped spaces. So, if you’ve ever struggled with the same balancing act as I have with a tray table, your iPhone, and a MacBook, know that relief is on the way.
It's arriving in the Fall (September or October) with the release of macOS Tahoe 26. If you want it sooner, the public beta of macOS Tahoe 26 is out now, but you'll need to be okay with some bugs and slowdowns.
Huawei has announced plans to make its CANN software toolkit for Ascend AI GPUs open source, a move aimed squarely at challenging Nvidia’s long-standing CUDA dominance.
CUDA, often described as a closed-off “moat” or “swamp,” has for years been viewed by some as a barrier for developers seeking cross-platform compatibility.
Its tight integration with Nvidia hardware has locked developers into a single vendor ecosystem for nearly two decades, with all efforts to bring CUDA functionality to other GPU architectures through translation layers blocked by the company.
Opening up CANN to developers
CANN, short for Compute Architecture for Neural Networks, is Huawei’s heterogeneous computing framework designed to help developers create AI applications for its Ascend AI GPUs.
The architecture offers multiple programming layers, giving developers options for building both high-level and performance-intensive applications.
In many ways, it is Huawei’s equivalent to CUDA, but the decision to open its source code signals an intent to grow an alternative ecosystem without the restrictions of a proprietary model.
Huawei has reportedly already begun discussions with major Chinese AI players, universities, research institutions, and business partners about contributing to an open-sourced Ascend development community.
This outreach could help accelerate the creation of optimized tools, libraries, and AI frameworks for Huawei’s GPUs, potentially making them more attractive to developers who currently rely on Nvidia hardware.
Huawei’s AI hardware performance has been improving steadily, with claims that certain Ascend chips can outperform Nvidia processors under specific conditions.
Reports such as the CloudMatrix 384’s benchmark results against Nvidia systems running DeepSeek R1 suggest that Huawei’s performance trajectory is closing the gap.
However, raw performance alone will not guarantee developer migration without equivalent software stability and support.
While open-sourcing CANN could be exciting for developers, its ecosystem is still in its early stages and is nowhere near as mature as CUDA, which has been refined for nearly 20 years.
Even with open-source status, adoption may depend on how well CANN supports existing AI frameworks, particularly for emerging workloads in large language models (LLM) and AI writer tools.
Huawei’s decision could have broader implications beyond developer convenience, as open-sourcing CANN aligns with China’s broader push for technological self-sufficiency in AI computing, reducing dependence on Western chipmakers.
In the current environment, where U.S. restrictions target Huawei’s hardware exports, building a robust domestic software stack for AI tools becomes as critical as improving chip performance.
If Huawei can successfully foster a vibrant open-source community around CANN, it could present the first serious alternative to CUDA in years.
Still, the challenge lies not just in code availability, but in building trust, documentation, and compatibility at the scale Nvidia has achieved.
Via Tom's Hardware
OpenAI CEO Sam Altman and several other researchers and engineers came to Reddit the day after debuting the powerful new GPT-5 AI model for the time-honored tradition of an Ask Me Anything thread.
Though the discussion ranged over all kinds of technical and product elements, there were a few topics that stood out as particularly important to posters based on the frequency and passion with which they were discussed. Here are a few of the most notable things we learned from the OpenAI AMA.
Pining for GPT-4o
The biggest recurring theme in the AMA was a mournful wail from users who loved GPT-4o and felt personally attacked by its removal. That's not an exaggeration, as one user posted, “BRING BACK 4o GPT-5 is wearing the skin of my dead friend.” To which Altman replied, “what an…evocative image. ok we hear you on 4o, working on something now.”
This wasn’t just one isolated request, either. Another post asked to keep both GPT-4o and GPT-4.1 alongside GPT-5, arguing that the older models had distinct personalities and creative rhythms. Altman admitted they were “looking into this now.”
Most requests were a little more subdued, with one poster asking, “Why are we getting rid of the variants and 4o when we all have unique communication styles? Please bring them back!”
Altman’s answer was brief but direct in conceding the point. He wrote, “ok, we hear you all on 4o; thanks for the time to give us the feedback (and the passion!). we are going to bring it back for plus users, and will watch usage to determine how long to support it."
It is interesting that so many heavy users prefer the style of the older model, even over the objectively more capable newer ones.
Filtering history
Another big topic was ChatGPT's safety filter, both in its current form and before GPT-5, which many posters complained was overzealous. One user described being flagged for discussing historical topics: a response about Gauguin was flagged and deleted because the artist was a "sex pest," and the user's own clarifying question was itself flagged.
Altman’s answer was a mixture of agreement and reality check. “Yeah, we will continue to improve this,” he said. “It is a legit hard thing; the lines are often really quite blurry sometimes.” He stressed that OpenAI wants to allow “very wide latitude” but admitted that the boundary between unsafe and safe content is far from perfect, but that "people should of course not get banned for learning."
New tier
Another questioner zeroed in on a gap in OpenAI’s subscription model: "Are you guys planning to add another plan for solo power users that are not pros? The $20 plan offers too little for some, and the $200 tier is overkill."
Altman’s answer was succinct: “Yes we will do something here.” No details, just confirmation that the idea is on the table. That brevity leaves the timeline wide open, anywhere from next week to a discussion that has only just begun. But the pricing gap is a big deal for power users who find themselves constrained by the Plus tier but can’t justify enterprise pricing. If OpenAI does create an intermediate tier, it could reshape how dedicated individual users engage with the platform.
The future
At the end of the AMA, Altman shared some new information about the current and future state of ChatGPT and GPT-5. He started by admitting to some issues with the release, writing that "we expected some bumpiness as we roll out so many things at once. But it was a little more bumpy than we hoped for!"
That bumpiness made GPT-5 seem less impressive than it should have up to now.
"GPT-5 will seem smarter starting today," Altman wrote. "Yesterday, we had a sev [severity, meaning system issue] and the autoswitcher was out of commission for a chunk of the day, and the result was GPT-5 seemed way dumber."
He also promised more access for ChatGPT Plus users, with double the rate limits, as well as the upcoming return of GPT-4o, at least for those same subscribers. The AMA did paint a clearer picture of what OpenAI is willing to change in response to public pressure.
The return of GPT-4o for Plus users at least acknowledges that raw capability isn’t the only metric that matters. If users are this vocal about keeping an older model alive, future releases of GPT-5 and beyond may be designed with more deliberate flavors built in beyond just the personality types promised for GPT-5.
For many MacBook owners, it’s a nightmare come true: you open the lid of your pricey laptop and switch it on, only to find the display is a mess, with black bars and glitchy colors everywhere you look. The screen has been ruined, and it’s going to cost a whole lot to put it right.
Worryingly, it’s actually a lot easier to experience this than you might expect: just one seemingly innocuous action can cause hundreds of dollars of damage.
That’s something TikTok user classicheidi found out the hard way. In a video uploaded to the social media platform, classicheidi explained that they had placed a piece of card on the keyboard of their MacBook Air, then closed the lid.
When they opened it again a while later, the screen was ruined.
A costly mistake
This is an unfortunate incident, but there’s a reason it happened. It’s not because the displays of Apple’s laptops (or those of any other manufacturer, for that matter) are weak or poorly made. But while they should certainly be treated with care, there’s another issue at play.
It’s what Apple describes in a support document as the “tight tolerances” of its laptops. Apple’s MacBooks are made to be as thin as possible, which means the gap between the keyboard and display is very small when the lid is closed.
Anything placed in that gap – even something as modest as a piece of card – can be pushed up against the display, with the resulting pressure leading to serious damage.
For that reason, Apple warns that “leaving any material on your display, keyboard, or palm rest might interfere with the display when it’s closed and cause damage to your display.” If you have a camera cover, a palm rest cover, or a keyboard cover, Apple says you should remove it before closing your laptop’s lid to avoid this kind of scenario – unfortunately, it’s something we've seen before.
If you want to sidestep the kind of outcome classicheidi suffered, it’s important to ensure there’s nothing between your laptop’s keyboard and screen when you close it. If there is, you might open it up to “the biggest jump scare of the century,” in classicheidi’s words.