Amid all the debates about how AI affects jobs, science, the environment, and everything else, there's a question of how large language models impact the people using them directly.
A new study from the MIT Media Lab implies that using AI tools reduces brain activity in some ways, which is understandably alarming. But I think that's only part of the story. How we use AI, like any other piece of technology, is what really matters.
Here's what the researchers did to test AI's effect on the brain: They asked 54 students to write essays using one of three methods: their own brains, a search engine, or an AI assistant, specifically ChatGPT.
Over three sessions, the students stuck with their assigned tools. Then they swapped, with the AI users going tool-free, and the non-tool users employing AI.
EEG headsets measured their brain activity throughout, and a group of humans, plus a specially trained AI, scored the resulting essays. Researchers also interviewed each student about their experience.
As you might expect, the group relying on their brains showed the most engagement, the best memory, and the strongest sense of ownership over their essays, as evidenced by how much they could quote from them.
The ones using AI at first had less impressive recall and brain connectivity, and often couldn’t even quote their own essays after a few minutes. When writing manually in the final test, they still underperformed.
The authors are careful to point out that the study has not yet been peer-reviewed. It was limited in scope, focused on essay writing, not any other cognitive activity. And the EEG, while fascinating, is better at measuring overall trends than pinpointing exact brain functions. Despite all these caveats, the message most people would take away is that using AI might make you dumber.
But I would reframe that: maybe AI isn't dumbing us down so much as letting us opt out of thinking. Perhaps the issue isn't the tool, but how we're using it.
AI brains
If you use AI, think about how you used it. Did you get it to write a letter, or maybe brainstorm some ideas? Did it replace your thinking, or support it? There's a huge difference between outsourcing an essay and using an AI to help organize a messy idea.
Part of the issue is that "AI" as we refer to it is not literally intelligent, just a very sophisticated parrot with an enormous library in its memory. But this study didn’t ask participants to reflect on that distinction.
The LLM-using group was encouraged to use the AI as they saw fit, which probably didn't mean thoughtful, judicious use so much as copying without reading. That's why context matters.
Because the "cognitive cost" of AI may be tied less to its presence and more to its purpose. If I use AI to rewrite a boilerplate email, I’m not diminishing my intelligence. Instead, I’m freeing up bandwidth for things that actually require my thinking and creativity, such as coming up with this idea for an article or planning my weekend.
Sure, if I use AI to generate ideas I never bother to understand or engage with, then my brain probably takes a nap, but if I use it to streamline tedious chores, I have more brainpower for when it matters.
Think about it like this. When I was growing up, I had dozens of phone numbers, addresses, birthdays, and other details of my friends and family memorized. I had most of it written down somewhere, but I rarely needed to consult it for those I was closest to. But I haven't memorized a number in almost a decade.
I don't even know my own landline number by heart. Is that a sign I’m getting dumber, or just evidence I've had a cell phone for a long time and stopped needing to remember them?
We’ve offloaded certain kinds of recall to our devices, which lets us focus on different types of thinking. The skill isn’t memorizing, it’s knowing how to find, filter, and apply information when we need it. It's sometimes referred to as "extelligence," but really it's just applying brain power to where it's needed.
That’s not to say memory doesn’t matter anymore. But the emphasis has changed. Just like we don’t make students practice long division by hand once they understand the concept, we may one day decide that it’s more important to know what a good form letter looks like and how to prompt an AI to write one than to draft it line by line from scratch.
Humans are always redefining intelligence. There are a lot of ways to be smart, and knowing how to use tools and technology is one important measure of smarts. At one point, being smart meant knowing how to knap flint, decline Latin nouns, or work a slide rule.
Today, it might mean being able to collaborate with machines without letting them do all the thinking for you. Different tools prioritize different cognitive skills. And every time a new tool comes along, some people panic that it will ruin us or replace us.
The printing press. The calculator. The internet. All were accused of making people lazy thinkers. All turned out to be a great boon to civilization (well, the jury is still out on the internet).
With AI in the mix, we’re probably leaning harder into synthesis, discernment, and emotional intelligence – the human parts of being human. We don't need the kind of scribes who are only good at writing down what people say; we need people who know how to ask better questions.
It means knowing when to trust a model and when to double-check, and turning a tool that's capable of doing the work into an asset that helps you do it better.
But none of it works if you treat the AI like a vending machine for intelligence. Punch in a prompt, wait for brilliance to fall out? No, that's not how it works. And if that's all you do with it, you aren't getting dumber, you just never learned how to stay in touch with your own thoughts.
In the study, the LLM group’s lower essay ownership wasn’t just about memory. It was about engagement. They didn’t feel connected to what they wrote because they weren’t the ones doing the writing. That’s not about AI. That’s about using a tool to skip the hard part, which means skipping the learning.
The study is important, though. It reminds us that tools shape thinking. It nudges us to ask whether we are using AI tools to expand our brains or to avoid using them. But to claim AI use makes people less intelligent is like saying calculators made us bad at math. If we want to keep our brains sharp, maybe the answer isn't to avoid AI but to be thoughtful about using it.
The future isn't human brains versus AI. It’s about humans who know how to think with AI and any other tool, and avoiding becoming someone who doesn't bother thinking at all. And that’s a test I’d still like to pass.
All the indications are that the Samsung Galaxy Z Fold 7 and the Samsung Galaxy Z Flip 7 are going to get their grand reveal next month – possibly on July 9 – and freshly leaked renders may have given us a better idea of the designs of these handsets.
First up we've got the Samsung Galaxy Z Fold 7 renders, courtesy of the team at Android Headlines. There aren't too many design changes, but it looks like the foldable is going to be thinner than ever, as has been previously rumored.
The cover display is apparently getting wider too, so the phone will feel a bit more like a standard phone when it's closed, and we've got two colors to look at here: Blue Shadow and Jet Black (a few other colors could be on the table too).
Perhaps the biggest surprise in these renders is that the punch-hole camera seems to be back on the main display, replacing the under-display camera on the Samsung Galaxy Z Fold 6 – perhaps due to the thinner frame. That's a step back in terms of technology, and arguably aesthetics, though the captured photo and video quality could be boosted as a result.
On the flip side
We've got another batch of leaked renders showing off the Samsung Galaxy Z Flip 7, and again these come from Android Headlines. The same Blue Shadow and Jet Black colors are on show, which will most likely be joined by other shades.
The big upgrade when it comes to this phone compared to the Samsung Galaxy Z Flip 6 is the larger cover display, meaning it looks more like the Motorola Razr series of flip foldables – and the upgrade should make the outer screen more useful.
As with the Galaxy Z Fold 7, these renders show a phone that's thinner and lighter than its predecessor. According to this leak, many of the specs will stay the same, though there will be a faster processor on the inside.
All that remains is for Samsung to announce a date for its next Galaxy Unpacked event, and reveal these phones officially – which will almost certainly be sometime in July. At the same showcase we're expecting to see a couple of Samsung Galaxy Watch 8 models, and perhaps a tri-fold phone as well.
AI agents are fundamentally reshaping businesses and how work gets done, augmenting human workers with autonomous, context-aware execution. If you're not leveraging AI agents, here's why you should.
Recognizing that managing AI agents is becoming an essential skill in the workforce, I predict that before 2026, every person at my 1,000+-person company will be using an agent on a daily basis. AI agents are evolving businesses rapidly, and while tech adoption can be slower at some organizations, the momentum and interest in agentic AI are building fast and proving the technology's business value.
As organizations continue to grapple with complexity, speed, and the pressure to do more with fewer resources and staff, AI agents offer a path to operational agility: automating routine decisions, surfacing real-time insights, and accelerating strategic outcomes. This shift marks more than a 'tech upgrade': it's a redefinition of the business operating model, where the ability to harness intelligent, data-driven agents will distinguish tomorrow's leaders from those stuck in yesterday's workflows.
Real agents
First, what is a real agent and why is it different?
In sales and marketing, there is a lot of chatter about building agents that save time and allow companies to send personalized content at scale. That’s absolutely an efficiency that should be taking place, but we’re probably talking about a database trigger in Salesforce that kicks off IF-THEN logic that, in turn, uses an API call to ChatGPT for purposes of drafting the content.
That is not an agent.
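To make the distinction concrete, here is a minimal sketch of that kind of trigger-plus-template pipeline. Everything in it is hypothetical - the Lead record, call_llm, and send_email stand in for a CRM trigger payload and real API calls - and the point is simply that the logic is one fixed rule:

```python
from dataclasses import dataclass

@dataclass
class Lead:
    name: str
    company: str
    email: str
    status: str

def call_llm(prompt: str) -> str:
    # Stand-in for a single chat-model API call (hypothetical).
    return f"[drafted text for: {prompt}]"

def send_email(to: str, body: str) -> None:
    # Stand-in for the mail-out step (hypothetical).
    print(f"to={to}: {body}")

def on_lead_status_change(lead: Lead) -> None:
    """Fired by a CRM trigger: one fixed rule, one templated completion."""
    if lead.status == "qualified":  # the IF-THEN logic
        draft = call_llm(f"Draft a short intro email to {lead.name} at {lead.company}.")
        send_email(lead.email, draft)
    # No goals, no tool selection, no memory, no follow-up: not an agent.

on_lead_status_change(Lead("Ada", "Acme Corp", "ada@acme.example", "qualified"))
```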
By definition, an agent is more resourceful, proactive, and helpful, able to pursue goals, and achieve more results on behalf of employees than either a chatbot or conventional automation. Although the concept of autonomous agents has been around for years, we’re just getting to the point where the technology is becoming widespread, with tools for the creation of agents improving rapidly.
Rather than trying to give a formal definition of what constitutes an agent, let me describe the formula behind the best purpose-driven AI agents. They possess:
1. Tools to search the web and social media, gather information, and provide data analysis. What's helpful in this context isn't a simple report on findings; it's an analysis of those findings and a strategy to move forward. Investing in tools is important: think of it like sharpening your knife - if it's dull, it's not going to cut as intended. You need to structure the tools to be efficient and flexible so your agent can use them properly.
2. Knowledge, particularly knowledge of you and your goals, the outcomes expected of your role, your writing style, and how to be successful. Context is key: make sure the agent has the relevant knowledge to do its intended job. This could include embedding knowledge from sales decks, website and app data, and customer call transcripts.
3. LLM-vs-LLM evaluation. To ensure reliability, the most effective AI agents use one model to generate an output and a different model to critique it (a minimal sketch of this generate-and-critique loop follows this list). For example, if you're relying on an AI agent to draft a report, this approach catches mistakes or awkward phrasing that would otherwise have to wait for another reviewer, human or AI.
4. A Playbook so the agent learns standard protocols about your company's data and requirements. The playbook should be prescriptive and specific but also leave room for the agent to adapt and change as it gets more information and is able to perform better.
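Since item 3 is the least intuitive, here is what that generate-and-critique loop can look like. This is a sketch under stated assumptions: call_model is a hypothetical helper standing in for whichever chat-completion client you actually use, and the APPROVED convention is just one way to signal sign-off.

```python
def call_model(model: str, prompt: str) -> str:
    # Hypothetical stand-in for a real chat-completion API client.
    # Canned replies keep the sketch runnable end to end.
    if model == "reviewer-model":
        return "APPROVED"
    return f"[{model} draft for: {prompt[:40]}...]"

def draft_with_review(task: str, max_rounds: int = 3) -> str:
    """Generate with one model, critique with a different one."""
    draft = call_model("generator-model", f"Write: {task}")
    for _ in range(max_rounds):
        critique = call_model(
            "reviewer-model",  # a different model reviews the output
            "Review this draft for errors and awkward phrasing. "
            f"Reply APPROVED if it is ready.\n\n{draft}",
        )
        if critique.strip().startswith("APPROVED"):
            break  # the reviewer signed off
        draft = call_model(
            "generator-model", f"Revise using this feedback:\n{critique}\n\n{draft}"
        )
    return draft

print(draft_with_review("a one-paragraph meeting recap"))
```

The design choice that matters is simply that the reviewer is a different model from the generator, so the two are less likely to share the same blind spots.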
How AI agents are leading business transformation
Across industries, AI agents are beginning to take on specialized roles within business workflows, offering practical support in areas like SEO, sales, and market analysis. For example, some agents now generate pre-meeting briefs by pulling together public digital signals, company data, and CRM information—work that previously required extensive manual effort.
For example, I worked on an AI Meeting Prep Agent for salespeople that one customer told us gave him a complete briefing within seconds that would have taken him at least a half hour to do himself, if he could even find the time — and this is someone who meets with multiple customers and potential customers every day.
Other agents analyze competitive keyword trends to recommend SEO content strategies, or track sudden changes in search behavior to surface emerging market shifts, providing more depth and speed of analysis than would be possible otherwise.
In sales, agents can be used to craft personalized outreach based on real-time data, helping teams engage prospects with greater relevance. Rather than replacing teams, these agents handle the groundwork—searching, summarizing, and connecting data—so people can spend more time making strategic decisions and less time on prep work.
Results
The result isn't just increased efficiency; it's business transformation. These agents free up talent from information-gathering and task repetition, enabling teams to focus on high-impact work: crafting strategy, building relationships, and driving innovation.
As these agentic workflows become embedded across functions, companies gain a more adaptive, data-responsive operating model—one that scales insight, improves agility, and accelerates decision-making across the board.
In short, AI agents don’t replace teams—they amplify them, creating a multiplier effect that turns data into direction and strategy into execution.
Technology is moving faster than ever, and now is the time to be an innovator, set your brand apart from the rest and stay ahead of the curve.
Unlocking true productivity on your morning Microsoft Teams calls should soon get a bit easier thanks to a new update rolling out now.
The video conferencing platform has revealed it is working on adding configurable keyboard shortcuts for users.
Once included, this should mean users can quickly and easily access the symbols, icons, and much more that they use regularly but that are sometimes not close at hand when typing in a work environment.
Microsoft Teams keyboard shortcuts
Microsoft says the new addition will allow users to "set your own keyboard shortcuts to match your work preferences."
In a Microsoft 365 roadmap post, it noted the feature can be found by clicking on the ellipsis in a Teams chat window, and selecting "Keyboard shortcuts" from the menu. Users will be able to create and customize their own shortcuts, and edit them once completed.
The feature is rolling out now, and will be available to users across the world using Teams on Windows, Mac, Android and iOS.
The launch is the latest in a series of recent improvements to Microsoft Teams announced by the company as it looks to improve the experience for users.
This includes rolling out "enhanced spell check", giving users the tools to make sure their messages are as accurate as possible.
It also recently announced a tweak that will allow multiple people to control slides being presented in a meeting or call.
Microsoft says the addition will mean that presenters are able to maintain "a smooth flow during meetings or webinars" - hopefully spelling the end of manual slide changes, and of the phrase "next slide please".
And the platform also revealed it is working on adding noise suppression for participants dialing in to a call, which should spell an end to potentially ear-splitting call interruptions, or participants being deafened by background noise from another person on the call.
If you think digital scams are on the rise, you're not alone - a new survey from Avast and Neighbourhood Watch has revealed 92% of Brits believe that cybercrime is as much of a threat as other types of crime.
Just over one in three respondents say they have been personally victimised by cybercriminals, and many of these have suffered financial loss at the hands of digital scammers.
In particular, phishing scams are on the rise, up 466% quarter-on-quarter. The rise is largely attributed to AI, with criminals leveraging AI tools to send more frequent and more sophisticated social engineering attacks. With AI, it takes fraudsters just a few minutes to craft campaigns that would previously have taken days.
More financial loss
Unsurprisingly, Brits are losing more money too, with 59% of victims losing up to £500. Women more commonly lose under £500, and men are more likely to suffer higher losses (between £501 and £2000, and £2000+).
“As cybercriminals use increasingly sophisticated tactics, staying vigilant online is no longer optional - especially as scams are becoming harder to spot and now lurking around every digital corner,” said Luis Corrons, Security Evangelist for Avast.
To protect yourself from cyberattacks, especially social engineering attacks, the key is staying vigilant. Make sure to thoroughly check any unexpected communications, especially emails or texts that include a call to action (e.g. 'change your password now').
Be very wary of anyone claiming to be a family member or friend, especially given the developments in deep-fake technologies. Voice and images can be cloned or faked, so don’t send money to anyone you aren’t 100% sure is real.
It's particularly important to remember never to click any links or attachments that you don't trust, and if you need recommendations on how to create a secure password, we've listed some of our top tips here.
Automation is becoming increasingly common in the cybersecurity space, but some industries and organizations continue to lag when it comes to adopting modern security tools. Recent research from Cymulate revealed that nearly two-thirds of security leaders report missing exposures due to the limitations of manual penetration testing and 67% say infrequent testing has left worrying gaps in their security assessments.
That’s a real problem—and it highlights the growing danger posed by inefficient and outdated manual security processes. Cybercriminals are embracing automation to enhance their attack patterns, and security teams that fail to do the same are putting themselves at unnecessary risk.
It doesn’t have to be this way. Practices like exposure management and security controls validation have become increasingly common, with many organizations now engaged in continuous monitoring and validation of potential threats.
With attackers using AI and other automated solutions to enhance and upscale their efforts, defenders need tools capable of matching the speed, volume and sophistication of modern attack tactics.
Today’s advanced security solutions are helping security professionals improve both their detection and remediation capabilities, allowing them to continuously and automatically test their defenses against new and emerging threats while keeping their systems and data secure.
Manual Processes Are Holding Security Teams Back
There is a reason many security teams have come to rely on manual processes: up to this point, they have generally worked. As with any industry, there will always be resistance to change, and "this is just the way we've always done it" can be a powerful argument. Of course, it helps that practices like manual penetration testing do still produce valuable results—but the issue is that attackers don't update their tactics on an annual or quarterly basis.
They are continuously poking and prodding around the edges of systems and networks, looking for a way in. If your last penetration test was three months ago, that means attackers have had three months to find new vulnerabilities, new exposures, and new ways to evade your defenses. In today’s threat landscape, that’s not acceptable.
Unfortunately, it just isn’t possible for human beings to engage in penetration testing or security controls validation on a continuous basis. Today’s digital environments are more complex than ever, and an organization might have thousands of potential vulnerabilities to monitor—more than even the most dedicated security professionals can manage on their own.
Thankfully, today's organizations have no shortage of resources at their fingertips, with modern exposure management and security validation solutions helping not only to automate the testing process, but to identify which exposures represent the most pressing danger and prioritize remediation accordingly.
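As a rough illustration of what "automate the testing and prioritize remediation" means, here is a toy sketch. The Exposure record, the scores, and the findings are invented for the example; a real exposure management platform would generate results continuously from simulated attacks rather than from hand-typed data:

```python
from dataclasses import dataclass

@dataclass
class Exposure:
    name: str
    severity: float            # 0-10, a CVSS-style score (hypothetical scale)
    exploit_likelihood: float  # 0-1, how actively it is being exploited
    control_blocked: bool      # did existing controls stop the simulated attack?

def prioritize_gaps(results: list[Exposure]) -> list[Exposure]:
    """Keep only exposures the controls failed to block, ranked by risk."""
    gaps = [e for e in results if not e.control_blocked]
    return sorted(gaps, key=lambda e: e.severity * e.exploit_likelihood, reverse=True)

# Invented sample findings, standing in for scheduled attack-simulation output.
findings = [
    Exposure("unpatched VPN gateway", 9.8, 0.9, control_blocked=False),
    Exposure("legacy SMB share", 7.5, 0.4, control_blocked=False),
    Exposure("phishing payload", 8.1, 0.8, control_blocked=True),
]
for e in prioritize_gaps(findings):
    print(f"remediate next: {e.name}")
```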
Why Automation Is More Critical than Ever
According to Cymulate's research, a staggering 65% of security leaders say they know they are missing exposures due to manual penetration testing, while 67% say challenges like scope limitations and infrequent penetration testing are leaving identifiable gaps in their assessments. In today's threat environment, that's a serious concern—because if security leaders are aware of those gaps, cybercriminals almost certainly are, too.
At a time when the average cost of a data breach in the U.S. is more than $9 million, businesses cannot afford to let their exposures and vulnerabilities go unaddressed. Cybersecurity is inherently asymmetrical: attackers only need to succeed once to cause significant damage. You may not be able to stop every attack—but you can avoid becoming an easy target.
That starts with testing. Security leaders who use automated validation solutions say they are able to conduct more than 200x as many tests as those relying on manual processes, helping them stay one step ahead of attackers even when they are leveraging the latest tactics and techniques.
In fact, organizations that have implemented AI-based automation into their exposure validation process report that it takes an average of 24 fewer hours to test their defenses against newly identified cyber threats. That can make a significant difference, especially at a time when attackers are identifying and exploiting vulnerabilities more quickly than ever.
Organizations can’t wait weeks or months to manually test new attack tactics—they need to know whether they can defend against these threats, and they need to know now.
Reducing Manual Processes Should Be a Top Priority
Cymulate's findings reveal that 97% of organizations with automated security control validation processes in place have seen a positive impact since implementation, and those that run exposure validation processes at least once per month report a 20% reduction in breaches alongside improved mean time to detection.
The message is simple: organizations that test and validate their security capabilities on a regular (or continuous) basis have a higher degree of success detecting attackers, preventing breaches, and keeping their digital environments secure. Better still, eliminating cumbersome manual processes and automating a significant portion of the testing, prioritization, and remediation processes frees up security teams to focus on more pressing tasks.
Automation doesn't just improve security—it heightens job satisfaction and ensures organizations are getting the most out of their highly skilled employees. Reducing the manual processes that lead to both employee burnout and unnecessary exposures should be a top priority for businesses across every industry.
I feel I owe my Bose QuietComfort Headphones an apology after I allowed myself to be led astray by the allure of a pretty pink pair of Edifier ES850NB over-ears. This was but a brief hiatus, however, as despite my new pair being so aesthetically pleasing, something just wasn’t sitting quite right.
Available in black, pink, ivory or brown, the Edifier ES850NB headphones are available right now in the US at a list price of $169.99 (so around £132 or AU$262, give or take), and are due to be released in the UK at the end of June 2025.
It should come as no surprise that because the Edifiers are one third of the cost of my beloved Bose cans, I wasn’t expecting to be blown away by the audio quality. This is especially true considering I was directly comparing them to my Bose QuietComfort Headphones, which – despite being somewhat lowlier than their higher-spec sibling, the Bose QuietComfort Ultras (in our pick of the best over-ear headphones) – are a fantastic-sounding pair of headphones with great noise cancelling.
I must say that the Edifier ES850NB headphones do sound good for the price. This was the case straight out of the box, which makes a nice change, as cheaper options can often be a bit heavy on the bass in an attempt to sound fun and zealous - i.e. to compensate for weaker sonic elements.
Upon first inspecting the Edifier ES850NBs, I noted the super-padded ear cups and soft and spongy headband. This made me wonder if this added plushness would make them even comfier than my QuietComfort, but they actually felt about the same, which was fortunate, as I was a little worried I’d really feel the difference when switching back.
We’ll be publishing a full review soon, but for now, here are three reasons why I won’t be retiring my trusty Bose QuietComforts in favor of the Edifier ES850NB headphones.
With their leather-like texture and metallic details, the Edifier ES850NB headphones do have a classy finish. I like how they look on, too, but I’m a little confused by their design choices where the headband meets the ear cups. The two components are connected by a bendy arm, similar to that of the outstanding Bowers & Wilkins Px7 S3, though not as well executed. I say this because Edifier has carved into the outside of the ear cup to allow the headband to sit flush when the headphones are being worn.
I’m not saying this looks bad aesthetically – it does look good, but this format seemed to cause an issue I’ve not experienced with other over-ears: an audible knocking sound. I encountered this issue a few times when out walking, finding that one ear cup would rock slightly, causing the little post at the bottom of the headband to knock against the ear cup.
It looks like Edifier made an attempt to negate this problem by placing a little rectangle of black silicone to provide some cushioning, but it appears to be too thin and small to be effective. It's worth mentioning that this may not be the case for everyone, though, as the problem may be exacerbated by the fact that I'm a relatively petite female, so the headphones may be a little more prone to movement when I'm wearing them out and about.
2. A pressing issue
This could all be down to personal preference, but I don't find the Edifier ES850NB controls as intuitive or easy to use as the ones on my pair of Bose QuietComfort Headphones. There's a small slider switch on the outer surface of the right earcup on my pair of Bose, which also doubles as the Bluetooth pairing trigger when kept pulled forward. Then there are the volume and play/pause buttons on the back edge, and an action button on the back edge of the left ear cup that cycles through the different ANC modes, amongst other things.
In contrast, the Edifier ES850NB has fewer buttons. There’s a run of three buttons consisting of two volume buttons, separated by a power button, and a Bluetooth button, which also cycles through the different listening modes. The power button is identifiable by the raised tag, which, although fairly easy to recognise, feels somewhat rough and unpleasant under my fingertips.
Though I appreciate the color coordination of the buttons on the pale pink model I have, I have concerns that the silicone material used means they’re likely to discolor far quicker than a smoother, harder material would. This may be less of an issue for the darker-colored models, but I’d advise caution if you happen to be a person who wears makeup, as any foundation transferred from your fingertips would be a nightmare to clean off the textured surface here.
3. Red light, blue light
When it comes to sticking my headphones on charge, I prefer the larger slot-like indicator light on my pair of Edifiers, as it's easier to see when it turns red to confirm that they're charging. Having said this, it's far easier to tell whether my QuietComforts are switched on at a glance, either from the position of the power switch or the small but steady white indicator light.
The reason it’s trickier with the Edifiers is because the power is indicated by a blue light that double-flashes every five seconds, which feels like a surprisingly long time when I’m used to getting instant confirmation of the power status. I also found it oddly irritating, both because the flashing blue light can be distracting when in eyeshot, and because it looks like the headphones are always in pairing mode when I’ve not got them on. The light does at least stop flashing once I’ve got music playing, not that I’d be able to see it even if it were.
Say what you want about Apple in 2025, but I truly believe that iOS still represents the very best of mobile software design.
It's intuitive, with a great design language that permeates throughout the OS, and it’s just really fun to use. But if there’s one area that hangs like a lifeless limb from iOS' otherwise muscular frame, it’s Apple Intelligence.
I can’t blame Apple for wanting to go all-in with its own take on artificial intelligence. After all, AI features are becoming key USPs of today's flagship (and even not-so-flagship) devices – from Galaxy AI on the best Samsung phones to Google’s in-house AI systems on the best Pixel phones. To not join in with the current AI revolution is to run the risk of being seen as old-fashioned.
Despite Apple's best intentions, though, Apple Intelligence – in its current form, at least – is a dud.
Of course, it’s tricky to know exactly where Apple Intelligence has gone wrong if you’re accustomed to Apple’s way of doing things. That's why, over the last week, I’ve been using AI on the OnePlus Open to see what I’ve been missing. I wasn’t quite sure what to expect, but now the experience has shown me just how much ground Apple has given up to the competition.
Google Gemini is just on a whole other level
To give Siri some credit, when it comes to setting timers, calling contacts, or setting reminders, it can do the job just fine, and if your requests stay within those confines, then you won't have an issue. It's when you go beyond those parameters that it starts to fall apart.
Somewhat laughably for a man in his early thirties, I’m now finally making an attempt to get into football after feeling like too much of a social outcast whenever conversations turn to last night’s match, and I’ve been trying to use AI to keep me in the loop with everything that’s going on.
For example, instead of scrolling through a timetable of upcoming fixtures, I decided to simply ask Siri when the next Liverpool match was set for. The assistant answered correctly, but when I asked if it could then add that match as an event in my calendar, it hadn't the faintest idea what I was talking about.
Moving over to Google Gemini and following through the same set of questions, it did exactly what I asked of it in next to no time.
Dropping it down a tad, I gave Siri a lowball question that I thought it would knock out of the park: who is the current Liverpool manager? It couldn't respond without asking if I wanted the results via a Google search or a ChatGPT request. I can understand Apple wanting to give me that option if I'd asked Siri something about theoretical physics, but not for something so basic, and I don't understand why it's unable to differentiate between the two.
These are the features I want to use AI for: simple requests that make my day just that little bit easier. I do at least have some hope that Apple can catch up at this level. Where the real uphill battle lies is in Apple's fight to compete with Gemini Live.
Living with a true digital assistant
Using Gemini Live for the first time, I felt a kinship with those who must have marveled at the very first consumer-grade computers as they started to recognize all of the possibilities on the horizon.
This feature lets you talk to Gemini in the style of free-flowing conversation – there’s no need to type or press any buttons, just speak what’s on your mind, and Gemini will respond much like a normal person.
If you ask Gemini for a realistic schedule that lets you juggle both your full-time job and your side hustle, it'll create one for you. When you want advice on how to talk to a friend who is struggling with their mental health, Gemini can be surprisingly insightful. At one point, it even mentioned that it could pick up on the nuances and tone of my voice to recognize whether I'd said something in anger or jubilation. This feels like the full realization of what having a digital assistant is supposed to be, and Siri (in its current form) doesn't compare in the slightest.
The disparity is so cavernous here that I do wonder whether Apple should change tack and invest its resources in changes that make sense. For example, OnePlus is one of the few companies that hasn't reoriented its entire outlook around AI, yet the OnePlus 13 includes meaningful AI features (like AI summaries of web pages) that are there when you want them but never thrown in your face.
Thankfully, with the introduction of Call Screening and Live Translation in iOS 26, it seems as though Apple is trying to gain back some ground where functional AI features are concerned, and that’s great.
Beyond that, I think it might be time for Apple to abandon any plans of getting Siri to compete – after what I’ve seen from Google, I’m going to assign the Gemini app to my Action button anyway.
Midjourney has long been a popular AI image wizard, but now the company is making moves and movies with its first-ever video model, simply named V1.
This image-to-video tool is now available to Midjourney's 20 million-strong community, who can turn their images into five-second clips and extend them, in five-second increments, up to 20 seconds.
Despite being a brand new venture for Midjourney, the V1 model has enough going on to at least draw comparisons to rival models like OpenAI’s Sora and Google’s Veo 3, especially when you consider the price.
For now, Midjourney V1 is in web beta, where you can spend credits to animate any image you create on the platform or upload yourself.
To make a video, you simply generate an image in Midjourney like usual, hit “Animate,” choose your motion settings, and let the AI go to work.
The same goes with uploading an image; you just have to mark it as the start frame and type in a custom motion prompt.
You can let the AI decide how to move it, or you can take the reins and describe how you want the motion to play out. You can pick between low motion or high motion depending on whether you want a calm movement or a more frenetic scene, respectively.
The results I've seen certainly fit into the current moment in AI video production, both good and bad. The uncanny valley is always waiting to ensnare users, but there are some surprisingly good examples from both Midjourney and initial users.
AI video battles
Midjourney isn't trying to compete head-on with Sora or Veo in terms of technical horsepower. Those models are rendering cinematic-quality 4K footage with photorealistic lighting and long-form narratives based solely on text. They're trained on terabytes of data and emphasize frame consistency and temporal stability that Midjourney is not claiming to offer.
Midjourney’s video tool isn’t pretending to be Hollywood’s next CGI pipeline. The pitch is more about being easy and fun to use for independent artists or tinkerers in AI media.
And it really does come out as pretty cheap. According to Midjourney, one video job costs about the same as upscaling, or one image’s worth of cost per second of video.
That’s 25 times cheaper than most AI video services on the market, according to Midjourney and a cursory examination of other alternatives.
That's probably for the best, since a lot of Hollywood is going after Midjourney in court. The company is currently facing a high-stakes lawsuit from Disney, Universal, and other studios over claims it trained its models on copyrighted content.
For now, Midjourney's AI generators for images and video remain active, and the company has plans to expand its video production capabilities. Midjourney is teasing long-term plans for full 3D rendering, scene control, and even immersive world exploration. This first version is just a stepping stone.
Advocates for Sora and Veo probably don't have to panic just yet, but maybe they should be keeping an eye on Midjourney's plans, because while they’re busy building the AI version of a studio camera crew, Midjourney just handed a magic flipbook to anyone with a little cash for its credits.
Superman is less than a month away from flying into theaters and, according to James Gunn, the superhero movie's full cast still hasn't been revealed.
Speaking to Entertainment Weekly, Gunn confirmed that there are characters in the DC Universe (DCU) film, which arrives on July 11, whose identities haven't been publicly revealed. Asked if there are individuals who'll appear in Superman who haven't been announced yet, the DC Studios co-chief simply replied: "Yes."
Predictably, Gunn's one-word response set tongues wagging among DC comic book fans. Could Batman and/or Wonder Woman make unexpected yet crowd-pleasing cameos? What about other members of The Justice League, such as The Flash or Aquaman? Or how about Peacemaker, whose second season will launch on Max just over a month (August 21, to be exact) after Superman's theatrical release?
I don't think it'll be any of those metahumans. If Gunn is to be believed, Bruce Wayne and Diana Prince haven't even been cast in their standalone DCU Chapter One projects yet. Meanwhile, The Flash and Aquaman haven't been mentioned by Gunn during his two-and-a-half-year stint (at the time of publication) as co-CEO of DC Studios. As for Peacemaker, I'd be very surprised if he makes his official DCU debut before Peacemaker season 2 is released. So, who could Gunn be referring to?
A not so cryptic Kryptonian cameo
In my opinion, there's only one individual that Gunn's reply applies to: Supergirl, aka Kara Zor-El.
Think about it. Per DC Comics lore, Kara is the only other person who survived Krypton's destruction, is Kal-El/Superman's cousin, and was supposed to join her blood relative on Earth to not only help raise him, but protect him while he grew up. Unfortunately, according to DC literature, her ship was knocked off-course by Krypton's explosion and didn't make it to Earth for another 24 years. By then, Kal-El was, unsurprisingly, all grown-up, had adopted the alias Superman, and had become one of the planet's mightiest heroes.
Considering Kal-El is already operating as a superhero as soon as Superman's story begins, we won't see much, if any, of Kara's backstory in one of 2025's most eagerly awaited new movies. Nevertheless, a brief cameo from the Maiden of Might – either before Superman's end credits roll or in a post-credits scene – would be a fun way to introduce her to audiences and tease the familial dynamic she has with her younger cousin.
Supergirl's brief appearance in Superman would also pave the way for her own solo movie. Supergirl: Woman of Tomorrow – it's now known by its simpler title Supergirl, which Gunn recently confirmed – will take flight on June 26, 2026 and, therefore, is the second DCU film that'll arrive in theaters. It would make perfect sense, then, for Milly Alcock's Supergirl to cameo in Superman ahead of the character's first feature film in over 40 years.
There's one more piece of evidence that's convinced me Kara Zor-El will show up in Gunn's Superman movie.
Supergirl is heavily inspired by Tom King and Bilquis Evely's 'Supergirl: Woman of Tomorrow' comic book. In it, Krypto the Superdog joins Kara as she embarks on a mission to help a young warrior exact revenge on the man who killed the warrior's father. You can read more about DC Studios' film adaptation of this graphic novel via my dedicated Supergirl movie guide.
As Superman's first trailer and subsequent follow-up teasers have revealed, Krypto is part of this film's roster. If he's also going to appear in next year's Supergirl film, there's no better way for Kal-El to pass him onto his only surviving relative than by Kara making a brief appearance in the Man of Steel's latest big-screen reboot.
Do you agree with me that Supergirl is all but confirmed to appear in Superman? If not her, who else could Gunn be referring to, in your view? Let me know in the comments.
New data from Indeed claims that despite stronger regulations, corporate image and branding are primarily driving responsible AI mentions in job ads – not policy compliance.
The job platform's analysis – which searched for terms like “responsible AI," “ethical AI," “AI ethics," “AI governance” and “AI – found there was a weak correlation (0.21) between national AI regulation strength and responsible AI mentions in job postings.
Human-centered occupations in legal, education, mathematics and R&D were among the most likely sectors to be using such terms, with tech firms more likely to discuss AI more broadly.
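For a sense of the mechanics behind a study like this, here is a toy sketch of how mention rates and the correlation figure might be computed. The sample postings and regulation-strength scores are invented for illustration; Indeed's actual analysis covers far larger datasets and a longer term list:

```python
import re
from statistics import correlation  # Pearson's r, Python 3.10+

TERMS = re.compile(r"responsible ai|ethical ai|ai ethics|ai governance", re.I)

def mention_rate(postings: list[str]) -> float:
    """Share of job postings mentioning at least one responsible-AI term."""
    return sum(bool(TERMS.search(p)) for p in postings) / len(postings)

# Invented toy inputs: regulation-strength scores mapped to posting samples.
samples = {
    0.9: ["We value AI governance", "Sell widgets", "Own our ethical AI policy"],
    0.6: ["Build ML pipelines", "Responsible AI champion wanted"],
    0.2: ["Data engineer needed", "Fast-paced startup, no experience required"],
}
rates = [mention_rate(postings) for postings in samples.values()]
# Toy r on fabricated data; Indeed's reported figure was a weak 0.21.
print(correlation(list(samples), rates))
```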
Responsible AI is just a keyword
Although responsible AI terms are rising globally (from close to 0% in 2019), they still only account for less than 1% of related ads on average.
The Netherlands, the UK, Canada, the US and Australia lead the way. However, Indeed noted that high AI-regulation countries, such as the UK and those within the European Union, do not have significantly higher mentions of those keywords compared with lighter-regulated countries.
In fact, differences were more noticeable between job sectors rather than regions, with legal (6.5%) way above the average.
Indeed's further analysis of responsible AI mentions across job listings globally suggests that regulatory pressure alone could be insufficient to drive widespread keyword adoption, and that "responsible AI" mentions are more likely to be part of market-based incentives and corporate responsibility strategies.
"This suggests that other factors, including reputational concerns or international business strategies, might be driving Responsible AI adoption as much, or more, than regulatory requirements," the researchers shared.
With rising public concern around AI risks, these terms may serve as signalling tools aimed at clients, investors and the wider market, rather than reflecting deep internal change and commitment.
The third-party supplier many have blamed for the major cyberattack against Marks and Spencer (M&S) has revealed the first findings of its internal investigation into its role in the incident.
Tata Consultancy Services (TCS) has said none of its "systems or users were compromised" as part of the cyberattack.
"As no TCS systems or users were compromised, none of our other customers are impacted" independent director Keki Mistry told its annual shareholder meeting, Reuters reports.
TCS role and investigation
M&S was apparently hit by the attack on April 22, revealing news of the incident several days later.
Following an initial probe, experts proposed that the attackers were able to break into its systems by compromising workers at TCS, which has provided third-party services to M&S for over a decade on Sparks, the retailer's customer reward scheme.
In 2023, TCS also reportedly secured a $1 billion contract to modernize M&S' legacy technology across its supply chain and omni-channel sales, aiming to boost online sales.
TCS, part of the massive Tata Group conglomerate, was reported to be carrying out a full investigation, but has remained quiet until this unexpected (and brief) mention.
M&S has forecast the attack could cost it around £300 million in lost operating profit in its financial year.
It was recently revealed that the hackers contacted M&S CEO Stuart Machin in a mocking email the day after the attack, demanding payment.
This email was sent from the DragonForce hacking collective, which carries out such attacks on behalf of other parties in exchange for a cut of any ransom payments.
M&S has not confirmed whether it has paid a ransom to the hackers, but did admit some customer data was stolen in the attack. This did not include any passwords or card or payment details, but home addressess, phone numbers and dates of birth may have been affected.
If you're concerned your data may have been taken, we recommend using a dark web monitoring service, or a breach monitor such as Have I Been Pwned, to check for potential exposures.
TCS has not yet responded to a TechRadar Pro request for comment.
With each passing day, AI is becoming more intelligent, more sophisticated, and more valuable to businesses worldwide. The opportunities it presents are endless – but only if brands are willing to embrace the new tech-driven business landscape and keep pace with rapid change.
There are many ways AI tools can be leveraged to work smarter, save time, reduce costs, and unlock new opportunities. Here are some tips and tricks to get you started…
1. Use AI to Cut Through the Data Noise
Recent breakthroughs in machine learning, natural language processing, and IT automation mean businesses can now access vast amounts of valuable internal and external data. High-quality data is critical to understanding your audience, spotting unmet needs, and tailoring strategies that truly resonate, but simply having access to more data doesn't automatically translate into better outcomes.
Businesses must find ways to sift through the noise - and that’s where AI becomes invaluable. By using AI-powered tools, companies can extract actionable insights faster, more accurately, and at a larger scale, providing business leaders with a clear understanding of the steps needed to drive growth within their sector.
2. Accelerate Decision-Making with AI Insights
When applied correctly, AI-powered insights can dramatically enhance decision-making. Machine learning algorithms can process extensive datasets in seconds, allowing businesses to swiftly adapt to market changes and optimize pricing strategies.
AI can also analyze consumer behavior and competitor activity in real time, automatically adjusting pricing, marketing, and inventory strategies to maintain competitiveness - all while freeing your teams to focus on higher-value work that directly impacts profitability and operational efficiency.
3. Stay Ahead by Predicting Emerging Trends
One of AI's greatest strengths lies in its ability to predict shifts before they happen. Traditional market research relies on historical data, while AI-powered research looks forward, identifying emerging trends early.
Businesses can monitor social media, search behavior, and purchasing patterns with AI to predict what’s coming next — whether that's the next retail craze or the next big opportunity in your sector.
Companies are already capitalizing on AI to help them do this. For example, fashion brands are using AI to forecast style trends from TikTok, while FMCG companies are tracking real-time sentiment to guide product innovation.
4. Personalize Customer Experiences at Scale
Personalization has become the gold standard in customer engagement, and AI is the ultimate engine behind it. Ecommerce platforms use AI-powered recommendation engines to suggest products based on past behavior. Streaming giants like Netflix and Spotify use it to curate content that keeps users coming back.
By tapping into consumer data, AI allows businesses to deliver highly personalized experiences - increasing customer satisfaction, loyalty, and ultimately, revenue. AI-driven chatbots and virtual assistants also ensure faster, more consistent customer support, minimizing human error while boosting brand connectivity.
5. Prioritize Ethical AI and Empower Your Workforce
As AI becomes central to operations, businesses must also prioritize responsible use. Machines are only as effective as the ethical frameworks behind them. Rather than replacing your workforce, AI should equip your employees with powerful tools to help them perform at their very best. Forward-thinking companies are investing in reskilling programs, ensuring their employees understand AI's role and can work alongside it to create even greater value.
Meanwhile, AI companies must also stay at the cutting edge of innovation, continuously adapting their services to meet the changing needs of businesses operating in an increasingly complex, tech-driven world.
The Bottom Line
Intelligent application of AI is rapidly becoming the new foundation of business success. Companies that integrate AI into their strategy, decision-making, and customer engagement processes today will be the ones leading their industries tomorrow. Those who act now will gain a competitive edge: operating with greater agility, efficiency, and precision, freeing up valuable time and resources to focus on innovation and growth.