If you watched the controversial premiere, you’ll know that South Park has taken a brief hiatus following season 27’s explosive return to our screens. We’re now back on track: in the latest episode, South Park Elementary counselor Mr. Mackey is fired due to budget cuts, then gets a job with United States Immigration and Customs Enforcement (ICE). Unsurprisingly, episode 2’s plot is the source of my new, spontaneous award: the South Park burn of the week.
As we know, the Paramount+ show has never shied away from providing hot takes on world events, and South Park season 27 has got off to a fairly explosive start. We’ve seen a deepfaked version of President Donald Trump in full nude mode in episode 1, with episode 2 quickly following up with the horrific death of Superman’s sidekick Krypto, who gets shot in the head by US Secretary of Homeland Security Kristi Noem (RIP, you were a good boy to the very end).
But for me, the best critique of US politics from South Park this week wasn’t even seen in episode 2. If you’ve been chronically online over the last few days, you might know what I’m talking about. If you’re a well-rounded, rational person with hobbies and a life (unlike myself), let me catch you up.
South Park claps back at the White House in this major burn not seen in season 27 episode 2:

"Wait, so we ARE relevant? #eatabagofdicks" (X post, August 5, 2025: https://t.co/HeQSMU86Da)
As I’ve touched on, ICE is coming under heavy fire in South Park season 27, making up a prevalent part of both episodes we’ve seen so far. However, in an unexpected turn of events, the White House has tried to use being the butt of the joke to its advantage. As you can see from the X/Twitter post above, the White House’s official account posted a (pretty sinister-looking) screenshot of South Park-ed versions of ICE agents to try to recruit people to join their ranks. The post was immediately criticised not only by fans, but by the South Park creators themselves, who replied: “wait, so we ARE relevant? #eatabagofd*cks.”
The show was no stranger to parodying Trump long before season 27 arrived on the scene, and the clapback is also a direct response to a Variety interview with White House spokesperson Taylor Rogers on the events of episode 1. "Just like the creators of South Park, the Left has no authentic or original content, which is why their popularity continues to hit record lows,” Rogers said in response to the deepfake. “This show hasn't been relevant for over 20 years and is hanging on by a thread with uninspired ideas in a desperate attempt for attention."
The Department of Homeland Security (DHS) additionally told Newsweek: “We want to thank South Park for drawing attention to ICE law enforcement recruitment. We are calling on patriotic Americans to help us remove murderers, gang members, pedophiles, and other violent criminals from our country.”
It’s too early to tell yet, but I wouldn’t be surprised if episode 2 rivals the astonishing viewing figures of season 27 episode 1 (which drew 6 million viewers, making it the most-watched South Park episode since 1999). The X post in question currently has 15.2 million views, meaning there’s a good chance more people will be tuning in as the weeks go on. If we’re being honest, the show is likely only just getting started with its scathing satirization of Trump and the US government, so I’d suggest keeping one eye on new episodes and the other on the show’s social channels.
The next generation of ChatGPT, titled GPT-5, is expected to be revealed later today in an OpenAI livestream – but a leak on GitHub appears to have revealed everything the AI pioneers will unveil during the event.
The leaked information, posted to a GitHub blog, highlights the different iterations of GPT-5, which is described as "OpenAI's most advanced model, offering major improvements in reasoning, code quality, and user experience."
The GitHub blog post has since been taken down, but can still be easily accessed via the Internet Archive.
The post highlights the following four new models:
"LIVE5TREAM THURSDAY 10AM PT" (X post, August 6, 2025)
We expect to get more information on these new models during today's livestream, which begins at 10am PT / 1pm ET / 6pm BST.
While none of the models in the leak seem to hint at the simplified structure we were hoping for, I'm still optimistic that GPT-5 ushers in a future where OpenAI's complex naming scheme isn't a point of friction for end users.
Sam Altman has promised that ChatGPT's future will see all models incorporated under a single interface, with the AI able to determine which functionality to use based on the user's prompt. While this leak doesn't appear to point to that change, Altman might explain how and when that will happen later today.
Stay tuned to TechRadar for all the GPT-5 news as it's announced. We'll be live-blogging throughout the day, documenting any information we find on the future of ChatGPT in the build-up to the livestream.
Samsung has accidentally leaked the new Galaxy Buds 3 FE on its Panama website, and I'm trying very hard not to make a bad joke about the Panama Ear Canal.
This isn't the first time Samsung has accidentally leaked these buds: when Evan Blass leaked images of them in July, Samsung (probably) got them pulled – the removed images then read "Media not displayed: this image has been removed in response to a report from the copyright holder", which all but confirmed they were real.
As much as I'd love to put on my tinfoil hat and claim that this is a clever marketing strategy, it clearly isn't: it's a cavalcade of mishaps and cock-ups.
So what have we learned from this latest oopsie?
The new Buds 3 FE look like washed-out versions of the Buds 3 Pro, pictured above (Image credit: Future)

Samsung Galaxy Buds 3 FE: what's leaked?

This time the leak is the accidentally published product page, which means we know their Panamanian price ($129), what they look like (a bit like monochrome Galaxy Buds 3 Pro), and at least some of the color options (dark gray and white).
The Galaxy Buds 3 FE appear to have silicone ear tips but everything else is a guess: as Android Police reports, the product page doesn't include any actual product information, which is yet another indication that somebody's hit the go button too early.
In case you missed it, the Buds 3 FE are the follow-up to the original Buds FE or 'Fan Edition'. That's right, there's no Buds 2 FE, because the also-new Galaxy Buds Core effectively take that spot.
We thought the original Buds FE were, well, OK, describing them as "the Samsung equivalent of Apple AirPods" with "reasonably good" sound and decent ANC. Given how good the Buds 3 Pro are, here's hoping some of their sound quality and other improvements trickle down to Samsung's more affordable option.
The Samsung Galaxy S25 Edge caused a commotion when it was revealed at the end of this year’s first Samsung Galaxy Unpacked event, so much so that the phone’s actual launch on May 30 seemed to come and go with relatively little fanfare.
And with the launch of the Samsung Galaxy Z Fold 7 and Galaxy Z Flip 7 taking over the airwaves at the end of July, it seemed possible that the new Edge series could fall into the background.
However, a new rumor suggests that Samsung is committed to supporting and refining its latest flagship series, and could bring some serious improvements to the Galaxy S26 Edge that put it right back into the spotlight.
Notable tipster Ice Universe (who now goes by the display name PhoneArt on X, formerly Twitter) shared a brief post on August 6 that simply reads: “Galaxy S26 Edge 5.5mm 4200mAh."
That first measurement most likely refers to the thickness of the phone – the Galaxy S25 Edge measures 5.8mm at its thinnest point, and having gone hands-on with the phone myself, I can vouch that it’s already incredibly impressive to hold.
At 5.5mm, the Galaxy S26 Edge would be thinner than an unfolded Galaxy Z Fold 6, the latter coming in at 5.6mm in its open state. That’s impressive, considering the Galaxy Z Fold 6 has much more internal space to fit its components into.
The second figure Ice Universe mentions refers to battery capacity, and, if accurate, would make a slimmed-down chassis even more impressive.
The Samsung Galaxy S25 Edge has a 3,900mAh battery – as our full Samsung Galaxy S25 Edge review notes, this is low for a modern flagship phone, but understandable given its svelte construction.
The Galaxy S25 Edge measures 5.8mm at its thinnest point (Image credit: TechRadar)

If Ice Universe is on the money (and they have a fairly solid track record), then Samsung will have managed to increase capacity by nearly 8% while fitting the new cell into a smaller frame.
Around the time of the Galaxy S25 Edge’s reveal, I wrote that Samsung’s new slim flagship could open doors for a new branch of the smartphone market altogether, so this rumor gives me hope that the Korean tech giant is continuing to give the revived Edge series its full attention.
In fact, I think the Galaxy S26 Edge has the potential to be one of the best Samsung phones, or even one of the best Android phones, if these upgrades turn out to be real.
However, this post from Ice Universe is far from the most detailed tipoff we’ve ever gotten, so it’s probably best to wait for further tips and rumors to back up these suggestions.
In any case, we don’t expect to hear official word of the Samsung Galaxy S26 Edge until next year – for now, let us know what you want to see from this rumored phone in the comments below.
Google might have some major software upgrades planned for the cameras on the Pixel 10 series, because along with a new Gemini-powered Camera Coach, these phones might also offer a 'Conversational Photo Editing' mode.
This is according to Android Headlines, which claims that the new software tool will also be powered by Gemini, and will allow you to use your voice (or typed text) to ask for changes to photos, such as adjusting the background, brightening the image, or erasing an object.
So, this would give you one more way to edit your photos, and might make the process easier for users who aren’t confident in their hands-on editing skills.
Android Headlines claims that Conversational Photo Editing will come to every Pixel 10 model – so (we think) the Pixel 10 itself, the Pixel 10 Pro, the Pixel 10 Pro XL, and the Pixel 10 Pro Fold. The site speculates that it might also eventually roll out to older models as a Pixel Feature Drop, but initially, at least, it’s thought to be exclusive to the upcoming phones.
A trio of colors

In other Google Pixel 10 news, Roland Quandt (a leaker with a great track record) has shared some renders of the Pixel 10 with WinFuture, some of which you can see below.
(Image credit: WinFuture / Roland Quandt)

These don’t really show us any part of the phone that we haven’t seen in earlier Pixel 10 design leaks, but they do give us a close look at the handset in blue, yellow, and black shades, which are rumored to be called Indigo, Limoncello, and Obsidian, respectively.
The blue and yellow in particular are quite striking, and are sure to stand out among most smartphones, so we hope this leak is accurate.
We’ll find out soon, as Google is expected to unveil the entire Pixel 10 line – with the possible exception of the Pixel 10 Pro Fold – on August 20.
Reikon Games, the studio behind Ruiner, has announced that its next game, Metal Eden, will officially launch on September 2 for PC, PlayStation 5, Xbox Series X, and Xbox Series S.
From publisher Deep Silver, Metal Eden is an adrenaline-fueled sci-fi first-person shooter (FPS) featuring fast-paced combat combined with cybernetic parkour. It explores a world where humanity's consciousness has transcended the flesh, now residing within robots.
A brand new story trailer has also been released, showcasing the main character Aska, a Hyper Unit robot, who is sent on a suicide mission to rescue the citizens’ cores from the city of Moebius, once a hopeful new home for humanity, now turned into a deadly trap.
"Time to descend into the cryptic, atmospheric world of Metal Eden. It’s an invitation to embark on an immersive journey into the heart of Planet Moebius, where mankind’s remnants are trapped within decaying cores—and Aska may be the key to their survival," Reikon Games describes.
"Her hidden past begins to unfold as she confronts the devastating legacy of the Erosion Bomb and her own transformation from human to weapon."
The game also features eight unique missions where players will need to defeat the Internal Defence Corps in cybernetic warfare, confront engineers, and "uncover the mysteries of the project Eden."
Metal Eden was revealed in February as part of Sony's State of Play and was originally set for a May 6, 2025, release.
Although the game doesn't arrive for another month, a free demo is now available to play on all platforms, including Steam.
In TechRadar Gaming's Metal Eden preview earlier this year, hardware writer Dashiell Wood said the game is "an eclectic blend of everything that made Doom (2016) and Ghostrunner great, with bombastic action combat that challenges you to experiment with a varied arsenal of meaty futuristic weapons, and a fast-paced parkour system where you’ll be running off walls and gliding down neon-lit rails between fighting arenas."
September 16 is D-Day for Dashlane, and by D-Day, I mean the discontinuation of the password manager’s free plan.
Users will have to upgrade to a Dashlane paid plan or switch over to a different free password manager. The free plan has long been the go-to for anyone who wants a no-frills credential manager that doesn’t break the bank.
Dashlane ditches free plan

Users who do decide to make the switch to a different provider can export all their stored passwords from Dashlane as a CSV file, and import that file into an alternative provider.
Just be sure to delete the unencrypted CSV file once you are finished using it or it could put all your passwords at risk.
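That cleanup step can be scripted. Here's a minimal sketch (the file name and CSV column layout are assumptions for illustration, not Dashlane's documented export format) that loads the export for re-import and then scrubs the plaintext file:

```python
import csv
import os

def read_credentials(path):
    """Load entries from the exported CSV, ready to import elsewhere."""
    with open(path, newline="", encoding="utf-8") as f:
        return list(csv.DictReader(f))

def scrub_and_delete(path):
    """Overwrite the plaintext export with zeros, then delete it.

    Best-effort only: SSDs and journaling filesystems may retain copies.
    """
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        f.write(b"\0" * size)
        f.flush()
        os.fsync(f.fileno())
    os.remove(path)

# Example usage (file name is hypothetical):
#   creds = read_credentials("dashlane-export.csv")
#   ...import creds into the new manager...
#   scrub_and_delete("dashlane-export.csv")
```

The overwrite-before-delete is a belt-and-braces measure; the important part is that the unencrypted file doesn't linger on disk after the migration.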
To export passwords from Dashlane as a CSV file, take the following steps:
For those looking to continue using Dashlane, the platform offers two options: Premium or Friends & Family.
The Premium plan grants you access to unlimited password and passkey storage, phishing alerts, secure sharing, Dark Web Monitoring for leaked credentials, a VPN, and passwordless login for new users. All these features are available across an unlimited number of devices at $4.99 per month (billed annually).
The Friends & Family plan includes all of the above across 10 accounts, with the one caveat being that only the plan manager will have access to the VPN. This plan costs $7.49 per month (also billed annually).
Alternatively, a number of competitors also offer free plans. NordPass’ free tier offers unlimited password storage but only allows one active session at a time. RoboForm offers unlimited password storage and throws in a two-factor authenticator app, password monitoring, and cloud backup. Bitwarden, meanwhile, offers a free tier across an unlimited number of devices and includes secure passkey management, with the added promise of it always being free.
The TechRadar team has seen plenty of wearables over the years, and if you asked us about the best Garmin watches on the market right now, the Garmin Venu 3 would certainly be involved. It's an impressive device for all kinds of reasons, but an even better model is on the way in the form of the Garmin Venu 4.
Garmin hasn't said anything officially about this smartwatch yet, but a few leaks and rumors have emerged suggesting it's on the way. The Venu 3 launched in August 2023, so the time is definitely right for Garmin to introduce a successor.
In June 2025, we saw the launch of the Garmin Venu X1, which was something of a surprise – but the Apple Watch Ultra 2 competitor doesn't appear to be the true successor to the Venu 3, despite the branding. Read on to find out everything we think we know so far about what the Garmin Venu 4 will bring with it.
Cut to the chase

The leaks and rumors we've seen around the Venu 4 haven't specifically referred to a launch date or a price – but we can make some educated guesses on both counts, based on what Garmin has done in the past.
Garmin launched the Venu 3 in August 2023, while the Garmin Venu 2 broke cover back in April 2021 (there was also a Garmin Venu 2 Plus in January 2022, adding a mic). With a two-year gap between previous releases, August 2025 would be the smart guess for the Venu 4.
As for pricing, all we have is what we saw with the Venu 3: that watch originally went on sale for $450 / £450 / AU$749, so it's in the middle of the pack compared to other Garmin watches. It seems likely that the Venu 4 will take the same approach, sitting somewhere between the high-end Fenix models and the cheapest Forerunners.
Garmin Venu 4: Leaks and rumors

The Garmin Venu X1 (Image credit: Garmin)

Now, we haven't been exactly inundated with Garmin Venu 4 leaks and rumors, so this section of our preview is going to be a little sparse. What we can tell you is that a mention of the Venu 4 watch has appeared on the Garmin Japan website – specifically, in a description of the Garmin golf app.
That tells us that the Garmin Venu 4 is almost certainly on the way, that you'll be able to use it to improve your golf game, and... not much else. Garmin has actually since removed the mention of the Venu 4 from the golf app documentation, so make of that what you will. Clearly, its existence was supposed to be a secret.
While it's not a leak per se, a report from Garmin Rumors does point out that out of all of Garmin's flagship smartwatches, the Venu is the one that's been waiting for a refresh the longest. The Garmin Fenix 8 was unveiled in August 2024, which gives us another indication that August 2025 could be the right time for the Venu 4.
We can also pick up some hints from the recently unveiled Garmin Venu X1, which shows how Garmin as a company is changing. It sports the biggest screen yet on a Garmin watch, and follows the recent Garmin trend of sacrificing battery life for display quality – perhaps hints at ways in which the Venu series may evolve.
Garmin Venu 4: What we want to see

Sensors on the Garmin Venu 3 (Image credit: Future)

As much as we like the Venu 3, it isn't quite a perfect smartwatch – and two years is a long time in the gadget industry. With that in mind, here are five improvements we're hoping to see when the Garmin Venu 4 eventually sees the light of day.
1. An improved design

The Venu X1 has already shown us what Garmin is capable of in terms of design refinements: it's noticeably thin and lightweight, and that's something we're hoping for with the Venu 4, perhaps with upgrades in terms of bezel style and overall durability.
We've seen titanium used in watches like the Venu X1 and the Forerunner 970, both launched in 2025, and the Venu 4 may well follow that trend. Given the two-year gap, we'd expect an upgrade on the Venu 3's 1.4-inch AMOLED, 454 x 454 pixel display too.
2. Upgraded sensors

We're always looking for new and improved sensors on smartwatches and fitness trackers, whether it's improved accuracy for measurements or entirely new categories of measurement – and this is something that the Venu 4 could well be able to deliver on.
The latest Garmin smartwatches to launch are still making use of the Elevate v5 sensor that the Venu 3 is fitted with, so you could argue it's time for an upgrade in this area – even if the Venu 3 is already one of the best models on the market for health and fitness insights.
3. More software features

In our Garmin Venu 3 review, we pointed out that the watch was missing out on some of the more detailed metrics and advanced features that are available on other Garmin models – so Garmin could implement these features on the Venu 4, if it wanted to.
Features such as Suggested Workouts and Training Readiness aren't found on the Venu 3, but could be transferred over to the Venu 4 from other watches, adding to the appeal. It would be a way for Garmin to increase the value of the Venu 4 relatively easily.
4. On-board mapping

One of the disappointments about the Garmin Venu 3 is that it doesn't have an integrated mapping or route navigation feature, which puts it behind other models in the Garmin range – no doubt very deliberately, so each series remains distinct from the others.
While it might create some overlap with the more advanced Garmin smartwatches, integrated maps would be a welcome and useful addition. It's worth noting that certain Garmin watches were recently upgraded with Google Maps support, which is a good sign.
5. A cellular model

In recent years, Garmin has largely abandoned the idea of watches that can get online independently of a connected phone – no doubt making the calculation that the trade-off in terms of battery life and device price isn't going to be worth it for most users.
However, there's a lot to be said for a cellular watch option that can message, call, and update itself independently of any other gadget. It would help the Venu 4 stand out from the Garmin crowd, and mean it's better able to take on the best Apple Watches.
Google has announced the general availability of its latest AI coding agent, Jules.
Initially revealed in December 2024 as a Google Labs project, Jules has now launched as an offering to paying customers, but limited free access is also confirmed.
In a blog post announcing the launch, Google stated its decision to use Gemini 2.5 Pro would lead to "higher-quality code outputs."
Google makes Jules generally available

Designed for asynchronous operation, Jules can work in the background without user supervision, a considerable improvement over earlier generative AI coding assistants. Supporting multimodal inputs and outputs, Jules promises to write, test, and improve code while simultaneously visualizing results for its users.
Google hopes its new AI agent will not only be a valuable tool for developers, but also website designers and enterprise workers who don't have sufficient coding experience.
During the beta phase, users submitted hundreds of thousands of tasks to Jules, with more than 140,000 code improvements shared publicly.
Now that Google's confident Jules works, general availability lands with a new streamlined user interface, new capabilities based on user feedback and bug fixes.
Although the free plan gets the same Gemini 2.5 Pro backing as the higher-tier options, it's limited to 15 daily tasks and three concurrent tasks.
Pro ($124.99/month) adds support for up to 100 daily tasks and 15 concurrent tasks, as well as "higher access to the latest models, starting with Gemini 2.5 Pro," suggesting it is likely to get model improvements before the free tier.
Ultra ($199.99/month) gets priority access to those latest models, plus 300 daily tasks and 60 concurrent tasks.
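For anyone scripting against an agent with per-plan quotas like these, a small client-side guard illustrates how the tier limits above interact (the numbers are taken from the reported plans; the class and its methods are purely illustrative, not any Google API):

```python
# Client-side quota guard using the reported tier limits.
# Illustrative only: not part of any Google API.
PLAN_LIMITS = {
    "free":  {"daily": 15,  "concurrent": 3},
    "pro":   {"daily": 100, "concurrent": 15},
    "ultra": {"daily": 300, "concurrent": 60},
}

class TaskQuota:
    def __init__(self, plan):
        self.limits = PLAN_LIMITS[plan]
        self.submitted_today = 0
        self.running = 0

    def can_submit(self):
        # Both the daily cap and the concurrency cap must have headroom.
        return (self.submitted_today < self.limits["daily"]
                and self.running < self.limits["concurrent"])

    def submit(self):
        if not self.can_submit():
            raise RuntimeError("quota exceeded: wait for a task to finish or the daily reset")
        self.submitted_today += 1
        self.running += 1

    def finish(self):
        self.running -= 1
```

On the free tier, for instance, a fourth concurrent task would be rejected until one of the three running tasks finishes, even though the daily allowance of 15 hasn't been touched.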
AI has become synonymous with business transformation, promising insights and efficiency. Yet for many CEOs, traditional AI tools remain frustratingly passive, surfacing insights but failing to take action. Today’s business leaders don’t need more dashboards; they need execution.
This gap often stems from a misunderstanding of AI's role. Tools like “co-pilots” transcribe, summarize, and recommend, but they still rely on humans to follow through. That missing “last mile” is where execution breaks down, costing companies time, revenue, and agility.
Understanding the AI Dichotomy

There's a widespread misconception about AI's role in modern business operations, and many CEOs don't understand the difference between its reactive and proactive forms. Traditional AI models, including generative AI (GenAI) and transcription services, rely on human intervention to move from insight to action.
They surface recommendations but require human oversight to execute, often causing operational stalls and insights that aren’t accounted for in decision-making. According to Gartner Research, 73% of insights captured by legacy AI tools never translate into executed actions, highlighting a tangible gap between data availability and operational execution.
Imagine a sales representative finishing a call where a potential customer expresses interest but mentions budget constraints. A traditional AI tool captures this interaction and generates a transcript, flagging the budget issue as a critical insight. However, it's up to the representative, assistant, or manager to manually review this flagged point, determine the next steps, update CRM records, and communicate that flagged point in their follow-ups.
This manual process introduces delays, allows for human errors, and increases the likelihood that the lead cools off or engages with a competitor in the meantime. Despite recognizing valuable data, the reactive nature of traditional AI means execution gaps persist, leaving executives puzzled when expected outcomes fail to materialize.
Misunderstandings Around Reactive and Proactive AI

The issue isn't just technological; it's conceptual. Organizations continue to misunderstand the distinct roles and capabilities of different AI categories throughout their operations. Traditional reactive AI solutions are often perceived as holistic operational fixes, setting unrealistic expectations and leading to implementation failures and skepticism about AI's overall efficacy in the first place.
The misunderstanding also encompasses risk and accountability.
Proactive agentic AI might raise concerns about automated errors or missteps. However, human leaders still hold the reins for overall strategy and remain ultimately responsible for the outcomes. Agentic AI does not remove professional human oversight; instead, it supports leaders by automating routine operational tasks, enabling teams to focus strategically and capitalize on high-value opportunities.
The Proactive Shift: Introducing Agentic AI

Agentic AI is a monumental leap in how AI operates, shifting from simply offering insights to actively taking the reins and executing tasks autonomously within existing workflows. Rather than merely highlighting data trends, it triggers structured, automated actions directly from the surfaced insights, ensuring that customer and market signals are promptly acted upon and ultimately boosting revenue outcomes.
There is a spectrum of agentic AI capabilities, ranging from advanced automation to autonomous decision-making. It is important to know how and where to employ this power securely and appropriately.
This type of AI continuously captures structured, clean, first-party data from customer interactions, such as sales calls, emails, and meetings. It then automatically integrates this information into CRM systems, communication platforms, and operational workflows, letting no insight fall through the cracks. Unlike traditional AI that merely suggests actions, agentic AI independently completes these tasks, reducing administrative overhead and operational friction.
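To make the contrast concrete, here is a deliberately simplified sketch (every function, field, and record name is hypothetical; no vendor's API is implied): the reactive pipeline stops at flagging the insight, while the agentic one carries it through to the CRM update:

```python
# Hypothetical sketch of reactive vs agentic handling of a surfaced insight.
# All names are illustrative; no vendor API is implied.

def extract_insights(transcript):
    """Naive keyword pass standing in for a real NLP model."""
    insights = []
    if "budget" in transcript.lower():
        insights.append({"type": "budget_constraint",
                         "next_step": "send_pricing_options"})
    return insights

def reactive_pipeline(transcript):
    """Traditional tool: surfaces insights, then waits for a human."""
    return {"flagged": extract_insights(transcript), "crm_updated": False}

def agentic_pipeline(transcript, crm):
    """Agentic tool: executes the follow-through automatically."""
    insights = extract_insights(transcript)
    for insight in insights:
        crm.setdefault("notes", []).append(insight["type"])
        crm["pending_action"] = insight["next_step"]
    return {"flagged": insights, "crm_updated": bool(insights)}
```

In the reactive case the budget concern sits in a transcript until someone reads it; in the agentic case the CRM record already carries the note and the queued follow-up by the time the call ends.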
The Cost of Administrative Overhead

Traditional AI's reactive approach exacerbates administrative burdens, inevitably impacting productivity and revenue potential. Boston Consulting Group reports that sales representatives spend up to 45% of their time on administrative tasks, such as CRM updates and manual follow-ups. This administrative overload limits their capacity to engage in revenue-generating activities and reduces overall sales effectiveness.
For CEOs and revenue leaders, execution speed directly correlates with revenue performance. Delays in responding to customer dissatisfaction, competitive shifts, or emerging market opportunities can lead to substantial financial setbacks. Even minor operational delays can mean the difference between growth and stagnation.
That execution gap is precisely what agentic AI is built to resolve. By embedding directly into existing workflows and autonomously executing necessary tasks, it ensures immediate, structured responses to market signals. Instead of solely identifying churn risks, agentic AI proactively alerts customer success teams with clearly defined actions to prevent revenue loss.
Interoperability and Operational Agility Across the Enterprise

A major limitation of traditional AI tools is their siloed nature. Data outputs typically require manual intervention to distribute across departments, creating inefficiencies and inconsistencies. Agentic AI, in contrast, operationalizes intelligence by integrating across the enterprise's existing technology stack, enhancing transparency and consistency among sales, marketing, and customer success teams. This integration allows for interoperability while reducing delays associated with manual transfers and human-dependent workflows.
Operational agility has become a priority for CEOs who face rapidly shifting markets and fierce competition. While traditional AI provides important insights, it lacks the execution capacity to drive agile responses. Agentic AI meets this demand by automating real-time, responsive actions within core business processes.
Embracing Agentic AI: The Path Forward

Why is agentic AI so important right now? Because understanding and embracing it isn't just about gaining an edge; it's about finding and seizing opportunities in today's fiercely competitive, resource-strained, and unpredictable markets. This goes beyond a simple tech improvement; it's a way to redefine how businesses turn intelligence into action, directly converting strategic insights into real, immediate impact.
This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro
At London Tech Week we heard Keir Starmer make a commitment to the UK people that we would “become an AI maker, not an AI taker”. But how did the UK shift from an assumed frontrunner to an AI underdog?
Economic and geopolitical instability, including tariffs and ever-changing political alliances, has caused global technology leaders to realize that both their physical and digital infrastructure is best kept in-region. This allows them to protect themselves and not let innovation be hampered by outside forces. This has led to what we are seeing with Starmer’s comments - investment and commitment towards keeping AI systems developed and controlled within the UK.
Enterprises and governments are being shown that sovereign AI and data platforms are no longer a nice-to-have, but a must-have. These are defined as open source-based systems where data and AI are governed together at the edge, on-prem, or in-country cloud. This feels new to many organizations as it requires more than just turning a dial to more or less cloud.
Where we stand today

Research shows that the UK is falling behind in AI innovation, despite having strong IT infrastructure and a robust workforce in place. Top enterprise leaders aren’t currently matching the government’s urgency or investment focus on AI and data. This disconnect is particularly stark in the banking sector, which was seen as the UK’s most likely AI growth engine only a year ago.
Sovereignty over AI and data must be treated as mission-critical and acted on quickly, for every economy and every enterprise within it. If the UK continues to hesitate, the concern is that it could lose its economic edge. In today’s world, if you can’t control your data and AI, you’ll struggle to stay ahead.
So what needs to be done to fix this growing problem and reposition the UK as an AI leader with a solid base to scale in-region?
1. An intention gap
When it comes to intent to build sovereign AI and data platforms, UK leaders are among the least committed across the globe, despite government-backed programs being critical infrastructure plays.
Needless to say, if national ambition isn’t matched by enterprise commitment, the UK risks losing its early advantage.
2. Seeing beyond the immediate, and building for it
Globally, it appears that success hinges on a strategic commitment to full data access, open source foundations, integrated AI tools, and hybrid infrastructure, as well as accelerating applications into an agentic state.
The fastest-moving economies aren’t siloed in their application; generative and agentic AI are transforming every industry. They’re building sovereign AI and data factories on open source, flexible, and future-proof architectures, meaning their AI and data can adapt and deliver value across borders, partners, and time.
In countries leading the charge, enterprise leaders follow these core beliefs:
1.Deep integration of AI and data is critical.
2.Sovereignty isn’t a choice—it’s a necessity.
3.Sustainable success relies on controlling your AI and data platform.
The next three years will shape which economies control the future of data and, consequently, AI. Although trillions have been invested by UK enterprise and government to build one of the world’s most advanced AI ecosystems, without strategies tied to these three core principles, these assets won’t deliver ROI.
3. Sensing the urgency, and adapting to it
The UK is not alone in facing this crossroads - Germany, Saudi Arabia, and the UAE are also converting infrastructure into execution. However, the UK seems to be hesitating more than its counterparts. Among all of these competitors, there is growing recognition that sovereign control over AI and data is now essential - exactly the push that is needed.
This recognition is at the heart of reshaping enterprise priorities. As more leaders act, the foundations they’re choosing matter just as much as the strategy itself.
Closing remarks
The divide between early movers and those hesitating is already clear. Just 13% of enterprises have fully integrated AI and data operations, but they account for 21% of the total global ROI, signaling what’s possible when strategy and execution align at speed.
There’s a huge opportunity within this space, as the global AI and data economy is projected to reach $16.5 trillion by 2028. The UK still has a structural advantage with world-class infrastructure, talent, and public investment. All that’s left is action.
Like it or not, cyberattacks are now a regular occurrence, and part of everyday life. However, despite this predictability, it still remains impossible to pinpoint exactly when and where they will occur. This means that businesses must remain vigilant, constantly on the lookout for any and all potential threats.
From the moment a company is created, it must be assumed that attacks will come. Just because it is new and unknown does not mean it is safe. Take DeepSeek, for example: despite being the new kid on the block, as soon as its name hit the news it was hit with a severe large-scale attack. However, this does not give established companies an excuse to drop their guard.
The past couple of months alone have seen some of the biggest names in retail fall victim, with large scale companies like M&S and Dior unable to properly defend against attacks. No matter how big the company, it is vital to employ a well-rounded cybersecurity strategy that provides security from the foundational stages of development through to the latest iteration.
Siloed teams are outdated
The key to weathering the storm of cyberattacks is a firm foundation. Cybersecurity principles must be embedded from the outset, ensuring a strong and secure beginning for any product or system development. These defenses must be continually built upon, monitored, tested and updated on a proactive basis to ensure any potential vulnerabilities are mitigated before they can become a threat.
Threats are constantly evolving, and the attack defended against today could be the one that breaks through tomorrow. Therefore it is imperative to keep any and all threat intelligence up to date, monitoring threats in real-time and continuously sharing the information business-wide.
Unfortunately, it is the dissemination of this information that can cause issues - especially when different teams are receiving information late, or not at all. This is often the case in organizations that employ a siloed approach, with individual teams working in isolation from each other.
This fragmented structure can not only impact an organization's ability to detect and respond to threats, but the capability to learn from them and share these insights with other teams. Without a formal structure in place to facilitate cross-team collaboration, teams may develop different processes in parallel, use different tools, and fail to communicate across functions when facing risks or as incidents unfold.
As a result, security controls are inconsistent, making it tough, if not impossible, to establish standard methods for sharing threat intelligence and incident response procedures.
Introducing collaboration
A centralized platform that unifies threat intelligence company-wide will strengthen security efforts across departments and ensure that teams operate as part of a shared vision. Creating common goals and metrics encourages collaboration and establishes a clear sense of purpose. Threat Intelligence Platforms (TIPs) enable organizations to adopt this approach, integrating across business systems and providing automated intelligence sharing.
TIPs act as the heart of an organization's cyber defenses, gathering information from across multiple sources, from public feeds, to industry reports, and distributing it across all teams. They are able to sift through the data and identify serious threats, advising teams where to focus their efforts to prioritize the most at-risk vulnerabilities.
Through the automation of processes such as data collection, and by removing internal communication barriers, organizations can translate scattered, complex cyber-threat information into coordinated action to protect critical assets faster and more comprehensively. This will result in improved threat detection, quicker incident response times, and greater overall cyber resilience.
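The kind of aggregation a TIP automates can be sketched in a few lines. This is an illustrative toy, not any particular product's logic: the feed contents, indicator values, and the 0-10 severity scale are all assumptions made for the example.

```python
# Minimal sketch of TIP-style feed aggregation: collect indicators from
# several sources, de-duplicate them, and keep the highest severity seen
# for each, so every team works from one shared view.

def merge_feeds(*feeds):
    """Merge indicator feeds into one dict: indicator -> max severity seen."""
    merged = {}
    for feed in feeds:
        for indicator, severity in feed.items():
            merged[indicator] = max(merged.get(indicator, 0), severity)
    return merged

# Two hypothetical feeds reporting overlapping indicators (0-10 scale).
public_feed = {"198.51.100.7": 4, "evil.example.com": 8}
industry_report = {"198.51.100.7": 9, "203.0.113.2": 5}

unified = merge_feeds(public_feed, industry_report)
# The repeated IP is escalated to the higher severity either source reported.
print(unified["198.51.100.7"])  # -> 9
```

A real TIP layers normalization, enrichment, and distribution on top of this, but the core value - one consistent, de-duplicated picture shared business-wide - is the same.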
The hyper-orchestration approach
The hyper-orchestration approach builds upon these foundations of collaboration and collective defense, replacing siloed teams with a united threat intelligence network. Employing this structure from the formation of a business will allow organizations to avoid the formation of individual silos and enhance their cybersecurity capabilities from the outset.
This collective defense approach coordinates threat intelligence and response activities to tackle specific security threats. Perhaps one of the most notable examples of collective defense in action is the Information Sharing and Analysis Centre (ISAC), which collects, analyses and disseminates actionable threat information to its members.
These centers enable organizations to identify and mitigate risks and boost their cyber resilience. ISACs bring together member organizations within a given sector; the National Council of ISACs, for example, currently comprises almost 30 sector-specific ISACs.
Recent research highlights the importance of this collective defense approach, with 90% of cybersecurity professionals believing collaboration and information sharing are very important or crucial for a strong cyber defense. Despite this, seven in ten (70%) feel their organization needs to do more to improve threat intelligence sharing capabilities.
It is clear that a collective defense approach is growing more popular, with dedicated information sharing roles now recognised at the highest levels of government and regulation. The EU Network and Information Systems Directive 2 (NIS2), which came into force last October, is a clear example of this - focusing on the resilience of sectors that are under particular risk.
With clear importance being placed on collaboration in cybersecurity, organizations must take steps to incorporate this approach into their cybersecurity strategies from day one. Employing hyper-orchestration and collective defense is key to enhancing cyber resilience and ensuring systems are secure through every stage of a business’s development.
If you have a kid who loves to hear about themselves in a story, Google’s Gemini AI has a new trick that could keep them happy for a long time. Gemini's new Storybook feature lets you generate fully illustrated, ten-page storybooks with narration from a single prompt.
You describe the tale, the look you want, and any other details, and Gemini writes the story, creates images for each page, and reads it aloud within a few minutes.
Storybook, in some ways, just combines existing abilities like text composition, image generation, and voice narration. Still, by putting them into a single prompt system, it speeds up the final product enormously. If you don't like certain details of the look or writing, you can simply adjust the book with follow-up prompts. You can even feed it a photo to shape the setting or characters.
The appeal for those who might feel they lack creative writing or drawing skills is obvious. No need to hire an illustrator or record voiceovers yourself. If your child wants a bedtime story about a shy dragon who finds confidence at music camp, you type that in, and within minutes, you’ve got a book with pictures, narration, and page-by-page structure.
This isn’t just for bedtime, either. Teachers can create customized stories to explain hard topics, perhaps teaching second graders about gravity with a friendly astronaut cat. Therapists could use storybooks to help kids talk through emotions using characters they connect with. Aunts and uncles can make personalized birthday stories with inside jokes and family pets.
What used to be a labor-intensive creative project is now something you can do on your phone during lunch break.
AI storytellers
And it is a notable shift from the standard fill-in-the-blank template approach common to other AI tools. The narration even adapts to the tone of the story, with voices that can be whimsical, soothing, or dramatic, depending on what your story needs. Google is pitching the tool to busy parents, overworked teachers, and creative kids looking for a co-author and illustrator for their ideas.
I asked Gemini to make a story about my dogs going on an adventure in nature, sharing their names and describing their looks, and that's about it. You can read and listen to the Gemini-created story here.
It did a remarkably good job, albeit with a very inconsistent look to the dogs from page to page and a somewhat dull story. And when I tried it again to see how it would perform with the same prompt, the dogs sometimes had more than four limbs, not exactly reassuring to a child looking forward to a story about their pets.
And while it’s theoretically possible that Gemini could write and illustrate a story better than the many classic and modern children’s books out there, or one more personally resonant than writing it yourself, I personally have doubts. This is a fun little trick, but skipping every bookstore, library, and box of crayons and pencils for an AI alternative that can’t always even make your dog look the same on every page feels like a poor trade for an activity I’d rather do myself. I’ll stick to asking AI for help organizing my kitchen and leave the bedtime stories to me.
xAI is pushing out the Grok Imagine AI video maker to those willing to pay for a SuperGrok or Premium+ subscription. Assuming you've paid your $30 or $35 a month, respectively, you can access Imagine in the Grok app under its own tab and turn prompts into short video clips. These last for around six seconds and include synced sound. You can also upload static images and animate them into looping clips.
Grok Imagine is another addition to the increasingly competitive AI video space, including OpenAI's Sora, Google's Veo 3, Runway, and more. Having audio built in also helps the tool, as sound is still not a universally available feature in all AI video tools.
To stand out, Elon Musk is encouraging people to think of it as “AI Vine,” tying the new tool to the classic and long-defunct short-form video platform for Twitter, itself a vanished brand name.
However, this isn’t just nostalgia for 2014 social media. The difference is that it's a way to blend active creation and passive scrolling.
Grok Imagine should get better almost every day. Make sure to download the latest @Grok app, as we have an improved build every few days. https://t.co/MGZtdMx26oAugust 3, 2025
Spicy Grok
One potentially heated controversy around Grok Imagine is the inclusion of a “spicy mode” allowing for a limited amount of more explicit content generation. While the system includes filters and moderation to prevent actual nudity or anything sexual, users can still experiment with suggestive prompts.
Musk himself posted a video of a scantily clad angel made with Grok Imagine. It provoked quite a few angry and upset responses from users on X. xAI insists guardrails are in place, but that hasn’t stopped some early testers from trying to break them.
xAI is keen to promote Grok Imagine as a way to make AI video accessible for everyone, from businesses crafting ads to teachers animating lessons. Still, there are understandable concerns about whether an AI platform that was only recently in hot water for outright pro-Nazi statements can be trusted to share video content without getting into more hot water. That goes double for the filters for the spicy content.
As you may have seen, OpenAI has just released two new AI models – gpt‑oss‑20b and gpt‑oss-120b – which are the first open‑weight models from the firm since GPT‑2.
These two models – one is more compact, and the other much larger – are defined by the fact that you can run them locally. They'll work on your desktop PC or laptop – right on the device, with no need to go online or tap the power of the cloud, provided your hardware is powerful enough.
So, you can download either the 20b version – or, if your PC is a powerful machine, the 120b spin – and play around with it on your computer, check how it works (in text-to-text fashion) and how the model thinks (its whole process of reasoning is broken down into steps). And indeed, you can tweak and build on these open models, though safety guardrails and censorship measures will, of course, be in place.
But what kind of hardware do you need to run these AI models? In this article, I'm examining the PC spec requirements for both gpt‑oss‑20b – the more restrained model packing 21 billion parameters – and gpt‑oss-120b, which offers 117 billion parameters. The latter is designed for data center use, but it will run on a high-end PC, whereas gpt‑oss‑20b is the model designed specifically for consumer devices.
Indeed, when announcing these new AI models, Sam Altman referenced 20b working on not just run-of-the-mill laptops, but also smartphones – but suffice it to say, that's an ambitious claim, which I'll come back to later.
These models can be downloaded from Hugging Face (here's gpt‑oss‑20b and here’s gpt‑oss-120b) under the Apache 2.0 license, or for the merely curious, there's an online demo you can check out (no download necessary).
(Image credit: Future / Lance Ulanoff)
The smaller gpt-oss-20b model
Minimum RAM needed: 16GB
The official documentation from OpenAI simply lays out a requisite amount of RAM for these AI models, which in the case of this more compact gpt-oss-20b effort is 16GB.
This means you can run gpt-oss-20b on any laptop or PC that has 16GB of system memory (or 16GB of video RAM, or a combo of both). However, it's very much a case of the more, the merrier – or faster, rather. The model might chug along with that bare minimum of 16GB, and ideally, you'll want a bit more on tap.
As for CPUs, AMD recommends the use of a Ryzen AI 300 series CPU paired with 32GB of memory (and half of that, 16GB, set to Variable Graphics Memory). For the GPU, AMD recommends any RX 7000 or 9000 model that has 16GB of memory – but these aren't hard-and-fast requirements as such.
Really, the key factor is simply having enough memory – the mentioned 16GB allocation, and preferably having all of that on your GPU. This allows all the work to take place on the graphics card, without being slowed down by having to offload some of it to the PC's system memory. Thankfully, the so-called Mixture of Experts (MoE) design OpenAI has used here helps to minimize any such performance drag.
Anecdotally, to pick an example plucked from Reddit, gpt-oss-20b runs fine on a MacBook Pro M3 with 18GB.
(Image credit: TeamGroup)
The bigger gpt-oss-120b model
RAM needed: 80GB
It's the same overall deal with the beefier gpt-oss-120b model, except as you might guess, you need a lot more memory. Officially, this means 80GB, although remember that you don't have to have all of that RAM on your graphics card. That said, this large AI model is really designed for data center use on a GPU with 80GB of memory on board.
However, the RAM allocation can be split. So, you can run gpt-oss-120b on a computer with 64GB of system memory and a 24GB graphics card (an Nvidia RTX 3090 Ti, for example, as per this Redditor), which makes a total of 88GB of pooled RAM.
AMD's recommendation in this case, CPU-wise, is for its top-of-the-range Ryzen AI Max+ 395 processor coupled with 128GB of system RAM (and 96GB of that allocated as Variable Graphics Memory).
In other words, you're looking at a seriously high-end workstation laptop or desktop (maybe with multiple GPUs) for gpt-oss-120b. However, you may be able to get away with a bit less than the stipulated 80GB of memory, going by some anecdotal reports - though I wouldn't bank on it by any means.
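The pooled-memory arithmetic above can be sketched as a quick check. This is a deliberate simplification: the official minimums are hard-coded, pooled VRAM-plus-RAM is treated as freely combinable, and real-world speed still depends on how much of the model actually fits in VRAM.

```python
# Rough sizing check: can a machine's pooled memory (VRAM plus system
# RAM) hold a given gpt-oss model? Figures are OpenAI's stated minimums.

REQUIRED_GB = {"gpt-oss-20b": 16, "gpt-oss-120b": 80}

def can_run(model, vram_gb, system_ram_gb):
    """True if pooled memory meets the model's stated minimum."""
    pooled = vram_gb + system_ram_gb
    return pooled >= REQUIRED_GB[model]

# The Reddit example from above: RTX 3090 Ti (24GB) plus 64GB system RAM.
print(can_run("gpt-oss-120b", vram_gb=24, system_ram_gb=64))  # True (88GB pooled)
```

As the anecdotal reports suggest, passing this check is about whether the model loads at all; generation speed will still favor setups that keep more of the model on the GPU.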
(Image credit: Shutterstock/AdriaVidal)
How to run these models on your PC
Assuming you meet the system requirements outlined above, you can run either of these new gpt-oss releases on Ollama, which is OpenAI's platform of choice for using these models.
Head here to grab Ollama for your PC (Windows, Mac, or Linux) - click the button to download the executable, and when it's finished downloading, double-click the executable file to run it, and click Install.
Next, run the following two commands in a terminal to obtain and then run the model you want. In the example below, we're running gpt-oss-20b, but if you want the larger model, just replace 20b with 120b.
ollama pull gpt-oss:20b
ollama run gpt-oss:20b

If you prefer another option rather than Ollama, you could use LM Studio instead, using the following command. Again, you can switch 20b for 120b, or vice versa, as appropriate:
lms get openai/gpt-oss-20b

Windows 11 (or 10) users can exercise the option of Windows AI Foundry (hat tip to The Verge).
In this case, you'll need to install Foundry Local - there's a caveat here, though, and it's that this is still in preview - check out this guide for the full instructions on what to do. Also, note that right now you'll need an Nvidia graphics card with 16GB of VRAM on-board (though other GPUs, like AMD Radeon models, will be supported eventually - remember, this is still a preview release).
Furthermore, macOS support is "coming soon," we're told.
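Back on the Ollama route for a moment: beyond the CLI commands above, a running Ollama install also serves a local HTTP API on port 11434, so you can script prompts against gpt-oss. The sketch below just builds the JSON request body for Ollama's /api/generate endpoint; the commented-out POST assumes Ollama is running locally with the model already pulled.

```python
# Build a request body for Ollama's local /api/generate endpoint
# (non-streaming), then show how it would be sent.
import json

def build_generate_request(model, prompt):
    """JSON body for Ollama's /api/generate endpoint, streaming disabled."""
    return json.dumps({"model": model, "prompt": prompt, "stream": False})

body = build_generate_request("gpt-oss:20b", "Summarize MoE models in one sentence.")
print(json.loads(body)["model"])  # -> gpt-oss:20b

# To actually send it (requires Ollama running locally with the model pulled):
# import urllib.request
# req = urllib.request.Request("http://localhost:11434/api/generate",
#                              data=body.encode(),
#                              headers={"Content-Type": "application/json"})
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read())["response"])
```

This is handy if you want to wire the local model into your own scripts rather than chat with it in a terminal.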
(Image credit: Shutterstock/ Alex Photo Stock)
What about smartphones?
As noted at the outset, while Sam Altman said that the smaller AI model runs on a phone, that statement is pushing it.
True enough, Qualcomm did issue a press release (as spotted by Android Authority) about gpt-oss-20b running on devices with a Snapdragon chip, but this is more about laptops – Copilot+ PCs that have Snapdragon X silicon – rather than smartphone CPUs.
Running gpt-oss-20b isn't a realistic proposition for today's phones, though it may be possible in a technical sense (assuming your phone has 16GB+ RAM). Even so, I doubt the results would be impressive.
However, we're not far away from getting these kinds of models running properly on mobiles, and this is surely in the cards for the near future.
If you like your power banks small, full of energy, and the color of your favorite macarons, INIU might have you covered.
The company, best known for constantly innovating power cell stacking to create increasingly smaller and lighter power banks, introduced this week what it claims is "the World's smallest 10,000mAh, 45W fast-charging" power bank.
The Pocket Rocket P50 (don't look at us, we didn't name it) is indeed small. Measuring 3.3 x 2.0 x 1.0 inches, the P50 weighs just 5.6 oz. Similarly configured 10,000mAh power banks on Amazon tend to weigh a few ounces more and are slightly larger.
They also generally cost a little more. The Pocket Rocket P50 lists for $32.99 (£38.99) on Amazon.
(Image credit: Iniu)
INIU achieved the P50's pleasingly small size by using its trademark TinyCell Pro technology, which the company says uses "efficient cell arrangement and space-saving thermal layers." It also comes equipped with a small monochrome display that offers real-time charge status.
The P50 includes multiple charging ports, including a USB-A port and two USB-C ports. The attached lanyard doubles as a USB-C-to-USB-C charge cable that you can use to charge devices connected to the 45W power bank and to recharge the P50.
Available in a collection of macaron-style colors that include pink, green, purple, and blue, the Pocket Rocket P50 can deliver a 45W charge and supports Samsung Fast Charging 2.0 for a speedy top-off.
INIU claims the P50 can charge a smartphone from 0% to 73% in just 25 minutes. Naturally, this is a claim we'll want to verify in lab testing.
(Image credit: Iniu)
The P50, according to the company, is capable of recharging multiple devices at once, and, on a single charge, can fully charge an iPhone 16 twice, as well as an iPad mini or a Samsung Galaxy S24 one and a half times. INIU also claims the Pocket Rocket P50 is approved for carry-on use.
It's certainly small enough to fit anywhere, and with those tasty colors, it might attract more than a few wistful stares at the airport.
While generative AI tools continue to dominate headlines and reshape workflows, demand for creative freelancers appears to be growing, not shrinking.
Figures from the Freelancer Fast 50 Global Jobs Index show that in Q2 2025, job postings for writers, designers, and video editors climbed steadily - even as roles in machine learning, blockchain, and other AI-adjacent fields showed marked declines.
The shifts suggest businesses are drawing clearer lines between automated output and the type of nuanced, human creativity that machines still fail to replicate convincingly.
Originality rises as slop loses appeal
The findings are based on more than 251,000 projects posted on a leading freelance site during the second quarter of 2025.
Communications jobs surged by 25.2%, making it the fastest-growing category, with freelancers in this space being hired to craft contracts, edit manuscripts, and produce emotionally resonant writing that AI tools struggle to deliver.
This trend emerges amid what some commentators have described as widespread “AI slop fatigue”.
This is a growing pushback against the mass of bland, automated content that has flooded social media and search platforms.
The fatigue may be both aesthetic and functional, as platforms such as Google have introduced algorithm updates designed to penalise auto-generated material, putting further pressure on brands to prioritise originality.
Clients now appear more willing to invest in skilled professionals who can ensure their content maintains visibility and emotional resonance.
Many are still using AI writer programs in support roles to brainstorm ideas or speed up drafts, but final outputs are increasingly expected to pass a test of authenticity that machines fail to meet.
In video and visual production, the shift is just as pronounced, as job listings for skills such as Adobe After Effects, Instagram content creation, and 3D design using Unity have all posted double-digit gains.
Content creators are not just surviving alongside AI; they are thriving in areas that rely heavily on personal style, spontaneity, and audience connection.
Freelancers interviewed for the report describe growing interest in projects that range from low-budget films to custom branding efforts, with clients favouring professionals who can offer “strategic thinking” and “tailored solutions.”
This growth in creative jobs also underlines a broader recalibration of the role of AI tools.
Instead of displacing freelancers, many organisations are shifting toward hybrid workflows, leaning on machines for efficiency while entrusting humans with the final creative direction.
The simple conclusion to this situation is that for now, human nuance still matters.
While it’s not an iPhone that’s entirely made in the U.S.A., Apple is making some pretty major hardware-related news alongside a fresh commitment from the Cupertino-based tech giant to invest a total of $600 billion in the U.S. economy within the next four years.
Apple, in a just-announced partnership with Corning, will aim to produce all of the glass covers for the iPhone and Apple Watch in the United States – specifically at Corning’s facility in Harrodsburg, Kentucky. It’s part of a new $2.5 billion commitment from Apple and means that, once in place, all the glass for the iPhone and Apple Watch models sold globally will be made in the United States.
Apple’s partnership with Corning is far from new. While Apple rarely explicitly names who makes which components, it’s long been known that Apple uses a custom form of Corning Gorilla Glass, and Corning has always been a US-based company. The news that all iPhone and Apple Watch glass manufacturing is coming to the US inadvertently reveals that Apple may have been using multiple glass suppliers, including some from outside the US. That all changes now, though.
(Image credit: Apple)
Most recently, this facility has been producing glass that’s named ‘Ceramic Shield’ for Apple’s iPhone lineup. The Harrodsburg, Kentucky, facility will exclusively be used for making glass for Apple devices going forward. The release notes that this decision will increase Corning’s manufacturing and engineering workforce there by 50% and that a combined Apple-Corning Innovation Center will open nearby.
(Image credit: Future)
At a joint conference held at the White House and attended by Apple CEO Tim Cook, US President Donald Trump stated that this is a "smart glass production line" and will ultimately create 20,000 new American jobs.
Cook actually gave Trump a present – well, a gift from Apple: a piece of Corning glass with ‘Trump’ engraved on it, and a base made from 24-karat gold sourced from Utah. It might be the first unboxing on the Resolute Desk, at least one performed by Apple’s CEO.
The bigger picture: Apple’s upping its promised US investment
While this is the major hardware-related news as part of Apple’s commitment, the company has also promised an additional $100 billion of investment in the United States. Previously, the total investment was $500 billion; that now jumps to $600 billion, which should be complete within four years.
Alongside the new partnership with Corning, Apple’s also committed to working further with other US manufacturers like Coherent, GlobalWafers America (GWA), Applied Materials, Texas Instruments (TI), Samsung, GlobalFoundries, Amkor, and Broadcom. This is dubbed Apple’s American Manufacturing Program, which the company says will support 450,000 jobs across 79 American factories.
(Image credit: C-Span)
Beyond the fact that all glass for the iPhone and Apple Watch will be made in the United States, Apple also hopes to create an end-to-end silicon supply chain in America. Apple already expects this supply chain to build over 19 billion chips in the US by the end of 2025. Speaking at the White House, Cook said, “American innovation is central to everything we do," and it’s clear that the tech giant is further investing to ensure that will be the case going forward, especially on the manufacturing side.
Apple's decision to shift some component manufacturing to the US may have just saved it from a 100% tariff on chips and semiconductors that Trump announced during the press conference. Trump said, for companies like Apple, "if you're building in the US or have committed to building in the US, there will be no charge."
Apple has also started construction on a 250,000-square-foot facility in Houston, Texas, that’s focused on building advanced Apple servers, and is expanding a data center that supports services like Apple TV+ and Apple Music in Maiden, North Carolina.
Google has patched major vulnerabilities affecting Android smartphones that were being actively exploited in the wild.
In June 2025, Qualcomm publicly announced discovering three vulnerabilities: CVE-2025-21479, CVE-2025-21480, and CVE-2025-27038, saying there were “indications” from Google’s Threat Analysis Group (TAG) that the flaws were being used in “limited, targeted exploitation.”
TAG specifically focuses on tracking state-sponsored threat actors, along with other highly sophisticated hacking groups, so if these were being used in limited and targeted exploitation, it’s safe to assume that these were nation-states targeting high-value individuals such as diplomats, journalists, dissidents, scientists, and similar.
CISA sounds the alarm
At the time, Qualcomm also urged OEMs (such as Google) to deploy the patch in their products without delay.
"Patches for the issues affecting the Adreno Graphics Processing Unit (GPU) driver have been made available to OEMs in May together with a strong recommendation to deploy the update on affected devices as soon as possible," Qualcomm said.
Google has now issued its August 2025 update for Android, which includes fixes for two of the flaws: CVE-2025-21479 and CVE-2025-27038.
The former is described as “memory corruption due to unauthorized command execution in GPU micronode while executing specific sequence of commands,” and was given a severity score of 8.6/10 (high). The latter is described as “memory corruption while rendering graphics using Adreno GPU drivers in Chrome,” with a severity score of 7.5/10 (high).
The US Cybersecurity and Infrastructure Security Agency (CISA) also added these two bugs to its Known Exploited Vulnerabilities (KEV) catalog on June 3, giving Federal Civilian Executive Branch (FCEB) organizations a three-week deadline to patch up, or stop using vulnerable software entirely.
Given Android’s decentralized structure, it is safe to assume that different devices (for example, Samsung’s Galaxy lineup, or OnePlus’ phones) will be getting these updates at different times. Pixel, being Google’s own lineup of mobile phones, will most likely receive the updates first.
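If you manage a fleet and want to know which devices have the fixes, the standard way to read a device's patch level is `adb shell getprop ro.build.version.security_patch`, which returns a date string. The comparison below is a minimal sketch of how a management script might check that string against the August 2025 update; the helper name is our own.

```python
# Check whether a reported Android security patch level includes the
# August 2025 update carrying the two Qualcomm GPU fixes.
from datetime import date

AUGUST_2025_PATCH = date(2025, 8, 1)

def has_august_2025_fixes(patch_level):
    """patch_level is the string a device reports, e.g. '2025-08-05'."""
    return date.fromisoformat(patch_level) >= AUGUST_2025_PATCH

print(has_august_2025_fixes("2025-08-05"))  # True
print(has_august_2025_fixes("2025-07-01"))  # False
```

A device reporting a pre-August patch level isn't necessarily exploited, of course - it just hasn't yet received the update from its manufacturer.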
Via BleepingComputer
You might also like
After countless rumors, teases, hints of a delay, and many, many thoughts from CEO Sam Altman, OpenAI has finally confirmed a livestream tomorrow, and we're expecting to see GPT-5's formal unveiling.
It’s not just that we’ve been waiting for the next-generation model to arrive; a post on X (formerly Twitter) from the @OpenAI account makes it pretty clear, as it reads, “LIVE5TREAM THURSDAY 10AM PT”. That’s a pretty clear spelling of ‘livestream’ with the ‘s’ replaced by a ‘5’, hinting at the GPT-5 model.
As the next major model for OpenAI, GPT-5 is rumored to bring with it more speed and better efficiency, but the real spotlight might be on how we can interact with it. We’ve already seen more formal Agents debut in ChatGPT, and GPT-5 is likely to bring in automatic selection of the right model.
LIVE5TREAM THURSDAY 10AM PTAugust 6, 2025
This means you won’t need to select the model you think is the best fit, as GPT-5 will understand your prompt and handle the specific routing for you. Hopefully, that means easier, more appropriate answers for various prompts. Just a few days ago, on August 3, 2025, Sam Altman shared a screenshot of ChatGPT with ChatGPT 5 as the selected model in the top corner.
With a livestream planned for tomorrow, August 7, 2025 at 1PM ET / 10AM PT / 6PM BST, this is turning out to be a pretty packed week for OpenAI. Yesterday, on August 5, 2025, OpenAI debuted two open-weight AI models, gpt‑oss‑120b and gpt‑oss‑20b, the latter of which is capable of running locally on a consumer PC.
GPT-5 would have a significantly more immediate impact, assuming it gets a wide rollout and lands in consumers' hands soon after the livestream. Sam Altman did tease in a post on X on August 2, 2025, that OpenAI has “a ton of stuff to launch over the next couple of months--new models, products, features, and more” – so the August 7 livestream – err, LIVE5TREAM – could be the start of plenty of new features to try.
Of course, Altman also used that post to warn about capacity issues or ‘hiccups,’ so, as with other major launches, it could be a bit of a wait before you can try GPT-5 for yourself.
Either way, stick with TechRadar as we’ll be reporting live on whatever OpenAI announces during its livestream tomorrow, and we’d bet on GPT-5. Like previous OpenAI announcements, we're expecting the event to be livestreamed on the brand's YouTube channel.