Google has announced the general availability of its latest AI coding agent, Jules.
Initially revealed in December 2024 as a Google Labs project, Jules has now launched as a paid offering, though limited free access is also available.
In a blog post announcing the launch, Google stated its decision to use Gemini 2.5 Pro would lead to "higher-quality code outputs."
Google makes Jules generally available
Designed for asynchronous operation, Jules can work in the background without user supervision, a considerable improvement over earlier generative AI coding assistants. Supporting multimodal inputs and outputs, Jules promises to write, test, and improve code while visualizing results for its users.
Google hopes its new AI agent will not only be a valuable tool for developers, but also website designers and enterprise workers who don't have sufficient coding experience.
During the beta phase, users submitted hundreds of thousands of tasks to Jules, with more than 140,000 code improvements shared publicly.
Now that Google's confident Jules works, general availability lands with a new streamlined user interface, new capabilities based on user feedback and bug fixes.
Although the free plan gets the same Gemini 2.5 Pro backing as the higher-tier options, it's limited to 15 daily tasks and three concurrent tasks.
Pro ($124.99/month) adds support for up to 100 daily tasks and 15 concurrent tasks, as well as "higher access to the latest models, starting with Gemini 2.5 Pro," suggesting it is likely to get model improvements before the free tier.
Ultra ($199.99/month) gets priority access to those latest models, plus 300 daily tasks and 60 concurrent tasks.
AI has become synonymous with business transformation, promising insights and efficiency. Yet for many CEOs, traditional AI tools remain frustratingly passive, surfacing insights but failing to take action. Today’s business leaders don’t need more dashboards; they need execution.
This gap often stems from a misunderstanding of AI's role. Tools like “co-pilots” transcribe, summarize, and recommend, but they still rely on humans to follow through. That missing “last mile” is where execution breaks down, costing companies time, revenue, and agility.
Understanding the AI Dichotomy
There's a widespread misconception about AI's role in modern business operations, and many CEOs don’t understand the difference. Traditional AI models, including generative AI (GenAI) and transcription services, rely on human intervention to move from insight to action.
They surface recommendations but require human oversight to execute, often causing operational stalls and insights that aren’t accounted for in decision-making. According to Gartner Research, 73% of insights captured by legacy AI tools never translate into executed actions, highlighting a tangible gap between data availability and operational execution.
Imagine a sales representative finishing a call where a potential customer expresses interest but mentions budget constraints. A traditional AI tool captures this interaction and generates a transcript, flagging the budget issue as a critical insight. However, it's up to the representative, assistant, or manager to manually review this flagged point, determine the next steps, update CRM records, and communicate that flagged point in their follow-ups.
This manual process introduces delays, allows for human errors, and increases the likelihood that the lead cools off or engages with a competitor in the meantime. Despite recognizing valuable data, the reactive nature of traditional AI means execution gaps persist, leaving executives puzzled when expected outcomes fail to materialize.
Misunderstandings Around Reactive and Proactive AI
The issue isn't just technological; it's conceptual. Organizations continue to misunderstand the distinct roles and capabilities of different AI categories throughout their operations. Traditional reactive AI solutions are often perceived as holistic operational fixes, setting unrealistic expectations and leading to implementation failures and skepticism about AI's overall efficacy.
The misunderstanding also encompasses risk and accountability.
Proactive agentic AI might raise concerns about automated errors or missteps. However, human leaders still hold the reins on overall strategy and remain ultimately responsible for the outcomes. Agentic AI does not remove professional human oversight; instead, it supports leaders by automating routine operational tasks, enabling teams to focus strategically and capitalize on high-value opportunities.
The Proactive Shift: Introducing Agentic AI
Agentic AI is a monumental leap in how AI operates, shifting from simply offering insights to actively taking the reins and executing tasks autonomously within existing workflows. Rather than merely highlighting data trends, it triggers structured, automated actions directly from surfaced insights, ensuring that customer and market signals are promptly acted upon and ultimately boosting revenue outcomes.
Agentic AI capabilities span a spectrum, from advanced automation to autonomous decision-making, and it is important to know how and where to employ this power securely and appropriately.
This type of AI continuously captures structured, clean, first-party data from customer interactions, such as sales calls, emails, and meetings. It then automatically integrates this information into CRM systems, communication platforms, and operational workflows, so no insights fall through the cracks. Unlike traditional AI that merely suggests actions, agentic AI independently completes these tasks, reducing administrative overhead and operational friction.
The Cost of Administrative Overhead
Traditional AI's reactive approach exacerbates administrative burdens, inevitably impacting productivity and revenue potential. Boston Consulting Group reports that sales representatives spend up to 45% of their time on administrative tasks, such as CRM updates and manual follow-ups. This administrative overload limits their capacity to engage in revenue-generating activities and reduces overall sales effectiveness.
For CEOs and revenue leaders, execution speed directly correlates with revenue performance. Delays in responding to customer dissatisfaction, competitive shifts, or emerging market opportunities can lead to substantial financial setbacks. Even minor operational delays can mean the difference between growth and stagnation.
That execution gap is precisely what Agentic AI is built to resolve. Embedding directly into existing workflows and autonomously executing necessary tasks ensures immediate, structured responses to market signals. Instead of solely identifying churn risks, agentic AI proactively alerts customer success teams with clearly defined actions to prevent revenue loss.
Interoperability and Operational Agility Across the Enterprise
A major limitation of traditional AI tools is their siloed nature. Data outputs typically require manual intervention to distribute across departments, creating inefficiencies and inconsistencies. Agentic AI, in contrast, operationalizes intelligence by integrating across the enterprise's existing technology stack, enhancing transparency and consistency among sales, marketing, and customer success teams. This integration allows for interoperability while reducing delays associated with manual transfers and human-dependent workflows.
Operational agility has become a priority for CEOs who face rapidly shifting markets and fierce competition. While traditional AI provides important insights, it lacks the execution capacity to drive agile responses. Agentic AI meets this demand by automating real-time, responsive actions within core business processes.
Embracing Agentic AI: The Path Forward
Why is Agentic AI so important right now? Because understanding and embracing Agentic AI isn't just about gaining an edge; it's about finding and taking advantage of opportunities in today's fiercely competitive, resource-strained, and unpredictable markets. This goes beyond a simple tech improvement; it's a way to redefine how businesses turn intelligence into action, directly converting their strategic insights into real, immediate impact.
At London Tech Week we heard Keir Starmer make a commitment to the UK people that we would “become an AI maker, not an AI taker”. But how did this shift from assumed frontrunner to AI underdog happen?
Economic and geopolitical instability, including tariffs and ever-changing political alliances, has caused global technology leaders to realize that both their physical and digital infrastructure is best kept in-region. This allows them to protect themselves and not let innovation be hampered by outside forces. This has led to what we are seeing with Starmer’s comments - investment and commitment towards keeping AI systems developed and controlled within the UK.
Enterprises and governments are being shown that sovereign AI and data platforms are no longer a nice-to-have, but a must-have. These are defined as open source-based systems where data and AI are governed together at the edge, on-prem, or in-country cloud. This feels new to many organizations as it requires more than just turning a dial to more or less cloud.
Where we stand today
Research shows that the UK is falling behind in AI innovation, despite strong IT infrastructure and a robust workforce being in place. Top enterprise leaders aren’t currently matching the government’s urgency or investment focus on AI and data. This disconnect is particularly stark in the banking sector, which was seen as the UK’s most likely AI growth engine only a year ago.
Sovereignty over AI and data must be treated as mission-critical and applied quickly, for every economy and every enterprise within it. If we continue to hesitate, the concern is that the UK could lose its economic edge. In today’s world, if you can’t control your data and AI, you’ll struggle to stay ahead.
So what needs to be done to fix this growing problem and reposition the UK as an AI leader with a solid base to scale in-region?
1. An intention gap
When it comes to intent to build sovereign AI and data platforms, UK leaders are among the least committed across the globe, despite government-backed programs being critical infrastructure plays.
Needless to say, if national ambition isn’t matched by enterprise commitment, the UK risks losing its early advantage.
2. Seeing beyond the immediate, and building for it
Globally, it appears that success hinges on a strategic commitment to full data access, open source foundations, integrated AI tools, and hybrid infrastructure, as well as accelerating applications into an agentic state.
The fastest-moving economies aren’t siloed in their application; generative and agentic AI are transforming every industry. They’re building sovereign AI and data factories on open source, flexible, future-proof architectures. This means their AI and data can adapt and deliver value across borders, partners, and time.
In countries leading the charge, enterprise leaders follow these core beliefs:
1. Deep integration of AI and data is critical.
2. Sovereignty isn’t a choice—it’s a necessity.
3. Sustainable success relies on controlling your AI and data platform.
The next three years will shape which economies control the future of data and, consequently, AI. Although trillions have been invested by UK enterprise and government to build one of the world’s most advanced AI ecosystems, without strategies tied to these three core principles, these assets won’t deliver ROI.
3. Sensing the urgency, and adapting to it
The UK is not alone in facing this crossroads - Germany, Saudi Arabia, and the UAE are also converting infrastructure into execution. However, the UK seems to be hesitating more than its counterparts. Across every competitor there is growing recognition that sovereign control over AI and data is now essential; it is a push the UK needs as well.
This recognition is at the heart of reshaping enterprise priorities. As more leaders act, the foundations they’re choosing matter just as much as the strategy itself.
Closing remarks
The divide between early movers and those hesitating is already clear. Just 13% of enterprises have fully integrated AI and data operations, but they account for 21% of the total global ROI, signaling what’s possible when strategy and execution align at speed.
There’s a huge opportunity within this space, as the global AI and data economy is projected to reach $16.5 trillion by 2028. The UK still has a structural advantage with world-class infrastructure, talent, and public investment. All that’s left is action.
Like it or not, cyberattacks are now a regular occurrence, and part of everyday life. However, despite this predictability, it remains impossible to pinpoint exactly when and where they will occur. This means that businesses must remain vigilant, constantly on the lookout for any and all potential threats.
From the moment a company is created, it must be assumed that attacks will come. Just because it is new and unknown does not mean it is safe. Take DeepSeek, for example: despite being the new kid on the block, as soon as its name hit the news it was hit with a severe, large-scale attack. However, this does not give established companies an excuse to drop their guard.
The past couple of months alone have seen some of the biggest names in retail fall victim, with large-scale companies like M&S and Dior unable to properly defend against attacks. No matter how big the company, it is vital to employ a well-rounded cybersecurity strategy that provides security from the foundational stages of development through to the latest iteration.
Siloed teams are outdated
The key to weathering the storm of cyberattacks is a firm foundation. Cybersecurity principles must be embedded from the outset, ensuring a strong and secure beginning for any product or system development. These defenses must be continually built upon, monitored, tested and updated on a proactive basis to ensure any potential vulnerabilities are mitigated before they can become a threat.
Threats are constantly evolving, and the attack defended against today could be the one that breaks through tomorrow. Therefore it is imperative to keep any and all threat intelligence up to date, monitoring threats in real-time and continuously sharing the information business-wide.
Unfortunately, it is the dissemination of this information that can cause issues - especially when different teams are receiving information late, or not at all. This is often the case in organizations that employ a siloed approach, with individual teams working in isolation from each other.
This fragmented structure can not only impact an organization's ability to detect and respond to threats, but the capability to learn from them and share these insights with other teams. Without a formal structure in place to facilitate cross-team collaboration, teams may develop different processes in parallel, use different tools, and fail to communicate across functions when facing risks or as incidents unfold.
As a result, security controls are inconsistent, making it tough, if not impossible, to establish standard methods for sharing threat intelligence and incident response procedures.
Introducing collaboration
A centralized platform that unifies threat intelligence company-wide will strengthen security efforts across departments and ensure that teams operate as part of a shared vision. Creating common goals and metrics encourages collaboration and establishes a clear sense of purpose. Threat Intelligence Platforms (TIPs) enable organizations to adopt this approach, integrating across business systems and providing automated intelligence sharing.
TIPs act as the heart of an organization's cyber defenses, gathering information from multiple sources, from public feeds to industry reports, and distributing it across all teams. They sift through the data to identify serious threats, advising teams where to focus their efforts and which at-risk vulnerabilities to prioritize.
Through the automation of processes such as data collection, and by removing internal communication barriers, organizations can translate scattered, complex cyber-threat information into coordinated action that protects critical assets faster and more comprehensively. This will result in improved threat detection, quicker incident response times, and greater overall cyber resilience.
The hyper-orchestration approach
The hyper-orchestration approach builds upon these foundations of collaboration and collective defense, replacing siloed teams with a united threat intelligence network. Employing this structure from a business's formation allows organizations to avoid siloed teams forming in the first place and to enhance their cybersecurity capabilities from the outset.
This collective defense approach coordinates threat intelligence and response activities to tackle specific security threats. Perhaps one of the most notable examples of collective defense in action is the Information Sharing and Analysis Centre (ISAC), which collects, analyses and disseminates actionable threat information to its members.
These centers enable organizations to identify and mitigate risks and boost their cyber resilience. ISACs bring together a broad group of highly capable, professional organizations; the National Council of ISACs, for example, currently comprises almost 30 sector-specific organizations.
Recent research highlights the importance of this collective defense approach, with 90% of cybersecurity professionals believing collaboration and information sharing are very important or crucial for a strong cyber defense. Despite this, seven in ten (70%) feel their organization needs to do more to improve its threat intelligence sharing capabilities.
It is clear that a collective defense approach is growing more popular, with dedicated information sharing roles now recognised at the highest levels of government and regulation. The EU Network and Information Systems Directive 2 (NIS2), which came into force last October, is a clear example of this - focusing on the resilience of sectors that are under particular risk.
With such clear importance placed on collaboration in cybersecurity, organizations must take steps to incorporate this approach into their cybersecurity strategies from day one. Employing hyper-orchestration and collective defense is key to enhancing cyber resilience and ensuring systems are secure through every stage of a business’ development.
If you have a kid who loves to hear about themselves in a story, Google’s Gemini AI has a new trick that could keep them happy for a long time. Gemini's new Storybook feature lets you generate fully illustrated, ten-page storybooks with narration from a single prompt.
You describe the tale, the look you want, and any other details, and Gemini writes the story, creates images for each page, and reads it aloud within a few minutes.
Storybook, in some ways, just combines existing abilities like text composition, image generation, and voice narration. Still, by putting them into a single prompt system, it speeds up the final product enormously. If you don't like certain details of the look or writing, you can simply adjust the book with follow-up prompts. You can even feed it a photo to shape the setting or characters.
The appeal for those who might feel they lack creative writing or drawing skills is obvious. No need to hire an illustrator or record voiceovers yourself. If your child wants a bedtime story about a shy dragon who finds confidence at music camp, you type that in, and within minutes, you’ve got a book with pictures, narration, and page-by-page structure.
This isn’t just for bedtime, either. Teachers can create customized stories to explain hard topics, perhaps teaching second graders about gravity with a friendly astronaut cat. Therapists could use storybooks to help kids talk through emotions using characters they connect with. Aunts and uncles can make personalized birthday stories with inside jokes and family pets.
What used to be a labor-intensive creative project is now something you can do on your phone during lunch break.
AI storytellers
And it is a notable shift from the standard fill-in-the-blank template approach common to other AI tools. The narration even adapts to the tone of the story, with voices that can be whimsical, soothing, or dramatic, depending on what your story needs. Google is pitching the tool at busy parents, overworked teachers, and creative kids looking for a co-author and illustrator for their ideas.
I asked Gemini to make a story about my dogs going on an adventure in nature, sharing their names and describing their looks, and that's about it. You can read and listen to the Gemini-created story here.
It did a remarkably good job, albeit with a very inconsistent look to the dogs from page to page and a somewhat dull story. And when I tried it again to see how it would perform with the same prompt, the dogs sometimes had more than four limbs, not exactly reassuring to a child looking forward to a story about their pets.
And while it's theoretically possible that Gemini could write and illustrate a story better than the many classic and modern children's books out there, or one more personally resonant than something you wrote yourself, I have my doubts. This is a fun little trick, but swapping every bookstore, library, and box of crayons and pencils for an AI alternative that can't always make your dog look the same on every page holds little appeal; making up stories is exactly the kind of activity I'd rather do myself. I'll stick to asking AI for help organizing my kitchen and leave the bedtime stories to me.
xAI is pushing out the Grok Imagine AI video maker to those willing to pay for a SuperGrok or Premium+ subscription. Assuming you've paid your $30 or $35 a month, respectively, you can access Imagine in the Grok app under its own tab and turn prompts into short video clips. These last for around six seconds and include synced sound. You can also upload static images and animate them into looping clips.
Grok Imagine is another addition to the increasingly competitive AI video space, including OpenAI's Sora, Google's Veo 3, Runway, and more. Having audio built in also helps the tool, as sound is still not a universally available feature in all AI video tools.
To stand out, Elon Musk is encouraging people to think of it as “AI Vine,” tying the new tool to the classic and long-defunct short-form video platform for Twitter, itself a vanished brand name.
However, this isn’t just nostalgia for 2014 social media. The difference is that it's a way to blend active creation and passive scrolling.
Grok Imagine should get better almost every day. Make sure to download the latest @Grok app, as we have an improved build every few days. https://t.co/MGZtdMx26o (August 3, 2025)
Spicy Grok
One potentially heated controversy around Grok Imagine is the inclusion of a “spicy mode” allowing for a limited amount of more explicit content generation. While the system includes filters and moderation to prevent actual nudity or anything sexual, users can still experiment with suggestive prompts.
Musk himself posted a video of a scantily clad angel made with Grok Imagine. It provoked quite a few angry and upset responses from users on X. xAI insists guardrails are in place, but that hasn’t stopped some early testers from trying to break them.
xAI is keen to promote Grok Imagine as a way to make AI video accessible for everyone, from businesses crafting ads to teachers animating lessons. Still, there are understandable concerns about whether an AI platform that was only recently in hot water for outright pro-Nazi statements can be trusted to share video content without getting into more hot water. That goes double for the filters for the spicy content.
As you may have seen, OpenAI has just released two new AI models – gpt-oss-20b and gpt-oss-120b – which are the first open-weight models from the firm since GPT-2.
These two models – one is more compact, and the other much larger – are defined by the fact that you can run them locally. They'll work on your desktop PC or laptop – right on the device, with no need to go online or tap the power of the cloud, provided your hardware is powerful enough.
So, you can download either the 20b version – or, if your PC is a powerful machine, the 120b spin – and play around with it on your computer, check how it works (in text-to-text fashion) and how the model thinks (its whole process of reasoning is broken down into steps). And indeed, you can tweak and build on these open models, though safety guardrails and censorship measures will, of course, be in place.
But what kind of hardware do you need to run these AI models? In this article, I'm examining the PC spec requirements for both gpt‑oss‑20b – the more restrained model packing 21 billion parameters – and gpt‑oss-120b, which offers 117 billion parameters. The latter is designed for data center use, but it will run on a high-end PC, whereas gpt‑oss‑20b is the model designed specifically for consumer devices.
Indeed, when announcing these new AI models, Sam Altman referenced 20b working on not just run-of-the-mill laptops, but also smartphones – but suffice it to say, that's an ambitious claim, which I'll come back to later.
These models can be downloaded from Hugging Face (here's gpt‑oss‑20b and here’s gpt‑oss-120b) under the Apache 2.0 license, or for the merely curious, there's an online demo you can check out (no download necessary).
The smaller gpt-oss-20b model
Minimum RAM needed: 16GB
The official documentation from OpenAI simply lays out a requisite amount of RAM for these AI models, which in the case of this more compact gpt-oss-20b effort is 16GB.
This means you can run gpt-oss-20b on any laptop or PC that has 16GB of system memory (or 16GB of video RAM, or a combo of both). However, it's very much a case of the more, the merrier – or faster, rather. The model might chug along with that bare minimum of 16GB, but ideally you'll want a bit more on tap.
As for CPUs, AMD recommends the use of a Ryzen AI 300 series CPU paired with 32GB of memory (and half of that, 16GB, set to Variable Graphics Memory). For the GPU, AMD recommends any RX 7000 or 9000 model that has 16GB of memory – but these aren't hard-and-fast requirements as such.
Really, the key factor is simply having enough memory – the mentioned 16GB allocation, and preferably having all of that on your GPU. This allows all the work to take place on the graphics card, without being slowed down by having to offload some of it to the PC's system memory. Thankfully, the Mixture of Experts (MoE) design OpenAI has used here helps to minimize any such performance drag.
Anecdotally, to pick an example plucked from Reddit, gpt-oss-20b runs fine on a MacBook Pro M3 with 18GB.
The bigger gpt-oss-120b model
RAM needed: 80GB
It's the same overall deal with the beefier gpt-oss-120b model, except as you might guess, you need a lot more memory. Officially, this means 80GB, although remember that you don't have to have all of that RAM on your graphics card. That said, this large AI model is really designed for data center use on a GPU with 80GB of memory on board.
However, the RAM allocation can be split. So, you can run gpt-oss-120b on a computer with 64GB of system memory and a 24GB graphics card (an Nvidia RTX 3090 Ti, for example, as per this Redditor), which makes a total of 88GB of pooled RAM.
AMD's recommendation in this case, CPU-wise, is for its top-of-the-range Ryzen AI Max+ 395 processor coupled with 128GB of system RAM (and 96GB of that allocated as Variable Graphics Memory).
In other words, you're looking at a seriously high-end workstation laptop or desktop (maybe with multiple GPUs) for gpt-oss-120b. However, you may be able to get away with a bit less than the stipulated 80GB of memory, going by some anecdotal reports - though I wouldn't bank on it by any means.
How to run these models on your PC
Assuming you meet the system requirements outlined above, you can run either of these new gpt-oss releases on Ollama, which is OpenAI's platform of choice for using these models.
Head here to grab Ollama for your PC (Windows, Mac, or Linux) - click the button to download the executable, and when it's finished downloading, double-click the file to run it, then click Install.
Next, run the following two commands in Ollama to obtain and then run the model you want. In the example below, we're running gpt-oss-20b, but if you want the larger model, just replace 20b with 120b.
ollama pull gpt-oss:20b
ollama run gpt-oss:20b
If you prefer another option rather than Ollama, you could use LM Studio instead, using the following command. Again, you can switch 20b for 120b, or vice-versa, as appropriate:
lms get openai/gpt-oss-20b
Windows 11 (or 10) users can exercise the option of Windows AI Foundry (hat tip to The Verge).
In this case, you'll need to install Foundry Local - there's a caveat here, though, and it's that this is still in preview - check out this guide for the full instructions on what to do. Also, note that right now you'll need an Nvidia graphics card with 16GB of VRAM on-board (though other GPUs, like AMD Radeon models, will be supported eventually - remember, this is still a preview release).
Furthermore, macOS support is "coming soon," we're told.
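Once a model is pulled and running locally, you can also query it programmatically rather than through the command line. Below is a minimal sketch in Python, assuming Ollama is serving its default local REST API on port 11434 and that gpt-oss:20b has already been pulled; treat the endpoint and field names as illustrative, as behavior can vary between Ollama versions.

```python
# Minimal sketch: send a prompt to a locally running gpt-oss model via Ollama's REST API.
# Assumes Ollama is running on its default port (11434) and gpt-oss:20b has been pulled.
import json
import urllib.request

def ask_local_model(prompt: str, model: str = "gpt-oss:20b") -> str:
    payload = json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,  # request a single complete response rather than a token stream
    }).encode("utf-8")
    request = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read())["response"]

if __name__ == "__main__":
    print(ask_local_model("Summarize what an open-weight model is in one sentence."))
```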
What about smartphones?
As noted at the outset, while Sam Altman said that the smaller AI model runs on a phone, that statement is pushing it.
True enough, Qualcomm did issue a press release (as spotted by Android Authority) about gpt-oss-20b running on devices with a Snapdragon chip, but this is more about laptops – Copilot+ PCs that have Snapdragon X silicon – rather than smartphone CPUs.
Running gpt-oss-20b isn't a realistic proposition for today's phones, though it may be possible in a technical sense (assuming your phone has 16GB+ RAM). Even so, I doubt the results would be impressive.
However, we're not far away from getting these kinds of models running properly on mobiles, and this will surely be on the cards in the not-too-distant future.
If you like your power banks small, full of energy, and the color of your favorite macarons, INIU might have you covered.
The company, best known for constantly innovating power cell stacking to create increasingly smaller and lighter power banks, introduced this week what it claims is "the World's smallest 10,000mAh, 45W fast-charging" power bank.
The Pocket Rocket P50 (don't look at us, we didn't name it) is indeed small. Measuring 3.3 x 2.0 x 1.0 inches, the P50 weighs just 5.6 oz. Similarly configured 10,000mAh power banks on Amazon tend to weigh a few ounces more and are slightly larger.
They also generally cost a little more. The Pocket Rocket P50 lists for $32.99 (£38.99) on Amazon.
INIU achieved the P50's pleasingly small size by using its trademark TinyCell Pro technology, which the company says uses "efficient cell arrangement and space-saving thermal layers." It also comes equipped with a small monochrome display that offers real-time charge status.
The P50 includes multiple charging ports, including a USB-A port and two USB-C ports. The attached lanyard doubles as a USB-C-to-USB-C charge cable that you can use to charge devices connected to the 45W power bank and to recharge the P50.
Available in a collection of macaron-style colors that include pink, green, purple, and blue, the Pocket Rocket P50 can deliver a 45W charge and supports Samsung Fast Charging 2.0 for a speedy top-off.
INIU claims the P50 can charge a smartphone from 0% to 73% in just 25 minutes. Naturally, this is a claim we'll want to verify in lab testing.
The P50, according to the company, is capable of recharging multiple devices at once and, on a single charge, can fully charge an iPhone 16 twice, or an iPad mini or a Samsung Galaxy S24 one and a half times. INIU also claims the Pocket Rocket P50 is approved for carry-on use.
It's certainly small enough to fit anywhere, and with those tasty colors, it might attract more than a few wistful stares at the airport.
While generative AI tools continue to dominate headlines and reshape workflows, demand for creative freelancers appears to be growing, not shrinking.
Figures from the Freelancer Fast 50 Global Jobs Index show that in Q2 2025, job postings for writers, designers, and video editors were climbing steadily - even as roles in machine learning, blockchain, and other AI-adjacent fields showed marked declines.
The shifts suggest businesses are drawing clearer lines between automated output and the type of nuanced, human creativity that machines still fail to replicate convincingly.
Originality rises as slop loses appeal
The findings are based on more than 251,000 projects posted on a leading freelance site during the second quarter of 2025.
Communications jobs surged by 25.2%, making it the fastest-growing category, with freelancers in this space being hired to craft contracts, edit manuscripts, and produce emotionally resonant writing that AI tools struggle to deliver.
This trend emerges amid what some commentators have described as widespread “AI slop fatigue”.
This is a growing pushback against the mass of bland, automated content that has flooded social media and search platforms.
The fatigue may be both aesthetic and functional, as platforms such as Google have introduced algorithm updates designed to penalise auto-generated material, putting further pressure on brands to prioritise originality.
Clients now appear more willing to invest in skilled professionals who can ensure their content maintains visibility and emotional resonance.
Many are still using AI writer programs in support roles to brainstorm ideas or speed up drafts, but final outputs are increasingly expected to pass a test of authenticity that machines fail to meet.
In video and visual production, the shift is just as pronounced, as job listings for skills such as Adobe After Effects, Instagram content creation, and 3D design using Unity have all posted double-digit gains.
Content creators are not just surviving alongside AI; they are thriving in areas that rely heavily on personal style, spontaneity, and audience connection.
Freelancers interviewed for the report describe growing interest in projects that range from low-budget films to custom branding efforts, with clients favouring professionals who can offer “strategic thinking” and “tailored solutions.”
This growth in creative jobs also underlines a broader recalibration of the role of AI tools.
Instead of displacing freelancers, many organisations are shifting toward hybrid workflows, leaning on machines for efficiency while entrusting humans with the final creative direction.
The simple conclusion to this situation is that for now, human nuance still matters.
While it’s not an iPhone that’s entirely made in the U.S.A., Apple is making some pretty major hardware-related news alongside a fresh commitment from the Cupertino-based tech giant to invest a total of $600 billion in the U.S. economy within the next four years.
Apple, in a just-announced partnership with Corning, will aim to produce all of the glass covers for the iPhone and Apple Watch in the United States – specifically at Corning’s facility in Harrodsburg, Kentucky. It’s part of a new $2.5 billion commitment from Apple and means that, once in place, all the glass for iPhone and Apple Watch models sold globally will be made in the United States.
Apple’s partnership with Corning is far from new. While Apple rarely explicitly names who makes which components, it’s long been known that Apple uses some custom form of Corning Gorilla Glass. Corning has always been a US-based company, so the news that all iPhone and Apple Watch glass manufacturing is coming to the US inadvertently reveals that Apple may have been using multiple glass suppliers, including some from outside the US. That all changes now, though.
Most recently, this facility has been producing the glass named ‘Ceramic Shield’ for Apple’s iPhone lineup. Going forward, the Harrodsburg, Kentucky, facility will be used exclusively for making glass for Apple devices. The release notes that this decision will increase Corning’s manufacturing and engineering workforce there by 50%, and that a combined Apple-Corning Innovation Center will open nearby.
At a joint conference held at the White House and attended by Apple CEO Tim Cook, US President Donald Trump stated that this is a "smart glass production line" and will ultimately create 20,000 new American jobs.
Cook actually gave Trump a present, well, a gift from Apple – a piece of Corning Glass with ‘Trump’ engraved on it, and a base made from 24 karat gold sourced from Utah. It might be the first unboxing on the Resolute Desk, at least performed by Apple’s CEO.
The bigger picture: Apple’s upping its promised US investment
While this is the major hardware-related news within Apple’s commitment, the company also promised an additional $100 billion of investment in the United States. Previously, the total investment was $500 billion; that now jumps to $600 billion, which should be complete within four years.
Alongside the new partnership with Corning, Apple has also committed to working further with other US manufacturers, including Coherent, GlobalWafers America (GWA), Applied Materials, Texas Instruments (TI), Samsung, GlobalFoundries, Amkor, and Broadcom. This is dubbed Apple’s American Manufacturing Program, and will result in 450,000 jobs created in America across 79 factories.
Beyond the fact that all glass for the iPhone and Apple Watch will be made in the United States, Apple also hopes to create an end-to-end silicon supply chain in America. Apple already expects this supply chain to build over 19 billion chips in the US by the end of 2025. Speaking at the White House, Cook said, “American innovation is central to everything we do," and it’s clear that the tech giant is investing further to ensure that remains the case going forward, especially on the manufacturing side.
Apple's decision to shift some component manufacturing to the US may have just saved it from a 100% tariff on chips and semiconductors that Trump announced during the press conference. Trump said, for companies like Apple, "if you're building in the US or have committed to building in the US, there will be no charge."
Apple has also started construction on a 250,000-square-foot facility in Houston, Texas, that’s focused on building advanced Apple servers, and is expanding a data center that supports services like Apple TV+ and Apple Music in Maiden, North Carolina.
Google has patched a major vulnerability affecting Android smartphones which is being actively exploited in the wild.
In June 2025, Qualcomm publicly announced it had discovered three vulnerabilities (CVE-2025-21479, CVE-2025-21480, and CVE-2025-27038), saying there were “indications” from Google's Threat Analysis Group (TAG) that the flaws were being used in “limited, targeted exploitation.”
TAG specifically focuses on tracking state-sponsored threat actors, along with other highly sophisticated hacking groups, so if these were being used in limited and targeted exploitation, it’s safe to assume that these were nation-states targeting high-value individuals such as diplomats, journalists, dissidents, scientists, and similar.
CISA sounds the alarm
At the time, Qualcomm also urged OEMs (such as Google) to deploy the patch in their products without delay.
"Patches for the issues affecting the Adreno Graphics Processing Unit (GPU) driver have been made available to OEMs in May together with a strong recommendation to deploy the update on affected devices as soon as possible," Qualcomm said.
Google has now issued its August 2025 update for Android, which includes fixes for two of the flaws: CVE-2025-21479 and CVE-2025-27038.
The former is described as “memory corruption due to unauthorized command execution in GPU micronode while executing specific sequence of commands,” and was given a severity score of 8.6/10 (high). The latter is described as “memory corruption while rendering graphics using Adreno GPU drivers in Chrome,” with a severity score of 7.5/10 (high).
The US Cybersecurity and Infrastructure Security Agency (CISA) also added these two bugs to its Known Exploited Vulnerabilities (KEV) catalog on June 3, giving Federal Civilian Executive Branch (FCEB) organizations a three-week deadline to patch up, or stop using vulnerable software entirely.
Given Android’s decentralized structure, it is safe to assume that different devices (for example, Samsung’s Galaxy lineup, or OnePlus’ One lineup) will be getting these updates at different times. Pixel, being Google’s lineup of mobile phones, will most likely receive the updates first.
Via BleepingComputer
After countless rumors, teases, hints of a delay, and many, many thoughts from CEO Sam Altman, OpenAI has finally confirmed a livestream tomorrow, and we're expecting to see GPT-5's formal unveiling.
It’s not just that we’ve been waiting for the next-generation model to arrive, but a post on X (formerly Twitter) from the @OpenAI account makes it pretty clear, as it reads, “LIVE5TREAM THURSDAY 10AM PT”. That’s a pretty clear spelling of ‘livestream’ replacing the ‘s’ with a 5, and hinting at the GPT-5 model.
As the next major model for OpenAI, GPT-5 is rumored to bring with it more speed and better efficiency, but a real spotlight might be on how we can interact with it. We’ve already seen Agents formally debut in ChatGPT, but GPT-5 is likely going to bring in automatic selection of the right model.
This means you won’t need to select the model you think is the best fit, as GPT-5 will understand your prompt and handle the specific routing for you. Hopefully, that means easier, more appropriate answers for various prompts. Just a few days ago, on August 3, 2025, Sam Altman shared a screenshot of ChatGPT with ChatGPT 5 as the selected model in the top corner.
With a planned livestream for tomorrow, August 7, 2025 at 1PM ET / 10AM PT / 6PM BST, this will turn out to be a pretty packed week for OpenAI. Yesterday, on August 5, 2025, OpenAI debuted two open-weight AI models, gpt-oss-120b and gpt-oss-20b, the latter of which is capable of running locally on a consumer PC.
GPT-5 would have a significantly more immediate impact, assuming it gets a wide rollout and could be in the hands of consumers soon after the livestream. Sam Altman did tease in a post on X on August 2, 2025, that OpenAI has “a ton of stuff to launch over the next couple of months--new models, products, features, and more” – so the August 7 livestream – err, LIVE5TREAM – could be the start of plenty of new features to try.
Of course, Altman also used that post to warn about capacity issues or ‘hiccups,’ so, as with other major launches, it could be a bit of a wait before you can try GPT-5 for yourself.
Either way, stick with TechRadar, as we’ll be reporting live on whatever OpenAI announces during its livestream tomorrow, and we’d bet on GPT-5. Like previous OpenAI announcements, we're expecting the event to be livestreamed on the brand's YouTube channel here.
The lines between traditional hardware providers and cybersecurity vendors are beginning to blur as printer brands enter the cybersecurity field, but hackers can still use your business printer as an easy backdoor into your corporate network.
Canon, long associated with cameras and office printing hardware, is now offering a tiered cybersecurity subscription aimed at protecting endpoint devices, documents, and data.
The offering includes two tiers: Enhanced and Premium - the former covers basics such as firmware updates and data backup, while the latter introduces proactive monitoring, threat detection, and rapid device recovery.
Canon security concerns
The launch follows closely on the heels of serious security concerns related to Canon's print infrastructure, including high-severity driver vulnerabilities and a possible network breach advertised on underground forums.
Just days before the new subscription service was announced, Microsoft’s offensive security team disclosed a critical vulnerability, CVE-2025-1268, affecting Canon’s printer drivers.
The flaw, which scores 9.4 on the CVSS scale, could enable attackers to halt printing or execute arbitrary code under certain conditions.
Canon issued advisories and urged users to update vulnerable drivers, particularly those tied to several production and office printer models.
While patching is essential, the persistence of such flaws highlights the broader risks that poorly secured print infrastructure can pose.
Adding to this unease, Canon has reportedly become the subject of underground listings offering root-level access to its internal firewall systems.
Though the company has not confirmed any such breach, security analysts continue to monitor claims circulating on dark web forums purporting to offer access that would allow attackers to create backdoors or move laterally through the corporate network.
Against this backdrop, Canon’s new Subscription Security Services may be seen as both a response to reputational risk and an attempt to reposition itself as more than a printer supplier.
Though these services resemble endpoint protection platform (EPP) features, they are focused solely on Canon’s device environment.
Whether this strategy gains traction depends on more than just Canon’s execution, as there is still a fair amount of skepticism around traditional hardware companies taking on roles typically reserved for antivirus and cybersecurity providers.
For businesses managing large fleets of print devices, consolidating protection through the hardware vendor may offer convenience, but it raises questions about scope, integration, and oversight.
If others in the hardware sector begin offering similar subscriptions, the market could see a gradual expansion of what constitutes EPP.
Via Cybersecurity News and Security Week
If you have a Google Pixel phone, make sure you’ve downloaded the latest security patch, as it includes several important updates that fix some potentially critical issues with your smartphone that you might not even be aware of.
There are a few high-level security flaws the patch solves, as well as one “critical” System vulnerability. According to Google, this flaw can be executed remotely (in combination with other bugs), and what’s more, it can be activated without any user interaction.
Yikes.
Google didn’t go into specifics about the hack beyond these details, but it doesn’t sound like one it would want to leave unaddressed.
Beyond security improvements, Google has also seemingly fixed a Back Button bug, which caused the button to stop working at times for users.
Here's a demo of the back button randomly not working on Android 16. I grabbed a bug report and submitted it to Google engineering along with this reproduction screencast. Hopefully, they'll figure it out. pic.twitter.com/nEmifqQRvb (June 14, 2025)
As you can see in the video above, users would swipe back on their Android 16 Pixel phone and nothing would happen – which isn’t ideal if you want to exit out of an app or conveniently return to a different screen.
It might have taken close to two months, but after beta users got the fix in July, the back button glitch should now be solved on all devices running Android 16’s stable version on their phone.
Are you ready to update?
If you want to upgrade your Pixel device, the patch is rolling out now to all Pixel tablets and phones launched since the Pixel 6 and Pixel 6 Pro. Those two phones launched in 2021.
With automatic updates enabled, you might have already updated. However, to find it manually, you can head to your Settings app, then search for System Update and hit the Check for updates button to see if you’re up to date on your software.
If you have the August patch, then you’re all set, though it can take up to a week for updates to be made available to everyone – so if you are still on July’s update and see no option to install August’s, don’t worry, you’ll just have to wait a little longer for a fix.
You've been told a million times about how wonderful vinyl is, but you hear a lot less about CDs – and that's a good thing, because the relative lack of trendiness means that the cost of good-condition CDs is often a fraction of what you'd pay for the same record on vinyl. If, like me, you like saving money as much as you like listening to music, then a CD player is still a smart addition to your system.
Chinese firm Shanling makes some impressive CD players, including ones with integrated amplification. And it's just brought out a new player called the CD80 II (via Darko Audio), with high-powered headphone amplification for wired over-ears and IEMs – and with high-quality Bluetooth streaming so you can play music from your phone, computer or tablet.
Shanling CD80 II: key features and pricing
The Shanling CD80 II takes the compact CD80 and delivers a new DAC system and a much-improved CD loader too. The 4th-generation CD loader comes from the more advanced CD-S100 model, with an HD450 laser and a familiar tray-style mechanism.
Inside, the ESS DAC of the previous model has been replaced by a Cirrus Logic CS43198, and it's teamed with dual SGM8262 headphone amps to drive the 3.5mm and 4.4mm balanced outputs. They deliver 215mW and 850mW into 32 ohms respectively, with an in-ear monitor-friendly output impedance of less than 1 ohm.
Bluetooth input uses Bluetooth 5.0 with support for LDAC as well as the familiar AAC and SBC, and there's also a USB input that supports drives of up to 2TB for playing files at hi-res audio quality up to PCM 384kHz and DSD256.
There's a lot going on here, but Shanling has managed to pack it into a very small 28 x 20 x 5cm case, so it's small enough for even the tightest setups.
The new Shanling CD80 II has a US recommended price of $359; other pricing hasn't been announced but in the UK the first-gen model had a typical price of £339.
The new KTC H27P3 monitor enters the market with a proposition that is hard to ignore: a 5K-resolution display priced at just $570.
The company is targeting professionals and general users who need a high-resolution panel without the premium price typically associated with 5K monitors.
The monitor is already available for preorder on KTC’s website, with shipping expected to begin in mid-August 2025.
Targeting creators with high specs at a modest price
The H27P3, which we first flagged back in April 2025, offers a 5120×2880 resolution IPS display @60Hz, designed to cater to creative tasks such as photo editing, graphic design, and color-critical work.
It also includes a 2560×1440 mode @120Hz, allowing users to switch between high clarity and smoother motion, depending on their needs.
KTC describes this as a “dual-mode” experience, combining visual precision with responsive performance.
The panel is factory-calibrated with a Delta E of less than 2 and supports 100% sRGB, 99% Adobe RGB, and 99% DCI-P3.
These are specifications that align with the expectations for a monitor for video editing or digital content creation.
HDR400 certification, a 500-nit brightness rating, and a 2000:1 contrast ratio suggest support for high dynamic range content, at least on paper.
In terms of design, the monitor features a thin-bezel frame and an aluminum stand offering tilt, swivel, pivot, and height adjustments.
At 3.75kg and with a compact form factor, it could also serve as a desktop-friendly portable monitor solution for professionals on the move or those working in tight spaces.
The stand is described as minimalist and sturdy, although its visual appeal and desk footprint may divide opinion.
Connectivity is broad, with DisplayPort 1.4, HDMI 2.0, USB-C with 65W charging, and two USB-A ports.
This gives the monitor flexibility to integrate with a wide range of setups, from desktop PCs and laptops to even gaming consoles.
A headphone jack is also included, and KTC ships the unit with the necessary cables and even a screwdriver.
Still, while the feature set is promising, it remains to be seen how the H27P3 performs in real-world scenarios - as specs like HDR400 and 8-bit+FRC can look good in marketing but often fall short in practice.
For now, the H27P3 stands out as an ambitious, budget-friendly business monitor that could appeal to a wide audience, provided it lives up to its claims once in use.
A new report from researchers at the University of Guelph and the University of Waterloo has uncovered a slight improvement in human detection of potential cybersecurity threats, but has warned we're still missing too many signs.
The small study of 36 participants (split equally between basic, intermediate and advanced PC users) had them face six separate software samples, half of which included malware, with varying levels of assistance.
Unassisted, participants scored an 88% malware detection accuracy when faced with the potential threats, which improved to 94% with the use of an enhanced Task Manager interface showing details like CPU usage, network activity, and file access.
Humans aren't too bad at detecting malware
Despite relatively strong detection, the researchers observed three key misconceptions.
Users commonly misinterpreted the UAC shield icon as a sign of security while also demonstrating a lack of understanding of digital certificates. The researchers also noted an overtrust in file names and interface aesthetics.
Users' detection techniques varied depending on their experience levels, with basic users relying heavily on superficial cues like icons, typos and aesthetics.
Intermediate users were able to improve their accuracy with additional system data, but advanced users often took a backwards step by over-analyzing threats, leading to false positives.
In this particular test, the researchers identified 25 separate secondary indicators that users rely on to determine whether something is a threat, on top of four primary indicators.
One of the paper's limitations mentions the fact that the participants knew they were looking to identify malware – unsuspecting victims downloading files from the web aren't often so lucky to have a heads-up.
Still, the research is especially valuable for developers, who can use the findings to tweak their software "to eradicate misconceptions and improve security related interfaces and notifications."
US healthcare company DaVita has revealed it suffered a ransomware attack and a data breach earlier this year which saw patient data stolen.
The company, which specializes in providing kidney care services, filed a new form with the Office of the Washington State Attorney General, in which it confirmed the attack took place between March 24 and April 12, 2025, and saw the criminals take people’s names, Social Security numbers (SSN), driver’s license numbers, Washington ID card numbers, financial and banking information, full dates of birth, health insurance policy or ID numbers, and other medical information.
In Washington state alone, more than 13,000 people were affected, with the full number of victims unknown at this time.
Interlock takes credit
DaVita also shared the data breach notification letter it’s been sending out to the victims, which stated it spotted the attack on April 12, and ousted the infiltrators on the same day. Third-party forensics experts were brought in, and law enforcement was notified.
The stolen data came from its dialysis labs database and, varying from person to person, could include certain clinical information such as health conditions, other treatment information, and certain dialysis lab test results.
“For some individuals, the information included tax identification numbers, and in limited cases images of checks written to DaVita.”
According to Cybernews, the attack was the work of the Interlock ransomware group, which emerged in late 2024 and has since broken into at least 51 organizations.
While the company says there is no evidence that the data is being misused in the wild, it urged its patients to be wary of incoming emails, especially unsolicited messages claiming to come from DaVita itself. Patients should also review their account statements and monitor their free annual credit reports for suspicious activity and potential errors.
DaVita has offered free identity theft protection and credit monitoring through Experian IdentityWorks.
You might also like
The Google Pixel 10 series is just over the horizon – this year’s Made by Google event is scheduled for August 20, and we’re expecting to see full reveals for the long-rumored Google Pixel 10, Google Pixel 10 Pro, Google Pixel 10 Pro XL, and Google Pixel 10 Pro Fold.
There’s been no slowdown in rumors as Google’s hardware showcase gets closer – quite the opposite. As well as getting a good look at the upcoming phones thanks to accidentally shared product images, we’ve been hearing plenty about the new software tools and features potentially coming to Google’s next-gen flagships.
Now, a new rumor suggests that the Google Pixel 10 series could launch with a new photography feature, dubbed Camera Coach, that uses AI to help users take better photos.
According to a report from Android Headlines, the new Camera Coach feature will analyze the camera feed and offer contextual suggestions, such as holding the camera at a different angle or looking for better lighting.
The Android Headlines report doesn’t name any further sources, but the site has a fairly solid track record with rumors and tip-offs.
The Google Pixel 9 Pro and Pixel 9 Pro XL already feature on our list of the best camera phones, so adding tools that help users get the most out of their phone’s powerful camera system seems like a no-brainer.
And as a skeptic of generative AI, I like that Camera Coach sounds like it'll be more of an assistive tool, designed to educate and equip users so that they can gain confidence in their own photography later on.
I can see this being especially helpful for newcomers to mobile photography or those who only take a snap every now and then.
However, I’ll reserve judgment until I see the final product – this has the potential to be a very useful or somewhat annoying feature, depending on how Camera Coach reacts to your choices and whether there’s a way to scale its advice up or down.
The Android Headlines report also suggests that the Google Pixel 10 series won’t get any major camera sensor upgrades, so software features like Camera Coach could prove important in deciding whether an upgrade is worthwhile.
Until then, be sure to check out our guide to the best Google Pixel phones in preparation for the Google Pixel 10 series reveal on August 20, and let us know if Camera Coach is something you’d use in the comments below.
I've had every generation of Apple TV since the first one, and you'll have to pry my Apple TV 4K from my cold dead hands. So you might expect me to be really excited by the prospect of a brand new model later this year – after all, there have been three years of potential tech improvements since the last one launched in 2022. But I'm feeling pretty underwhelmed by the latest report.
According to MacRumors' "reliable source", Apple is "highly likely" to replace the current Apple TV 4K with a newer model before the end of 2025.
That fits with the usual Apple timetable for its TV streamer; we're not on an iPhone-esque annual upgrade cycle with these devices. But the only thing that sounds really interesting to me about this reported device is that it may be considerably cheaper than the current model. That's great if you don't have one and want one. But what if you're already an Apple TV owner?
tvOS 26 brings some useful changes to existing Apple TV models. (Image credit: Apple)
Apple TV 4K 2025: what to expect
The reported improvements are all pretty predictable: Wi-Fi 7 (which many Apple devices don't support; Apple's current MacBook models have 6E), Apple's own design of Bluetooth/Wi-Fi chip, and a newer processor. 8K support is unlikely.
Don't get me wrong. A faster, more responsive processor would be nice, as would passthrough for uncompressed audio (though that feature is claimed to be coming to existing models too).
But a key part of my lack of excitement is that the Apple TV 4K is already one of the best TV streamers out there and has been for years, and that's more about the software than the hardware – and the things that bug me, such as Siri's frequent inability to understand even very simple voice searches, are software too.
I want my TV streamer to be the least interesting and flashy bit of kit in my home: at the end of a long day, when the kids have finally gone to bed or gone out, I want to press a button, get a bunch of stuff to watch, and hit play. Provided it looks and sounds good – and Apple TV+ on Apple TV 4K does both – that's about all I care about; I get my recommendations from people and publications rather than AI or algorithms, so I'm not really interested in anything AI.
And after years of hearing about how the next Apple TV will be the one that's great for gaming, I'm pretty cynical about that side of it: I've got multiple consoles and handhelds for me and the kids, and we just don't bother with gaming on Apple TV.
That means I'm really struggling to think of a feature that Apple could add that would make me want to upgrade beyond the usual "look at the new shiny!" that I'm often a sucker for.
I hope I'm pleasantly surprised by the new model, but I suspect this update is going to be primarily financial: a cheaper, slightly better version of what we've already got is more about getting new customers than exciting existing ones.
It's not that a new Apple TV would be bad. It's that the current one is already so good.