Gaming peripheral brand Turtle Beach has just announced three upcoming racing wheels, all targeting budget to mid-range sim enthusiasts, and they may just be great additions to our best racing wheels guide in the future.
The Turtle Beach VelocityOne Race KD3, Turtle Beach VelocityOne F-RX, and the Turtle Beach Racer are all available to pre-order today from the brand's website, and will launch simultaneously on September 9, 2025. All these products are part of the 'Designed for Xbox' lineup, meaning they're compatible with Xbox Series X and Xbox Series S as well as PC.
Starting with the Turtle Beach VelocityOne Race KD3, this is a direct drive racing wheel that includes a wheel, 'K: Drive' wheel base, and a set of pedals. The motor will deliver 3.2Nm of force feedback and up to 2,160 degrees of rotation. It sounds like a suitably powerful mid-range option in line with the Logitech G923, and will retail at $449.99 / £329.99.
Next is the Turtle Beach VelocityOne F-RX. Similar to the Thrustmaster Ferrari 488 GT3, this is a standalone wheel suited to serious racing sim enthusiasts, and could be a great choice for iRacing or F1 25. It looks to have all the buttons, dials and switches necessary for an immersive sim racing experience, and will be available individually for $249.99 / £189.99. The F-RX is compatible with K: Drive wheel bases, too.
Finally, we have a budget option in the Turtle Beach Racer. This looks to be the one to go for if you don't have room for a direct drive setup, and is more of a plug-and-play wheel. It has a lap mount if your only option is playing on the couch, and also supports wireless connectivity with up to 30 hours of battery life. Do keep in mind that there may be some slight latency issues there, though. The Turtle Beach Racer will retail at $179.99 / £139.99.
When Meta shocked the industry with its $14.3 billion investment in Scale AI, the reaction was swift. Within days, major customers (including Google, Microsoft, and OpenAI) began distancing themselves from a platform now partially aligned with one of their chief rivals.
Yet the real story runs deeper: in the scramble to amass more data, too many AI leaders still assume that volume alone guarantees performance. But in domains that demand spatial intelligence - robotics, computer vision, AR - that equation is breaking down. If your data can't accurately reflect the complexity of physical environments, then more isn't just useless; it can be dangerous.
In Physical AI, fidelity beats volume
Current AI models have predominantly been built and trained on vast datasets of text and 2D imagery scraped from the internet. But Physical AI requires a different approach. A warehouse robot or surgical assistant isn’t navigating a website; it’s navigating real space, light, geometry, and risk.
In these use cases, data must be high-resolution, context-aware and grounded in real-world physical dimensions. NVIDIA’s recent Physical AI Dataset exemplifies the shift: 15 terabytes of carefully structured trajectories (not scraped imagery), designed to reflect operational complexity.
Robot operating systems trained on these types of optimized 3D datasets will be able to operate in complex real-world environments with a greater level of precision, much like a pilot can fly with pinpoint accuracy after training on a simulator built using precise flight data points.
Imagine a self-driving forklift misjudging a pallet’s dimensions because its training data lacked fine-grained depth cues, or a surgical-assistant robot mistaking a flexible instrument for rigid tissue, simply because its training set never captured that nuance.
In Physical AI, the cost of getting it wrong is high. Edge-case errors in physical systems don’t just cause hallucinations, they come with the potential to break machines, workflows, or even bones. That’s why Physical AI leaders are increasingly prioritizing curated, domain-specific datasets over brute-force scale.
Building fit-for-purpose data strategies
Shifting from “collect everything” to “collect what matters” requires a change of mindset:
1. Define physical fidelity metrics
Establish benchmarks for resolution, depth accuracy, environmental diversity, and temporal continuity. These metrics should align with your system’s failure modes (e.g., minimum depth-map precision to avoid collision, or lighting-variance thresholds to ensure reliable object detection under specific conditions).
2. Curate and annotate with domain expertise
Partner with specialists (robotics engineers, photogrammetry experts, field operators) to identify critical scenarios and edge cases. Use structured capture rigs (multi-angle cameras, synchronized depth sensors) and rigorous annotation protocols to encode real-world complexity into your datasets.
3. Iterate with closed-loop feedback
Deploy early prototypes in controlled settings, log system failures, and feed those edge cases back into subsequent data-collection rounds. This closed-loop approach rapidly concentrates dataset growth on the scenarios that matter most, rather than perpetuating blind scaling; a minimal sketch of such a triage loop follows below.
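Concretely, a triage pass like the one below is one way to combine step 1's fidelity thresholds with step 3's feedback loop. Everything here is hypothetical: the field names, thresholds, and scenarios are placeholders for illustration, not a production pipeline.

```python
# Hypothetical sketch: gate captured samples against fidelity thresholds and queue
# failing scenarios for the next data-collection round. Thresholds and field names
# are invented for illustration.
from dataclasses import dataclass

@dataclass
class FidelityThresholds:
    max_depth_error_mm: float = 5.0   # depth-map precision needed to avoid collisions
    min_lighting_lux: float = 50.0    # below this, object detection is assumed unreliable
    min_fps: float = 15.0             # temporal-continuity floor

@dataclass
class CaptureSample:
    scenario: str
    depth_error_mm: float
    lighting_lux: float
    fps: float

def triage(samples, t):
    """Split samples into 'usable for training' and 'scenarios to re-capture'."""
    usable, recapture = [], set()
    for s in samples:
        ok = (s.depth_error_mm <= t.max_depth_error_mm
              and s.lighting_lux >= t.min_lighting_lux
              and s.fps >= t.min_fps)
        if ok:
            usable.append(s)
        else:
            recapture.add(s.scenario)
    return usable, recapture

# Failures logged during a pilot deployment feed the next round's capture plan:
usable, next_round = triage(
    [CaptureSample("dim_aisle_pallet", depth_error_mm=7.2, lighting_lux=22.0, fps=30.0),
     CaptureSample("loading_dock_noon", depth_error_mm=2.1, lighting_lux=800.0, fps=30.0)],
    FidelityThresholds(),
)
print(next_round)  # {'dim_aisle_pallet'} -> prioritized for re-capture
```

The point of the sketch is the shape of the loop, not the numbers: each failure observed in deployment becomes an explicit, named capture target for the next round.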
Data quality as the new competitive frontier
As Physical AI moves from labs into critical infrastructure (fulfillment centers, hospitals, construction sites), the stakes skyrocket. Companies that lean on off-the-shelf high-volume data may find themselves leapfrogged by rivals who invest in precision-engineered datasets. Quality translates directly into uptime, reliability, and user trust: a logistics operator will tolerate a misrouted package far more readily than a robotic arm that damages goods or injures staff.
Moreover, high-quality datasets unlock advanced capabilities. Rich metadata (semantic labels, material properties, temporal context) enables AI systems to generalize across environments and tasks. A vision model trained on well-annotated 3D scans can transfer more effectively from one warehouse layout to another, reducing re-training costs and deployment friction.
The AI arms race isn’t over, but its terms are changing. Beyond headline-grabbing deals and headline-risk debates lies the true battleground: ensuring that the data powering tomorrow’s AI is not just voluminous, but meticulously fit-for-purpose. In physical domains where real-world performance, reliability, and safety are at stake, the pioneers will be those who recognize that in data as in engineering, precision outperforms pressure (and volume).
This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro
As enterprise AI becomes more embedded into the fabric of everyday tools, the biggest challenge facing organizations isn’t AI adoption; it’s AI management. Gone are the days when AI features like meeting transcriptions or document summarization stood out as cutting-edge.
Today, they are expected. According to McKinsey's 2024 State of AI report, 72% of organizations have adopted at least one form of generative AI, and over half report using it in more than one business function. But this surge in adoption has led to a new operational crisis: AI sprawl.
What Is AI Sprawl and Why Does It Matter Now?
AI sprawl is the unchecked proliferation of AI tools and systems across departments, applications, and infrastructure without a unified strategy. The result? A chaotic digital ecosystem.
For example, companies eager to integrate AI across their tech stacks often deploy similar capabilities in silos - an AI assistant in a messaging platform, a different one in email, another in help desk software - without a shared interface or policy layer. This fragmented approach increases operational costs, confuses users, and makes compliance audits a nightmare.
The Rise - and Limits - of Vertical AI
Most enterprise AI today is what we call "vertical AI": narrow capabilities embedded directly into a specific tool, often by that tool’s own vendor. These AI features are excellent at solving bounded problems but struggle to scale across workflows or departments.
IDC research notes that organizations are spending up to 30% more per seat due to overlapping AI functionality across their application ecosystems. While each solution may serve a use case in isolation, collectively they add inefficiency and cost.
The Real Cost of Fragmentation
Here’s where AI sprawl hurts the most.
Instead of asking, “How many AI tools do we have?” CIOs and CTOs must ask, “How well do our AI systems work together?”
Interoperability means more than just integrations or connectors; it requires AI tools that can share context, adhere to consistent governance, and surface insights across platforms. This horizontal approach avoids the trap of buying more features and focuses instead on making those features work in concert.
Three Core Benefits of AI Interoperability
To navigate from fragmentation to function, enterprise leaders must pursue both operational alignment and robust governance practices. The good news is that AI sprawl is not an inevitable cost of innovation - it can be addressed proactively.
By taking a strategic approach that blends centralized governance with interoperable infrastructure, organizations can rein in AI fragmentation before it becomes unmanageable. The way forward is clear, actionable, and within reach.
In fragmented environments, IT and compliance teams are often required to support multiple incompatible permissioning models, audit trails, and deployment protocols. A centralized platform enables governance teams to monitor model performance and data lineage in real-time, reducing exposure while aligning AI use with evolving regulatory expectations.
Less Hype, More Harmony
Enterprise leaders need to stop chasing the next flashy AI feature and start focusing on cohesion, governance, and usability. The future isn’t about having the most AI; it’s about having the most effective, connected, and secure AI.
The maturity curve for AI adoption will increasingly reward organizations that move beyond fragmented experimentation. Those who consolidate capabilities and embed AI within core processes will unlock sustainable growth, resilience, and competitive advantage.
In the age of ubiquitous AI, everyone has tools, but not everyone has traction. The innovators aren’t the ones with the most features; they’re the ones who make it all work together. AI sprawl may be a modern challenge, but orchestrated intelligence is the competitive edge of tomorrow.
Artificial intelligence (AI) isn’t something on the horizon. It’s already part of how people are getting work done.
Recent research from HP and YouGov found that 72% of UK employees using AI tools say it saves them time every week. One in ten are saving more than five hours. Some are using it to reduce manual admin. Others say it helps them focus, collaborate more effectively, or feel more in control of their day.
But these gains aren’t coming from structured enterprise rollouts. In many cases, they’re the result of quiet experimentation - employees using what’s already at their fingertips, often without training or direction from IT.
At the same time, more than a quarter of UK businesses still report having no formal AI strategy. This creates a growing disconnect: employees are forging ahead on their own, while the organization risks falling behind. It’s not a technology gap; it’s a leadership one.
In my conversations with CIOs and IT leaders across the UK and wider Northwest Europe market, I hear a mix of urgency and uncertainty. Everyone agrees AI is critical to future competitiveness. But there are open questions around where to start, how to scale responsibly, and how to balance experimentation with governance.
That hesitation is understandable, especially in industries where risk and compliance frameworks are tight. But as more teams adopt AI organically, the absence of a centralized plan introduces its own risks - from data leakage to inconsistent performance and lost opportunities for enterprise-wide value.
A rare opportunity to re-architect from the ground up
The end of Windows 10 support in 2025 presents a strategic window. Many organizations are already reviewing their device strategies and digital estate planning. This moment, whether viewed as a compliance trigger or a chance to modernize, is an ideal time to align IT infrastructure decisions with longer-term goals around workplace tools and AI integration.
We’re seeing growing interest in AI-capable endpoint devices as part of that strategy. These systems offer local processing, reduced latency, and better data control - critical features for organizations managing hybrid environments or strict regulatory requirements. But while improved performance and privacy are important, the real benefit is this: AI becomes embedded, accessible, and usable without disrupting the way people already work.
I’ve spoken with IT leaders who are introducing AI incrementally through use cases that matter to employees: summarizing meetings, creating first drafts, reducing clicks. It doesn’t need to be complex to be effective, but it does need to be intentional.
From pilot mode to platform mindset
Too many organizations remain stuck in test-and-wait mode. A pilot project goes well, but momentum fizzles. There’s no clear business owner, no framework to expand, no metrics to track long-term impact. Here, AI remains confined to one team or workflow, useful but limited.
To unlock real value, businesses need to stop thinking in projects and start thinking in systems. That means moving AI out of isolated pockets and into the core of IT and business strategy. From what I’ve seen across sectors, this shift requires three mindset changes.
First, move from experimentation to prioritization. AI isn’t a side initiative anymore. It needs sponsorship, resourcing, and KPIs tied to outcomes the organization cares about - whether that’s productivity, cost savings, or faster decision-making.
Second, move from scattered adoption to secure design. Governance, data privacy, and accountability must be built in from the beginning. In regulated industries, this is non-negotiable. But even in more flexible sectors, employees need to know where AI fits and what the boundaries are.
Third, move from short-term rollout to long-term enablement. AI success isn’t about deployment alone. It’s about building trust, training users, and supporting adoption in ways that stick. That means investing in support infrastructure, not just software licenses.
Some of the most effective CIOs I’ve worked with are building cross-functional AI working groups that bring together IT, data, ops, HR, and business units. These teams aren’t just coordinating rollouts; they’re shaping roadmaps, reviewing risks, and evolving policies together. That kind of alignment isn’t flashy, but it’s what allows AI to move from tactical to transformative.
AI that works - for people and the business
Beyond the tech stack, there’s a broader benefit to consider. In the same HP and YouGov research, AI users reported lower stress, improved work-life balance, and greater satisfaction with their roles. When implemented well, AI doesn’t just make work faster; it makes it more manageable and more meaningful. That translates into retention, productivity, and culture shifts that directly affect the bottom line.
As IT leaders, we don’t just manage systems, we shape environments. Our job is to build the foundations that allow people to do their best work. And increasingly, that means designing ecosystems where AI can be adopted confidently, used securely, and evolved sustainably.
The momentum is already there. Employees are experimenting. The tools are ready. The opportunity now is to implement structure and take those individual wins and build a strategy that turns them into lasting, measurable impact.
Ever wondered what Windows will be like at the turn of the decade, when 2030 rolls around?
Windows Central discovered a video clip uploaded on Microsoft's YouTube channel in which its Corporate VP for OS Security, David Weston, provides his vision for Windows in 2030 (you can watch it below).
In the short interview, Weston delivers answers to some set questions which are mostly on the topic of security (unsurprisingly, given that's his expertise), AI, jobs, and the business world. He does address the title of the video at one point, though, and gives us his thoughts on how Windows might look by the end of the decade.
Weston observes: "I think we will do less with our eyes and more talking to our computers. And I truly believe that a future version of Windows, and other Microsoft operating systems, will interact in a multi-modal way."
"The computer will be able to see what we see, hear what we hear, and we can talk to it and ask it to do much more sophisticated things. I think it will be a much more natural form of communication."
Weston adds: "The world of mousing around and typing will feel as alien as it does to Gen-Z to use MS-DOS."
Much of the rest of the video discusses AI and jobs, as mentioned, and how we can expect AI to take over grunt work to free us humans up to do more interesting and creative tasks (or that's the long-held theory anyway).
And indeed, how future security experts will be AI bots that you'll interact with just like a real person, talking to them in video chats and meetings, or emailing to give them tasks.
Analysis: Far-fetched?
To me, this doesn't feel like a vision of Windows in five years' time (well, it's nearer four if we want to nit-pick, and I do), but a good deal further out than that. Weston does hint that this is a broader vision of a 'future version of Windows', though, and I get the gist: the future is 'multimodal' - moving away from the simple mouse and keyboard as the main inputs for the PC - and, of course, everything's built around AI (naturally).
Will the future of Windows be like this, though? I'm certainly not betting against it being focused heavily on AI, as that very much looks to be the case. In general, AI feels like an almost irresistible force in terms of where computers are heading, and Microsoft is clearly trying to jam more AI into Windows wherever it can - a path that the software giant is doubtless going to forge ahead with.
Today, I've been writing about clues hidden in the background of Windows 11 that suggest another AI agent might be coming to the taskbar in the desktop OS. That possible addition would live alongside the agent already introduced to the Settings app, which is a smart touch.
With powerful NPUs potentially set to be included in desktop chips soon, as well as Copilot+ laptops, AI is likely to become much more widespread in the world of PCs pretty swiftly. I'd even go as far as to guess that the next version of Windows won't be Windows 12, but Windows AI (or Windows Copilot maybe, if that's still the brand for AI); the focus on this arena is likely to be that strong.
There are promises, lofty ideas, and marketing around AI, though - and then the reality of what Microsoft can achieve. Remember when Copilot was first introduced to Windows 11? We were told it would be able to change a swathe of settings in the operating system based on a vague prompt from the user (like 'make me more productive'). That still hasn't happened, and appears to be firmly on the back burner.
Which is to say that while I don’t doubt that Microsoft has these big ambitions, whether a very different way of working with a Windows PC will happen in 2030 seems doubtful to me.
Granted, I can indeed envision that talking - giving voice commands (which are coming along nicely in Windows 11) - could become a much more important, but still supplementary, part of the Windows experience and interface. And AI (presumably) doing more sophisticated things, yes, fair enough - maybe even manipulating Windows settings in one fell swoop at the behest of the user will be realized in a manner that works well.
Hey, maybe Windows AI, or Windows 2030, or whatever it ends up being called, will finally get rid of the legacy Control Panel, as a commenter on Weston's video amusingly observes. Hah - it makes me feel giddy just to imagine it. This is a battle Microsoft has been fighting for far too long, after all.
But mouse-and-keyboard usage is being made to feel like the equivalent of us being forced to revert to the days of DOS, all text and tinkering with the config.sys and autoexec.bat files to get a PC game to work? That feels like more than a stretch, and something much, much further away in the Windows computing timeline - but I could be wrong.
Yes, Microsoft is still celebrating its 50th anniversary, and while the company has done a lot of looking back, it’s also looking forward. Err, at least taking a step forward.
Sure, we’ve seen some iconic Windows ugly sweaters, including one with Minesweeper and one with Clippy, but Windows XP is going where no other version of Windows has ever gone before – to Crocs.
TechRadar's confirmed with the tech giant that the Microsoft 50th Exclusive Crocs – aka the Windows XP Crocs – are official, and got five images of the shoes.
According to a report from The Verge, Windows XP Crocs are currently available for internal order by Microsoft employees – priced at $80 – with the story noting that the employees “get first dibs” ahead of a “worldwide launch.”
We’ve seen other collaborations from the Croc brand, with plenty of Disney properties included – I mean, kachow, Lightning McQueen Crocs that light up – along with fashion houses, and even McDonald's. The Windows XP Crocs, though, take the iconic green hills and blue skies wallpaper to the shoe form.
And I know what you’re thinking, but the images of the Windows XP Crocs do indeed confirm the existence of a Clippy Jibbitz (aka what Crocs calls their shoe charms). The Windows XP Crocs will come with an iconic helper as well as a pointer, the MSN butterfly, a classic Internet Explorer logo, the recycling bin, and a folder. That comes to a whopping six Jibbitz in total.
You also get a drawstring tote that's inspired by the classic, now iconic, Windows XP wallpaper. Microsoft did confirm the existence of the Crocs to us and shared these images, but didn't share anything more on pricing or availability.
At a reported price tag of $80, the Windows XP Crocs aren’t cheap, but if you’re a Microsoft collector or someone who’s also opted to get the previous ugly holiday sweaters, they might be the perfect shoe to add to your collection. Of course, I think many would be happy if Microsoft goes the route of other retro, nostalgia-fueled drops – it could be a fresh skin for Windows or even another wallpaper drop, and that would still be a great way to honor the 50th.
You might recall that Microsoft dropped a limited 50th anniversary edition of the Surface Laptop, which looked pretty snazzy. It’s also a more subtle way to celebrate 50 years of Microsoft than, say, blue and green Crocs.
Stick with TechRadar as once we learn more about pricing and how to get a pair of the Microsoft 50th Exclusive Crocs, we'll be sure to update this post.
What's better than having a couple of upward-firing Dolby Atmos speakers? Having a dozen of them. That's what Yamaha has delivered in its new True X Surround 90A soundbar system, aka the SR-X90A.
The True X Surround 90A is a high-end, high-spec home theater soundbar system with Dolby Atmos and DTS:X support that builds on the firm's True X 40 and 50 models, while incorporating tech from Yamaha's Sound Projector range.
According to Yamaha it delivers "an amazing home theater experience that goes beyond the realm of conventional soundbars".
Yamaha True X Surround 90A: key features and pricing
The SR-X90A takes the same beam technology from the YSP-1 soundbar and applies it to a dozen up-firing speakers powered by Yamaha's YDA-141 amplifier. There are six speakers dedicated to the height channels projecting upwards at each end of the soundbar, and according to Yamaha the results rival true ceiling-mounted speakers.
The very best Dolby Atmos soundbars tend to use four up-firing speakers (in the case of the Samsung HW-Q990F) or even five in the case of the LG S95AR. Some custom-install Dolby Atmos home theaters might use six in-ceiling speakers. 12 up-firing speakers is… hardcore.
The soundbar also features the Surround:AI processing from Yamaha's AV receivers, which is the first time it's been made available in a soundbar.
Those up-firing speakers are teamed up with newly developed eye-shaped oval drivers, which Yamaha says can deliver powerful audio without making the soundbar massive. There are four of these large oval drivers to cover the full frequency range, in conjunction with three tweeters. The speakers are arranged in left, center and right configurations on the soundbar's front.
There's a newly developed subwoofer too, which keeps Yamaha's patented symmetrical flare port, and which has an internal plate to control the airflow in order to reduce vibrations from air turbulence and speaker movement. Yamaha says port noise is reduced by up to 20dB compared to more conventional designs.
The True X Surround 90A uses Yamaha's True X wireless connectivity for soundbar, subwoofer and satellite speakers, and the rear speakers it comes with can also be used as stand-alone Bluetooth speakers.
The system also has Yamaha's MusicCast network system for multi-room audio and in-app customization and configuration, and it's Apple AirPlay compatible too.
The True X Surround 90A will be available from September 2025 with an expected recommended retail price of £2,499 / AU$4,499 (about $3,300) – you can expect a more concrete US price closer to the time, depending on the latest tariff situation.
But before then, you can expect our early verdict on this soundbar – it's on its way to us, and this looks like a very exciting addition to the world of the best soundbars.
If you struggled to solve many of the Wordle puzzles served up last month, then don't be too hard on yourself: it was the toughest month in the game's history.
I've crunched the numbers and, by my reckoning, it left every other month so far in the dust for difficulty, with an average score for the 31 games of 4.22.
That's according to the daily figures reported by WordleBot, the game's AI helper tool, which records the average among the many thousands of people who play. In turn, I've kept a list of those averages since the 'Bot launched in April 2022, meaning I now have a spreadsheet ranking 1,221 games by difficulty.
The bad news is that rather than being merely a statistical anomaly, that tough run may point the way towards Wordle's near future. My daily Wordle hints might well be even more useful from here on.
Wordle's month from hell
Regular Wordlers will be in no doubt as to the game's difficulty last month, with a string of near-impossible words causing all kinds of problems.
There was TIZZY, for instance, with its repeated letter Zs and average of 4.9, and POPPY with its triple Ps and 4.8 score.
FOIST might not look as difficult, but that was a classic example of Wordle's letter-trap games, where the first letter can be changed to make several other words, in this case JOIST, HOIST, and MOIST; that one also came in at 4.8.
SAVVY, with its double Vs, also hit that score, while BALER at 4.7 was one of the nastiest ER games we've had recently. EXILE (4.6), NERVY (4.5), and FRILL (4.4) also caused problems, and the fact that LORIS (4.2) can be considered easy in this company points towards the overall difficulty.
July 2025 in Wordle: the 10 toughest

Game | Answer | Date | Average score | My score
1497 | GOFER | Friday, 25 July 2025 | 5.6 | 5
1482 | JUMPY | Thursday, 10 July 2025 | 5.2 | 5
1493 | TIZZY | Monday, 21 July 2025 | 4.9 | 4
1475 | POPPY | Thursday, 3 July 2025 | 4.8 | 6
1487 | FOIST | Tuesday, 15 July 2025 | 4.8 | 4
1500 | SAVVY | Monday, 28 July 2025 | 4.8 | 4
1477 | BALER | Saturday, 5 July 2025 | 4.7 | 4
1484 | EXILE | Saturday, 12 July 2025 | 4.6 | 4
1488 | NERVY | Wednesday, 16 July 2025 | 4.5 | 4
1503 | FRILL | Thursday, 31 July 2025 | 4.4 | 4
But the two worst last month were JUMPY and GOFER – with scores of 5.2 and 5.6, respectively.
Only 21 games in Wordle's history have passed the 5.0 mark (with another nine at exactly that score), so to get two in the space of two weeks is the stuff of nightmares.
JUMPY's problem was that J at the start; as my analysis of every Wordle answer shows, J is the least common letter in the game by far, so spotting it is rarely easy. The existence of LUMPY, DUMPY, and BUMPY will also have been a factor.
With GOFER, meanwhile, it was the combination of an ER ending (the most common in the game) and the not-so-common letters G and F that caused the issue. Its 5.6 score places it as the equal fourth hardest ever, behind only PARER, MUMMY, and CORER, and level with ROWER.
If you failed to solve any of them – or even all of them – then it's entirely understandable; Wordle is a simple game, but it can be fiendishly tricky at times.
Tougher than the rest
WordleBot only launched in April 2022, a couple of hundred games into the series, so it's possible that December 2021 or March 2022, or another month, was even more difficult. But I doubt it. I've played every Wordle so far and lost only once, and I certainly don't recall anything like July 2025.
To confirm my hunch, I tallied the average score for each day to get the overall average for July, then repeated the process for each of the other 38 months for which I have full details.
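If you keep a similar log and want to reproduce the tally, it only takes a few lines. Here's a minimal sketch, assuming the daily WordleBot averages live in a simple CSV with 'date' and 'average' columns (the file name and column names are my own, not any official export format):

```python
# Minimal sketch: roll daily WordleBot averages up into a per-month average.
# Assumes a CSV with a header row of "date,average" and rows like "2025-07-25,5.6".
import csv
from collections import defaultdict

def monthly_averages(path):
    buckets = defaultdict(list)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            buckets[row["date"][:7]].append(float(row["average"]))  # key e.g. "2025-07"
    return {month: round(sum(vals) / len(vals), 2) for month, vals in sorted(buckets.items())}

# e.g. monthly_averages("wordlebot_daily.csv") might return {..., "2025-07": 4.22}
```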
One thing I found interesting was that July 2025 wasn't just the most difficult so far – it was the most difficult so far by a long way.
The month's overall average of 4.22 might not sound that much higher than that of October 2024, the next highest in the list at 4.15, but that 0.07 gap is substantial given that there's only a 0.04 difference between months #2-7 in the list.
Plus, it's a whopping 0.57 guesses harder than the easiest month, December 2023 – which came in at 3.65 – and way higher than the game's overall average of 3.97.
Hard times are coming
One notable feature of July's Wordles was that there were five 'non-original' answers among the 31 games.
When Josh Wardle created Wordle, he and his partner drew up a list of 2,315 words which would form the game's answer list, then scheduled them to appear one a day for the next six or so years.
The New York Times removed a few of those when it bought Wordle in 2022, then left the list more or less unchanged for the next year. Then, in March 2023, it gave us GUANO – the first 'extra' solution added to the original pool and the start of a new era for Wordle.
More have followed since then, 17 in total, including such gems as UVULA, SNAFU, PRIMP, and MOMMY, all of which have been hard in their own right. But these words have been spaced apart, with most months seeing just one or none at all.
There have been exceptions, with June 2023, November 2024, and May 2025 all having two, and January 2025 having three. But to get five in a month, as we did in July, was unprecedented.
And all were difficult: ATRIA was a 4.1, NERVY a 4.5, LORIS was 4.2, TIZZY 4.9, and GOFER that immense 5.6. The average across those five games was a staggering 4.66; these were all genuine head-scratchers.
And the thing is, the NYT is going to have to keep adding more of these as time goes on. That's because we're now past 1,500 Wordles, meaning we have only around 800 original games left, with as yet no idea what will happen to the game when that list runs out.
The smart thing for the NYT to do is extend it for as long as possible, which means adding more words. And there lies the problem. Wardle's list already covers many of the most obvious five-letter words in English, so we can expect the majority of the newly added words to be more difficult than the average.
So you can forget about classic Wordle start words such as STARE, CRANE, and SLATE being added – they've all already been and gone. So too ultra-common English words such as HOUSE, TODAY, and BELOW; they've all been past Wordle answers too. Instead, you can look forward to more like BALSA, KAZOO, BEAUT, SQUID, and TAUPE; uncommon words, slang words, words with uncommon letters…
August initially continued the July trend, with BANJO and DAUNT both coming in at 4.4, but the next few games were a little easier; maybe the NYT was giving us all a breather. But don't be surprised if things get tougher again soon, because this game is only going one way from here. Don't say you weren't warned.
Google’s AI world model has just received a significant upgrade, as the technology giant, specifically Google DeepMind, is introducing Genie 3. This is the latest AI world model, and it kicks things into the proverbial high gear by letting the user generate a 3D world at 720p quality, explore it, and feed it new prompts to interact with or change the environment, all in real time.
It’s really neat, and I highly recommend you watch the announcement video from DeepMind that’s embedded below. Genie 3 is also keenly different from, say, the still impressive Veo 3, as it offers video with audio that goes well beyond the 8-second limit. Genie 3 offers multiple minutes of what Google calls the ‘interaction horizon,’ allowing you to interact with the environment in real-time and make adjustments as needed.
It’s sort of like if AI and VR merged; it lets you build a world off a prompt, add new items in, and explore it all. Genie 3 appears to be an improvement over Genie 2, which was introduced in late 2024. In a chart shared within Google’s DeepMind post, you can see the progression from GameNGen to Genie 2 to Genie 3, and even a comparison to Veo.
Google's also shared a number of demos, including a few that you can try within the blog post, and it's giving us choose-your-adventure vibes. There are a few different scenes you can try, such as a snowy hill, or even a goal you'd want the AI to achieve within a museum environment.
Google sums it up as, “Genie 3 is our first world model to allow interaction in real-time, while also improving consistency and realism compared to Genie 2.” And while my mind, and my colleague Lance Ulanoff’s, went to interacting in this environment in a VR headset to explore somewhere new or even as a big boon for game developers to test out environments and maybe even characters, Google views this as – no surprise – a step towards AGI. That’s Artificial General Intelligence, and the view here from DeepMind is that it can train various AI agents in an unlimited number of deeply immersive environments within Genie 3.
Another key improvement with Genie 3 is its ability to persist objects within the world – for instance, we observed a set of arms and hands using a paint roller to apply blue paint to a wall. In the clip, we saw a few wide stripes of rolled blue paint on the wall, then turned away and looked back to see the paint marks still in the correct spots.
It’s neat, and similar to some of the object permanence that Apple’s set to achieve with visionOS 26 – of course, that’s overlaying onto your real-world environment, so maybe not quite as impressive.
DeepMind lays out the limitations of Genie 3, noting that in its current version, the world model cannot “simulate real-world locations with perfect geographic accuracy” and that it only supports a few minutes of interaction. Genie 3's minutes of capability are still a significant jump over Genie 2, but it’s not enabling hours of use.
You also can’t jump into the world of Genie 3 right now. It’s available to a small set of testers. Google does note it’s hoping to make Genie 3 available to other testers, but it’s figuring out the best way to do so. It’s unclear what the interface to interact with Genie 3 looks like at this stage, but from the shared demos, it’s pretty clear that this is some compelling tech.
Whether Google restricts its use to AI research and training, or it explores generating media, I have no doubt we’ll see Genie 4 here in short order … or at least an expansion of Genie 3. For now, I’ll go back to playing with Veo 3.
If someone were casting a robot action hero movie and they didn't want a humanoid (or Arnold Schwarzenegger), they might hire the all-new Unitree A2. This quadruped robot can run, jump, climb, scamper down hills, tumble, carry a heavy load, and, yes, as depicted in the promo video, smash through a plate of glass. All that's missing here is the blockbuster action movie soundtrack.
With a max speed of 11.2mph, the new A2 is something of a landmark in quadrupeds, outrunning the standard Boston Dynamics Spot robot by almost 8 miles per hour.
The A2 Stellar Explorer model depicted in the video is built for all kinds of rugged and uneven terrain. Its 30cm step height helps it step over rocks and mount stairs with abandon. It's also comfortable with a 45-degree incline up or downhill.
While it lacks a head or any features that might help you easily anthropomorphize it, it can "see". The odd-looking quadruped uses front and rear-mounted LiDAR to monitor its environment and make on-the-fly adjustments.
In the video, the A2 doesn't get every step right, but even a trip, fall, or tumble doesn't appear to stop the propulsive A2's 12 high-density motors.
It's also ready to carry significant loads. In the video, an adult man stands on the A2's back, demonstrating its max standing load capabilities of 220 lbs. In motion, it can carry a 25lb pack.
While Unitree is not specifying battery life, the A2 does come equipped with a 9,000mAh battery and the option of an 18,000mAh dual battery pack. In the video, it carries a 30kg payload for over 3 hours across almost 13km.
As for its readiness for the great outdoors, the A2 is rated IP56, which means it can withstand a jet of water, and we would assume that also means a rainstorm. It does not sound like it can handle any kind of water submersion, though.
Pricing and availability have not been set, but Unitree has been coming in below the industry average when it comes to price. We'll see if the A2 fits that model.
In a bizarre yet intriguing experiment, musician and science enthusiast Benn Jordan has explored whether birds could act as a living storage medium.
The bird in question, a young starling, had been rescued as a chick and raised by humans after apparently being abandoned near a noisy train track.
As it turns out, such early exposure made the starling unusually receptive to sounds not typically found in nature - including reverb-heavy speech and mechanical noises.
Turning images into sound, then back again
The starling’s vocal learning abilities were central to the experiment, as unlike parrots, which were dismissed for this trial, songbirds possess a complex vocal organ called a syrinx, capable of highly refined modulation.
Jordan believed this could make them ideal candidates for reproducing complex audio waveforms.
His goal was to see whether the bird could retain and reproduce a sound-based version of an image - specifically, a line drawing of a bird encoded as an audio waveform.
The experiment involved encoding a PNG image into a waveform using a spectral synthesizer.
Jordan played this to the bird repeatedly, attempting to ‘upload’ the image into its memory.
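Jordan hasn't published his exact pipeline in the article, but the general image-to-audio trick is well established: treat the picture as a spectrogram, where each pixel column becomes a moment in time and each row a sine partial. Here's a rough, hypothetical sketch of that general approach; the file names, frequency range, and timing are all assumptions, not his settings.

```python
# Hypothetical sketch of the general technique (not Jordan's actual code): each column
# of a grayscale PNG becomes one time slice, each row a sine partial whose amplitude
# is that pixel's brightness. A spectrogram of the output should show the drawing.
import numpy as np
from PIL import Image
from scipy.io import wavfile

def image_to_audio(png_path, out_path="encoded.wav", sr=44100,
                   f_lo=1000.0, f_hi=8000.0, sec_per_col=0.05):
    img = np.asarray(Image.open(png_path).convert("L"), dtype=np.float64) / 255.0
    img = np.flipud(img)                       # row 0 becomes the lowest frequency
    n_rows, n_cols = img.shape
    freqs = np.linspace(f_lo, f_hi, n_rows)    # one sine partial per pixel row
    t = np.arange(int(sr * sec_per_col)) / sr
    slices = []
    for col in range(n_cols):
        # Sum of sines, weighted by this column's pixel intensities
        weights = img[:, col][:, None]                                   # (n_rows, 1)
        slices.append((weights * np.sin(2 * np.pi * freqs[:, None] * t)).sum(axis=0))
    audio = np.concatenate(slices)
    audio /= np.max(np.abs(audio)) + 1e-9                                # normalize to [-1, 1]
    wavfile.write(out_path, sr, (audio * 32767).astype(np.int16))
```

Decoding works in reverse: run the recording through a spectrogram and, if the bird has imitated the signal faithfully enough, the original line drawing reappears.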
While this may sound far-fetched, something unexpected happened during post-analysis of the recorded sessions.
Amid hours of playback data, a familiar waveform emerged - one resembling the original image - and it appeared later in the session, after Jordan had stopped feeding the sound to the bird.
This suggests the starling itself may have recreated the image waveform vocally.
Jordan estimated the bird reproduced the signal in the same frequency range in which it was originally encoded, transferring roughly “176 kilobytes of uncompressed information.”
Using speculative math and assuming compression, he suggested the bird might have delivered data at around “2 megabytes per second.”
That rate exceeds typical DNA storage readout speeds, though obviously lacks the permanence or reliability of more established media like an external SSD or even a portable HDD.
While the experiment is undeniably creative, it invites skepticism.
Songbirds may imitate sounds, but equating that with consistent, structured data retrieval feels premature.
Unlike an SSD, which offers fast and repeatable access to stored information, a starling cannot guarantee stable performance or retention.
Even if the bird stores the data, how do you get it to sing when you need the data? What about security? The bird can give the data to whoever it deems fit.
The idea of using birds to hold digital data lacks not only scalability but also control - it can even literally fly away with your data.
Although the starling reproduced a sound resembling the encoded image, whether this truly constitutes data storage in any usable sense remains debatable.
At best, this unusual case offers a poetic intersection of biology and computation and at worst, it’s a fleeting curiosity unlikely to replace DNA storage, let alone your external HDD.
Via TomsHardware
A global network of more than 5,000 fake pharmacy websites has been uncovered by security experts.
Designed to mimic legitimate drug retailers, the platforms sell counterfeit or unregulated medications while harvesting sensitive personal and financial data.
In many cases, they target buyers searching for discreet access to treatments like erectile dysfunction pills, antibiotics, steroids, and weight-loss drugs.
The blurred line between help and harm
According to a recent threat report by Gen, the “PharmaFraud” operation relies on a combination of deceptive site design and technical manipulation.
The sites often use AI-generated health articles, falsified reviews, and misleading ads to gain visibility and credibility.
Many of them are structured to bypass basic trust indicators, omitting business credentials and using insecure payment methods such as cryptocurrency.
The danger is not limited to the quality of the drugs sold, as these websites often prompt users to enter private medical details, upload documents, or provide payment information, all of which can be exploited in secondary fraud campaigns.
Even when a product is delivered, there is no guarantee it is safe or effective - some may be expired, contaminated, or simply fake, posing risks well beyond financial loss.
The report also noted a broader rise in cyber threats targeting individuals and small businesses. Financial scams increased by 340% in just three months, often using fake ads and chatbot forms to impersonate legal or investment services.
Tech support scams - frequently appearing as browser popups - rose sharply as well, with many users lured into calling fake help lines.
Staying safe from fake pharmacy scams and related cyber threats requires a combination of awareness and practical digital precautions.
How to stay safe
OpenAI has just dropped two new AI models, gpt‑oss‑120b and gpt‑oss‑20b. Not only are they new, but they're the first open‑weight models from ChatGPT's creator since GPT‑2.
The smaller of the two – gpt-oss-20b – is especially notable for being light enough to run on a decently specced consumer PC. If you’ve got about 16GB of RAM and some patience, you can load it up, ask it questions, and actually see how it arrives at answers. The larger 120b model still requires serious hardware or cloud support.
Both models are part of OpenAI's new push to encourage developers to play around with the models and even commercialize them for the average user. For the first time in years, developers and curious individuals alike can download and run OpenAI models on their own machines, inspect how they think, and build on them freely. They're available via Hugging Face and AWS under the Apache 2.0 license.
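If you want to poke at the smaller model yourself, the path of least resistance is the Hugging Face transformers library. Here's a minimal sketch, assuming the model is published under the id "openai/gpt-oss-20b" and that your machine (roughly 16GB of memory, per OpenAI's guidance) can hold it; check the model card for the exact requirements and recommended settings.

```python
# Minimal local-inference sketch (assumptions: the Hugging Face model id is
# "openai/gpt-oss-20b", and a recent transformers + accelerate install is present).
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="openai/gpt-oss-20b",
    device_map="auto",    # spread layers across GPU/CPU as available
    torch_dtype="auto",   # let the library pick a memory-friendly dtype
)

prompt = "Explain in two sentences what an open-weight model is."
result = generator(prompt, max_new_tokens=128)
print(result[0]["generated_text"])
```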
Being open weight means the models provide a level of transparency and independence that most people haven’t had since ChatGPT first went viral. The model's reasoning is visible in real time, so you can see how its 'logic' leads to the options it considers for a response and how it settles on a final answer.
That’s a big shift for OpenAI. The company has spent several years restricting access to its most powerful tools, offering only API endpoints and paid tiers. Now it’s returning a little bit to the GPT-2 era, but with far more capable models. Even so, the lighter model isn't something everyone will rush to as a replacement for the ChatGPT app.
"gpt-oss is out! we made an open model that performs at the level of o4-mini and runs on a high-end laptop (WTF!!) (and a smaller one that runs on a phone). super proud of the team; big triumph of technology." (social media post, August 5, 2025)
Open weight access
The flexibility provided by the new models could be a boon for OpenAI as the open-weight approach becomes more popular. DeepSeek, Meta, and Mistral have all released open models in some fashion recently. But most have turned out to be semi-open, meaning they are trained on undisclosed data or come with restrictive terms and usage limits.
The gpt-oss models are straightforward in offering the weights and license, though the training data remains proprietary. And OpenAI’s gpt-oss models bring compatibility with OpenAI’s widely used interface, as well as a bigger window into how the model makes its decisions, which helps them stand out.
So, where DeepSeek models tend to emphasize the raw power and relatively low-cost performance of their models, OpenAI is more interested in explaining how the models work. That will pique the interest of many developers trying to learn or experiment. You can literally pause and look at what’s going on inside the model.
It’s also a signal to the developer community: OpenAI is inviting people to build with its tech in a way that hasn’t been possible for years. These models don’t just output chat completions; they offer a foundation. With the Apache license, you can fine-tune, distill, embed, or wrap them into entirely new products. And you don’t need to send data to a third party to do it. Those hoping for a more decentralized AI ecosystem are likely to be pleased.
For those of us who are less technical, the main point is that you might see AI apps that run very well without needing a subscription price from you, or personalized tools that don't send your data to the cloud. With this release, OpenAI can claim to be willing to share more with developers, if not quite everything.
For a company that’s spent much of the past two years building closed systems and premium subscriptions, releasing a pair of models this capable under open licenses feels like a major philosophical change, or at least the desire to make such a change seem real.
And while these two models don't represent the push into the era of GPT-5, OpenAI chief Sam Altman has teased that even more announcements are on the horizon, so it's likely that the next-generation model is still being readied.
It's that time again: each of the best streaming services gives you only a limited window in which to watch certain films, and that means every licensed movie will eventually face the final curtain – or at least, the final one until the rights are renewed and it reappears in a new catalog.
Some of this month's movie exits from HBO Max will be missed more than others – for me, it's a sad goodbye to Detective Pikachu and a 'don't let the door hit your ass on the way out' to Ted 2 – but some of this month's departures include films that are as watchable as any of the best Max movies.
I've chosen three very different movies for you to catch while you can. One's a family-friendly animation, one's a surprisingly dark 80s actioner with a Shane Black script, and one is a drama featuring one of the world's biggest movie stars in front of and behind the camera. But while all three are different kinds of movie, I think there's something in all of them that makes them worth watching.
Two of the movies here are also interesting because of their influences and influence: while Lethal Weapon wasn't the first buddy-cop movie, it set the template for the decade and beyond, and without the clearly Hitchcock-inspired Play Misty For Me there would be no Fatal Attraction. To the movies!
Lethal Weapon
Multiple Lethal Weapon movies are on their way out from Max this month, but if you're tight for time then only the first two are must-watch action flicks: after that the quality nosedives, with Lethal Weapon 3 struggling to get a 60% rating and the utterly inessential fourth movie garnering a frankly rotten 52% from the critics.
There's some argument over which of the first two Lethal Weapons is superior. Many people plump for the sequel, but for me the first movie is the best. In this film Martin Riggs (Mel Gibson) isn't movie-crazy; he's crazy-crazy, Mel-Gibson-stopped-by-a-cop scary: he's going out of his mind with grief and that makes him incredibly dangerous to others and to himself. That gives the first movie a weight that the more conventional buddy sequels don't carry.
"Lethal Weapon is a film teetering on the brink of absurdity when it gets serious," Variety wrote, but "thanks to its unrelenting energy and insistent drive, it never quite falls." Reviewing the 4K re-release, Starburst said that it "stands out as the epitome of 1980s shoot-'em-ups, set in a bygone Hollywood fantasy world where cops and guns are great and an action star like Gibson flexing his pecs and martial arts skills would fill up cinemas. It’s not difficult to see why it’s considered a classic – the action is bloody fun and the buddy cop chemistry strong."
Cloudy With a Chance of Meatballs
One of the things I really love about animation is that it makes the impossible possible, and this cute kids' film – which is fun for adults too – is a great example of that: you'll believe a man can fry (sorry).
Cloudy With a Chance of Meatballs has a great premise and a great cast too: "Where else can you find the varied likes of Bill Hader, Anna Faris, James Caan, Bruce Campbell, and – yes – Mr. T together and all on their A game?" says The Movie Report. It's about an eccentric inventor (Hader) whose machine makes it rain all kinds of food, saving a struggling fishing town from its sardine-based sadness. But then the machine goes out of control with often hilarious consequences.
Reviews were mixed, and it's definitely not up there with the likes of DreamWorks' or Pixar's best. But as Empire put it, it's "no Pixar, but a lot of fun." The film is "bright, silly and a good shout to entertain the whole family."
Play Misty For Me
Let's deal with the elephant in the room first: this film was made in 1971 and that means its sexual politics, its understanding of mental illness and director and lead actor Clint Eastwood's sideburns and flares have all aged terribly. But it's an effective potboiler about a late-night DJ whose one night stand with a troubled woman (Jessica Walter) turns into something sinister.
"Eastwood... has obviously seen Psycho and Repulsion more than once," TIME Magazine said, "but those are excellent texts and he has learned his lessons passing well." The Chicago Reader agreed: "Clint Eastwood wisely chose a strong, simple thriller for his first film as a director, and the project is remarkable in its self-effacing dedication to getting the craft right."
The film enabled Eastwood, by then a huge movie star, "to be unsympathetic, selfish and – in the words of the title song – as helpless as a kitten up a tree," Empire wrote, describing it as "thrilling" and giving the film four out of five stars.
Micron has announced a major expansion of its storage lineup with a new entry in the high-capacity SSD space, the 6600 ION.
The company says this PCIe Gen5-based SSD is now available in a 122TB configuration and is expected to scale up to 245TB in early 2026.
The company is positioning its new model as a direct challenge to hard disk drives in hyperscale and enterprise data centers, aiming to offer greater efficiency in terms of power consumption, physical space, and storage density.
Hard drive alternative for data-heavy environments
The 6600 ION is part of a broader portfolio that also includes the 9650 PCIe Gen6 and the 7600 SSD for low-latency tasks.
All three products are built on Micron's G9 NAND, which the company claims enables significant performance and capacity gains.
“With the industry’s first PCIe Gen6 SSD, industry-leading capacities and the lowest latency mainstream SSD—all powered by our first-to-market G9 NAND—Micron is not just setting the pace; we are redefining the frontier of data center innovation,” said Jeremy Werner, senior vice president and general manager of Micron’s Core Data Center Business Unit.
Micron claims the 6600 ION can deliver up to 88PB per rack, which is huge considering that many of its rivals are still below 40PB per rack.
With support for up to 36 E3.S SSDs in a 2U server, the design enables up to 4.42PB per server.
“With Supermicro’s broadest selection of Petascale storage optimized servers supporting up to 36 E3.S SSDs, the Micron 6600 ION enables up to 4.42PB per 2U server delivering the highest density and power efficiency for large capacity AI workloads,” said Michael McNerney, senior vice president, Marketing and Network Security at Supermicro.
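Those density figures line up with some simple arithmetic. The numbers below are my own back-of-envelope check, not Micron's: the exact drive capacity and the servers-per-rack count are assumptions.

```python
# Back-of-envelope check of the density claims; drive capacity and servers-per-rack
# are my assumptions, not figures from Micron.
drives_per_2u_server = 36            # E3.S bays cited by Supermicro
drive_capacity_tb = 122.88           # a "122TB-class" drive
per_server_pb = drives_per_2u_server * drive_capacity_tb / 1000   # ~4.42 PB, matching the claim
servers_per_rack = 20                # assumption: twenty 2U servers in one rack
per_rack_pb = per_server_pb * servers_per_rack                    # ~88 PB, matching "up to 88PB"
print(f"{per_server_pb:.2f} PB per server, {per_rack_pb:.0f} PB per rack")
```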
The 6600 ION reportedly delivers a 67% density improvement over previous alternatives.
Micron suggests this could become the largest SSD available commercially, allowing data centers to store exabytes of information with improved energy efficiency.
However, its role in actually replacing hard drives will depend on long-term endurance, cost-per-terabyte economics, and compatibility across platforms.
That said, the 6600 ION reportedly uses only 1 watt per 4.9TB, a figure that undercuts the power draw of traditional HDD arrays.
Micron projects that installations scaling to 2 exabytes could result in daily energy savings equivalent to powering 124 U.S. homes.
These claims point to significant operational savings, but large-scale deployment will depend on more than just power metrics.
As Micron eyes leadership in both the fastest-SSD and largest-SSD categories, the actual shift from HDDs will rely on sustained performance under pressure and meaningful cost advantages across the board.
Microsoft is warning users of older versions of Office that they will soon be losing access to certain voice tools, including transcription, dictation and read aloud, as of January 2026.
The dropped features include systems to read documents and emails aloud, speech-to-text conversions and voice-to-text input, but those who fail to update to Office version 16.0.18827.20202 or newer will lose out.
This applies to most users; however, Government Cloud customers, including GCC, GCC High and DoD environments, will have an additional two months to apply the change.
Older Microsoft Office versions losing voice tools soon
"To ensure continued high-quality performance of the Read Aloud, Transcription, and Dictation features in Microsoft 365 Office apps, we're upgrading the backend service that powers these capabilities," the notice warned.
Microsoft justified the change by explaining that it is upgrading the backend service that powers these voice features, so older versions must lose support to ensure ongoing compatibility with newer versions.
Word, Outlook, OneNote and PowerPoint are among the most commonly used apps to lose support for voice tools, and there will be no local fallback once the deadline passes. Perpetual license holders will already be used to limited functionality, as those versions lack most cloud-powered voice tools.
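If you're unsure whether a given install clears that bar, the check amounts to comparing dotted build numbers numerically rather than as strings. The short Python sketch below illustrates the comparison against the 16.0.18827.20202 cut-off; the function names and sample version strings are hypothetical, and this isn't anything Microsoft ships – you'd read the installed build from the Account page of any Office app.

```python
# Minimal illustration: does an Office build meet the minimum version
# Microsoft cites (16.0.18827.20202) for keeping the voice features?
# The sample installed versions below are hypothetical placeholders.

MINIMUM_BUILD = "16.0.18827.20202"

def parse(version: str) -> tuple[int, ...]:
    """Turn a dotted version string into a tuple of ints for numeric comparison."""
    return tuple(int(part) for part in version.split("."))

def keeps_voice_features(installed: str) -> bool:
    """True if the installed build is at or above the cut-off."""
    return parse(installed) >= parse(MINIMUM_BUILD)

print(keeps_voice_features("16.0.18526.20168"))  # False -> update needed
print(keeps_voice_features("16.0.19000.20100"))  # True -> voice tools retained
```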
This isn't the only change that Office users are facing in the coming months – by October 14, 2025, Office 2016 and 2019 will reach end of extended support. Office apps on Windows 10 also lose support later this year, with Windows 10 itself reaching end of support in October too.
With numerous Microsoft-related deadlines fast approaching, the company is finally seeing an uptick in adoption of its latest software: Windows 11 installs overtook Windows 10 installs for the first time in July 2025, after a brief crossover moment in June.
You might also likeNintendo has released a new overview trailer for Kirby and the Forgotten Land – Nintendo Switch 2 Edition + Star-Crossed World ahead of the game's release later this month.
The Switch 2 version was first announced during the console's reveal, which confirmed enhancements, additional features, and the Star-Crossed World downloadable content (DLC).
In this new trailer, we got a fresh look at the brand-new Starry Stages, which will be added alongside the story DLC. After a meteor crashes into the Forgotten Land, aspects of the world and familiar locations are altered, creating new paths and areas to discover.
During the Switch 2 Edition reveal, we got a first look at Spring Mouth, but the latest trailer confirms that two additional Mouth Mode Transformations will be featured: Gear Mouth and Sign Mouth.
With Gear Mouth, players can latch onto walls and roll vertically. Sign Mouth allows players to slide along slopes, jump in the air to reach things, and execute a spin attack on enemies, while Spring Mouth lets Kirby "Smash Down" on enemies.
New activities are also on the way. Players will be able to collect Starry Coins in each stage and spend them on the Gotcha Machine EX from Astronomer Waddle Deeto to unlock new figures.
A new challenge at the Colosseum called The Ultimate Cup Z EX will also be available, where players can "test your mettle and might in an even tougher boss rush".
The Switch 2 Edition also arrives with a host of enhancements, including improved graphics and faster frame rates for both the base game and the DLC.
Kirby and the Forgotten Land – Nintendo Switch 2 Edition + Star-Crossed World launches on August 28.
Players who already own Kirby and the Forgotten Land on Nintendo Switch can purchase a digital upgrade pack to access Star-Crossed World and the new enhancements on Switch 2.
You might also like...Can you believe it? Little Bobby is all grown up in King of the Hill season 14, which takes us back to the beating heart of Middle America. It’s not surprising, considering the revival arrived on August 4 – almost exactly 16 years after the show originally stopped airing in 2009. Set in the fictional town of Arlen, Texas, the new season picks back up with Hank (Mike Judge) and Peggy (Kathy Najimy) as they move back to town after Hank’s retirement, while Bobby (Pamela Adlon) makes a new life for himself as a fully-fledged adult.
The new Hulu show (which is also available on Disney+ in the UK and Australia) has been praised as being charming and a slow-grower, much like the original series when it debuted in 1997. It’s both ridiculous and familiar all at once, managing to incorporate a brand-new world while keeping tabs on everything that made the comedy the animated success story it was. As we know, the world has changed a great deal since 2009 (let alone 1997), and it’s almost strange to see a version of Middle America largely unaffected by politics.
But that’s not the change I think we need to keep an eye on. Cultural, societal and political shifts while King of the Hill has been off air go without saying, yet the biggest change affecting the show itself is the rise of streaming services. It’s not something the comedy has ever had to deal with before, and according to its creators, the viewing landscape has undoubtedly changed what we’re watching in season 14.
King of the Hill season 14’s switch to streaming has undeniably changed what we’re watching, say creators
“Everybody’s trying to figure out how to match audience viewing behavior to the way business models used to work,” showrunner Saladin K. Patterson told The Hollywood Reporter. “So, a microcosm of that is this whole thing that a season is 10 episodes now, and that certainly affects the stories we can tell, but not all in a bad way. In some ways, 10 episodes is creatively more refreshing than having to do 22 episodes. Trust me, the unspoken secret that we always had was it’s hard doing 22 episodes, and by time you get to episode 17, you’re starting to repeat yourself probably. But monetarily speaking, that was a great model. Now for streamers like Hulu and Disney+, it’s a little different.”
He continued, “When we were breaking out the season arc, it certainly made us skip ahead, I think, in a way that we wouldn’t necessarily have skipped ahead in the first 10 episodes under the broadcast model. Think about the Connie (Lauren Tom) and Bobby relationship. We wanted the season to end with them getting together, so that meant, along the way, we had to jump that relationship ahead faster than we would have had we had 22 episodes to get them together. That fit to how we brought the stories and what we had to pick and choose in terms of what we showed.
“The word that comes to the top of my head is it makes you be more ‘efficient.’ It also makes you figure out, assuming I want to get from A to B, what in between has to be shown to make it make sense when we get to B. Versus if I had to get to A to H, I have B, C, D, E, F and G to hit along the way. It makes us have to be a little more selective with what we have our characters experience if we’re trying to get them to the same place by the end of a season.”
Of course, the fact that the King of the Hill reboot is streaming on Disney+ and Hulu rather than one of the other best streaming services around also changes what we’re seeing. In Patterson’s own words, the comedy never hugely pushed the boundaries, but now that season 14 is so heavily tied to family-friendly brands, it’s even more constrained.
“On the one hand, the Hulu execs for the show were fans of the original, so we all were on the same page in terms of wanting to recapture what made the original special,” he explained. “But there were situations where the Disney of it all put some limiters on us that I know Fox would not have, even though we were on Hulu and streaming, which theoretically has broader S&P [standards and practices] than Fox. But for us, staying true to the show meant we weren’t ever going to be too gratuitous with the curse words and things, but we do take some liberties. The characters do curse in ways they can’t curse on broadcast.
“That being said, Hulu still made us go through and pull out all the F-bombs because they don’t want the TV-MA label, and it’s fine.”
You might also likeThe Made By Google event is just around the corner – it’s happening August 20 – but some of the tech launching from it might be further away than we had realized if new leaks are to be believed.
That’s according to WinFuture (machine translated to English from German), which claims that its unnamed sources are telling it that while the Google Pixel 10 lineup will land later this month, the new foldable, earbuds and smartwatch won’t be landing as quickly.
The Google Pixel 10 Pro Fold, Pixel Watch 4, and Pixel Buds 2a (which have all previously been teased by leaks) are instead reportedly set to be available from October 9.
Problems with Google’s supply chain are the reason cited for the delays, though WinFuture says its source didn’t go into specifics.
Google Pixel 10 leak (Image credit: Android Headlines / @OnLeaks)
There’s a whiff of irony here if the leak proves correct, as Google just teased Apple for its tech not being ready in a Google Pixel 10 ad, which pokes fun at the delayed launch of the AI-enhanced Siri. At the time of writing, the new Siri still hasn’t launched in its promised full capacity.
If Google’s hardware is delayed, this will frustrate many who are desperate to upgrade their tech – especially if it is indeed held up by over a month.
As with all leaks though, we should take these details with a pinch of salt.
Apart from the Google Pixel 10, we haven’t had any official confirmation of what hardware we’ll see at Made By Google, and no release dates have been announced for any devices.
Nevertheless, this leak could be one to keep in mind on August 20 so we aren’t too disappointed if some of Google’s new tech is as delayed as it suggests.
You might also likeBritish hi-fi firm Avid has unveiled the new flagship in its EVO speaker range, the EVO TWO, as part of its ongoing 30th anniversary celebrations, which also recently included its first all-new turntable in 12 years.
Avid describes the EVO TWO as an "advancement" of the current EVO THREE speakers, with the new model delivering an enhanced bass response, a wider dynamic range and a more expansive sound stage.
The speakers will be available in the UK, Europe and the US, but they've been made with the US market in mind: according to managing director Conrad Mas, “It is particularly suitable for larger rooms and properties where wall construction may be less rigid, a common scenario in many North American homes."
(Image credit: Avid HiFi)
Avid EVO TWO: key specs and pricing
The EVO TWO features a 28mm (1.1-inch) hand-coated soft dome tweeter, two 160mm (6.3-inch) mid-range drivers and a 250mm (9.8-inch) low-frequency driver with a low end of 28Hz – the addition of the latter means that Avid is referring to this as a 3.5-way speaker system.
That's not something the company has made up – 3.5-way speakers aren't common, but there's a history of great, hefty, floorstanding speakers with hidden bass drivers.
The drive units are mounted on a rigid anodized aluminum baffle and rear plate, and the speakers are available in two finishes: black or gloss white.
As Conrad Mas explains: "The EVO TWO was developed in response to customer feedback requesting a speaker with great presence and low-end articulation, whilst maintaining the clarity and openness characteristic of the EVO series."
The EVO TWO will be available through authorized dealers from September 2025 and the price is £27,995 / €34,995 / $38,995 (about AU$57,589). I'm not sure it'll crash our list of the best stereo speakers at that price, but I'd love to find out with a nice long listening session myself…
You might also like