TechRadar News


I know which TV tech is the best for watching sports, and these 3 sets are my top picks for your next upgrade

Wed, 06/11/2025 - 13:09

If you’re a sports fan like me, you may have had some complaints in the past about your TV when trying to watch sports. Whether it’s reflections while watching a game in the afternoon or blurring during fast motion, something always seems to need tweaking.

Another issue: a TV that appears dim, with a flat-looking image, particularly for field sports such as football and rugby.

Even the best TVs can struggle with sport, but thankfully, there’s a TV tech that’s ideal for sports fans: mini-LED.

Mini-LED: perfect for sports fans

Mini-LED TVs are not only becoming increasingly popular but also more affordable. This tech delivers an improved picture over standard LED by using backlights with smaller LEDs (hence the mini part).

By miniaturizing the LEDs, a higher number can be used, which results in increased brightness. It also allows for a higher number of local dimming zones in the backlight, which helps to boost contrast and improve black uniformity.

Mini-LED TVs can hit significantly higher brightness levels than other TV panel types, with 2,500 to 4,000-nit peaks possible in flagship models. But for sports fans, it’s fullscreen brightness – the level of brightness that the TV can sustain over its entire screen area – that matters most, and here again, mini-LED TVs regularly beat other panel types, including the best OLED TVs.

To provide an example from our TV testing, we regularly measure fullscreen brightness levels of between 580 and 800 nits on the best mini-LED TVs. By contrast, even the brightest OLED TV we’ve tested, the LG G5, topped out at 331 nits in our fullscreen measurement.

I’ve picked three models below that are examples of the best mini-LED TVs for sports.

1. Samsung QN90F

(Image credit: Future)

The Samsung QN90F is the perfect TV for sports. Not only does it deliver exceptionally high brightness levels – 2,086 nits peak and 667 nits fullscreen in Filmmaker Mode – but it has a Glare-Free screen (first introduced in the Samsung S95D OLED) that effectively eliminates reflections, making it perfect for afternoon sports watching.

The QN90F also delivers the superb motion handling that's essential for fast-paced sports. Even for movies, we found we could get smooth motion, with no sign of the dreaded ‘soap opera effect’, by setting both Blur Reduction and Judder Reduction to 3.

The QN90F delivers vibrant colors, strong contrast and realistic textures for a brilliant picture. And when viewing from an off-center seat, there’s little sign of the backlight blooming that results in contrast fade, meaning it’s great for watching in large groups.

The QN90F is a premium-priced TV, with the 65-inch model we tested priced at $2,499.99 / £2,499 / AU$3,499, but if you’re a sports fanatic, it’s worth the investment. Plus, you can expect prices to drop at some point in the near future.

2. Amazon Fire TV Omni Mini-LED

(Image credit: Future)

When I first began testing the Amazon Fire TV Omni Mini-LED, I didn’t anticipate it would be such a good TV for sports. But in its preset Sports mode with Smoothness (Judder Reduction) set to 4 and Clarity (Blur Reduction) set to 10, sports looked impressively smooth. Color was also surprisingly accurate in that mode, which is unusual as I’ve found the Sports mode makes colors look oversaturated and garish on most TVs.

Something unique about the Omni Mini-LED is that it’s nearly ready out of the box for sports. In contrast, I found when testing competing models such as the Hisense U6N and Hisense U7N that more setup was required to get sports looking right.

The Amazon Omni mini-LED is a significantly more affordable TV than the Samsung QN90F, with its 65-inch model often discounted down to $949.99 / £949.99. It may not have the same level of sports prowess as the Samsung QN90F, but it’s great for the money.

3. TCL QM7K / TCL C7K

TCL QM7K (US) & TCL C7K (UK) (Image credit: Future)

This entry is a hybrid as the TCL model name (and specs) will vary depending on which side of the pond you’re on. Either way, it’s the mid-range model in TCL’s 2025 mini-LED lineup.

Both of these TVs deliver exceptional brightness at a mid-range price, with the TCL QM7K and TCL C7K hitting 2,350 nits and 2,784 nits HDR peak brightness, respectively. More importantly, they hit 640 nits and 678 nits HDR fullscreen brightness, respectively – very good numbers for watching sports in bright rooms.

These TVs require some motion setup. Since I'm based in the UK, I tested the C7K, and I found that I needed to tweak the Sports or Standard picture mode by setting Blur Reduction to 3 and Judder Reduction to 6. I also needed to lower the color setting in Sports, as it was oversaturated in its default settings.

Once this was completed, the C7K was a solid TV for sports. It isn’t quite as effective as the two models above, but it is still a very good mid-range option overall. If the QM7K is anything like its UK counterpart, then the story for that model will be the same.

Again, for the 65-inch models of these two sets, you’re looking at paying $999 / £1,099. That’s a similar price to the Amazon Omni Mini-LED, which has the best motion of the two, but with the TCL, you’re getting that extra hit of brightness.

You might also like
Categories: Technology

I just watched the world's first ‘haptic’ trailer for Apple's F1 movie and my fingers are still tingling

Wed, 06/11/2025 - 13:00
  • Apple has just released the world's first 'haptic' trailer for its F1 movie
  • The trailer vibrates your phone in time with action sequences
  • The F1 movie pulls into theaters internationally from June 25

I thought I'd seen every movie trailer gimmick by now, but Apple has just produced a novel one for its incoming F1 movie – a 'haptic' trailer that vibrates your iPhone in time with the on-screen action.

If you have an iPhone (Android fans are sadly excluded from the rumble party), head to the haptic trailer for F1: The Movie to open it in the Apple TV app. You'll then be treated to two minutes of vibrations that's probably also a taste of what it's like to be a celebrity in the middle of a social media storm.

The trailer's 'haptic' experience was actually better than I was expecting. I assumed it would be a simple, one-dimensional rumble that fired up during race sequences, but it's a little more nuanced than that.

To start with, you feel the light vibration of a driver's seat belt being fastened, before the vibrations ramp up for the driving and crash sequences. There's even a light tap to accompany Brad Pitt's character Sonny Hayes moodily bouncing balls against a wall as he ponders coming out of retirement for one last sports movie trope.

Sure, it isn't exactly an IMAX experience for your phone, but if ever there was a movie designed for a haptic movie trailer, it's Apple's F1 movie...

One last Pitt stop

Apple's F1 movie was also the star of its recent WWDC 2025 event, with the livestream opening with Craig Federighi (Apple's Senior Vice President of Software Engineering) donning a helmet before doing a lap around the roof of its Apple Park building.

There's currently no date for the movie to stream on Apple TV+, with the focus currently on its imminent theater premiere. It officially opens internationally on June 27, but there are some special, one-off screenings in IMAX theaters on June 23 (in North America) and June 25 (internationally) for keen fans who signed up on the movie's official website.

The trailers so far suggest that F1 is going to effectively be Top Gun: Maverick set on a race track – and with both movies sharing the same director (Joseph Kosinski) and screenplay writer (Ehren Kruger), that seems like a pretty safe bet. F1 World Champion Lewis Hamilton was also involved to help amp up the realism.

If the haptic-powered trailer has whetted your appetite, check out our interview with Damson Idris, who also stars in F1 and gave us a behind-the-scenes look at what the movie was like to film. Hint: they used specialized tracking cars to help nail the demanding takes flawlessly.

This is what a 1000TB SSD could look like next year: New E2 Petabyte SSD could accelerate transition from hard drives

Wed, 06/11/2025 - 12:33
  • E2 SSDs aim to balance storage performance, capacity, and efficiency
  • New form factor fits rising demand for warm tier data storage
  • High density flash could reduce reliance on hard drives long term

As workloads shift and cold data heats up under AI and analytics demands, the traditional split between high-speed SSDs and cost-effective hard drives is no longer serving every use case.

A new SSD form factor known as E2 is being developed to tackle the growing gap in enterprise data storage. Potentially delivering up to 1PB of QLC flash per drive, E2 drives could become the middle-ground option the industry needs.

StorageReview claims the E2 form factor is being designed with support from key players including Micron, Meta, and Pure Storage through the Storage Networking Industry Association and Open Compute Project.

Solid speeds, but not cutting-edge

E2 SSDs target “warm” data - information that’s accessed often enough to burden hard drives but which doesn’t justify the cost of performance flash.

Physically, E2 SSDs measure 200mm x 76mm x 9.5mm. They use the same EDSFF connector found in E1 and E3 drives, but are optimized for high-capacity, dense deployments.

A standard 2U server could host up to 40 E2 drives, translating into 40PB of flash in a single chassis. StorageReview says these drives will connect over PCIe 6.0 using four lanes and may consume up to 80W per unit, although most are expected to draw far less.

Performance will reach 8-10MB/s per terabyte, or up to 10,000MB/s for a 1PB model. That’s faster than hard drives but not in the same class as top-end enterprise SSDs. E2’s priorities will instead be capacity, efficiency, and cost control.
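As a quick sanity check of those figures, here's a minimal sketch (the capacity and per-terabyte bandwidth numbers come from the article; the function name is my own) that scales the quoted per-terabyte throughput to a full drive and chassis:

```python
def e2_throughput_mb_s(capacity_tb, mb_s_per_tb):
    """Sequential throughput implied by a per-terabyte bandwidth target."""
    return capacity_tb * mb_s_per_tb

DRIVE_TB = 1000       # a 1PB E2 drive
LOW, HIGH = 8, 10     # quoted range, in MB/s per terabyte

print(e2_throughput_mb_s(DRIVE_TB, HIGH))  # 10000 MB/s at the top of the range
print(e2_throughput_mb_s(DRIVE_TB, LOW))   # 8000 MB/s at the bottom
print(40 * DRIVE_TB // 1000)               # 40PB of raw flash in a 40-drive 2U chassis
```

The numbers line up: 10MB/s per terabyte across 1,000TB gives the quoted 10,000MB/s for a 1PB model.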

Pure Storage showed off a 300TB E2 prototype in May 2025 featuring DRAM caches, capacitors for power loss protection, and a flash controller suited for this scale. While current servers aren't yet ready for this form factor, new systems are expected to follow.

It’s fair to say E2 won't replace hard drives overnight, but it does signal a shift. As the spec moves toward finalization this summer, vendors are already rethinking how large-scale flash can fit into modern infrastructure.

First look – Apple's visionOS 26 fixes my biggest Persona problem and takes the mixed reality headset to unexpected places

Wed, 06/11/2025 - 12:00

Apple Vision Pro is unquestionably one of the most powerful pieces of consumer hardware Apple has ever built, but the pricey gadget is still struggling to connect with consumers. And that's a shame because the generational-leaping visionOS 26 adds even more eye-popping features to the $3,500 headset, which I think you'd struggle to find with any other mixed reality gear.

Apple unveiled the latest Vision Pro platform this week as part of its wide-ranging WWDC 2025 keynote, which also introduced a year-OS naming system. For some platforms like iOS, the leap from, say, 18 to 26 wasn't huge, but for the toddler visionOS 2, it was instantly thrust into adulthood and rechristened visionOS 26.

This is not a reimagining of visionOS, and that's probably because its glassiness has already been amply spread across all other Apple platforms in the form of Liquid Glass. It is, though, a deepening of its core attributes, especially around spatial computing and imagery.

I had a chance to get an early hands-on experience with the platform, which is notable because Vision Pro owners will not be seeing a visionOS 26 public beta. That means that while iPhone, iPad, Apple Watch, and Apple TV owners are test-driving OS 26 platform updates on their favorite hardware, Vision Pro owners will have a longer wait, perhaps not seeing these enhancements until the fall. In the interim, developers will, of course, have access for testing.

Since much of the Vision Pro visionOS 26 interface has not changed from the current public OS, I'll focus on the most interesting and impactful updates.

See "me"

(Image credit: Apple)

During the keynote, Apple showed off how visionOS 26 Personas radically moves the state of the art forward by visually comparing a current Persona with a new one. A Vision Pro Persona is a virtual, live, 3D rendering of your head that tracks your movements, facial expressions, and voice. It can be used for communicating with other people wearing the headgear, and it's useful for calls and group activities.

Apple has been gradually improving Personas, but visionOS 26 is a noticeable leap, and in more ways than one.

You still capture your Persona using the front-facing 3D camera system. I removed my eyeglasses and held the headset in front of my face. The system still guides you, but now the process seems more precise. I followed the audio guidance and looked slowly up, down, left, and right. I smiled and raised my eyebrows. I could see a version of my face faintly on the Vision Pro front display. It's still a bit creepy.

(Image credit: Future)

I then put the headset back on and waited less than a minute for it to generate my new Persona. What I saw both distressed and blew me away.

I was distressed because I hate how I look without my glasses. I was blown away because it looked almost exactly like me, almost entirely removing the disturbing "uncanny valley" look of the previous iterations. If you ever wonder what it would be like to talk to yourself (aside from staring at a mirror and having a twin), this is it.

There was a bit of stiffness and, yes, it fixed my teeth even though part of my setup process included a big smile.

It was easy enough to fix the glasses. The Personas interface lets you choose glasses, and now the selection is far wider and with more shades. I quickly found something that looked almost just like mine.

With that, I had my digital doppelganger that tracked my expressions and voice. I turned my head from side to side and was impressed to see just how far the illusion went.

Facing the wall

(Image credit: Apple)

One of the most intriguing moments of the WWDC Keynote was when they demonstrated visionOS 26's new widget capabilities.

Widgets are a familiar feature on iPhones, iPads, and Macs, and, to an extent, they work similarly on Vision Pro, but the spatial environment takes them to, or at least puts them in, new and unexpected places.

In my visionOS 26 demo experience, I turned toward a blank wall and then used the new widget setup to pin a clock widget to the wall. It looked like an actual clock hanging on the wall, and with a flip of one setting, I made it look like it was inset into the wall. It looked real.

On another wall, I found a music widget with Lady Gaga on it. As I stepped closer, a play button appeared in the virtual poster. Naturally, I played a little Abracadabra.

Another wall had multiple widgets, including one that looked like a window to Mount Fuji; it was actually an immersive photo. I instinctively moved forward to "look out" the window. As the vista spread out before me, the Vision Pro warned me I was getting too close to an object (the wall).

I like Widgets, but temper the excitement with the realization that it's unlikely I will be walking from room to room while wearing Vision Pro. On the other hand, it would be nice to virtually redecorate my home office.

An extra dimension

(Image credit: Apple)

The key to Vision Pro's utility is making its spatial capabilities useful across all aspects of information and interaction.

visionOS 26 does that for the web with spatial browsing, which can turn any page into a floating wall of text with spatially enhanced photos called Spatial Scenes.

(Image credit: Apple)

visionOS 26 handles the last bit on the fly, and it's tied to what the platform can do for any 2D photo. It uses AI to create computational depth out of information it can glean from your flat image. It'll work with virtually any photo from any source, with the only limitation being the source image's original resolution. If the resolution is too low, it won't work.

I marveled at how, when staring at one of these converted photos, you could see detail behind a subject or, say, an outcropping of rock that was not captured in the original image but is inexplicably there.

It's such a cool effect, and I'm sure Vision Pro owners will want to show friends how they can turn almost all their photos into stereoscopic images.

Space time

I love Vision Pro's excellent mixed reality capabilities, but there's nothing quite like the fully immersive experience. One of the best examples of that is the environments that you enable by rotating the crown until the real world is replaced by a 360-degree environment.

visionOS 26 adds what may be the best environment yet: a view of Jupiter from one of its moons, Amalthea. It's beautiful, but the best part of the new environment is the control that lets you scroll back and forth through time to watch sunrises and sunsets, the planet's rotation, and Jupiter's dramatic storms.

This is a place I'd like to hang out.

Of course, this is still a developer's beta and subject to significant change before the final version arrives later this year. It's also another great showcase for a powerful mixed reality headset that many consumers have yet to try. Perhaps visionOS 26 will be the game changer.

ChatGPT's 10-hour outage has given me a new perspective on AI – it's genuinely helping millions of people get through life

Wed, 06/11/2025 - 11:35

OpenAI servers experienced mass downtime yesterday, causing chaos among its most loyal users for well over 10 hours.

For six hours straight, I sat at my desk live-blogging the fiasco here on TechRadar, trying to give as many updates as possible to an outage that felt, for many, as if they had lost a piece of themselves.

You see, I write about consumer AI, highlighting all the best ways to use AI tools like ChatGPT, Gemini, and Apple Intelligence, yet outside of work, these incredibly impressive platforms have yet to truly make an impact on my life.

As someone who’s constantly surrounded by AI news, whether that’s the launch of new Large Language Models or the latest all-encompassing artificial intelligence hardware, the last thing I want to do outside of work is use AI. The thing is, the more AI develops at this rapid pace, the more impossible it becomes to turn a blind eye to the abilities that it unlocks.

In the creative world, you’ll stumble across more AI skeptics than people who shout from the rooftops about how great it is. And that’s understandable: there’s a fear of how AI will impact the jobs of journalists like me, and there’s also a disdain for the sanitized world it’s creating via AI slop and robotically written copy.

But the same skepticism often overlooks the positives of this ever-evolving technology that gives humans new ways to work, collect their thoughts, and create.

After six hours of live blogging and thousands of readers reaching out with their worries surrounding the ChatGPT server chaos, as well as discussing what they use the chatbot for, I’ve come away with a completely new perspective on AI.

Yes, there are scary elements; the unknown is always scary, but there are people who are truly benefiting from AI, and some in ways that had never even crossed my mind.

More than a chatbot

An hour into live blogging the ChatGPT outage, I was getting bored of repeating “It’s still down” in multiple different ways. That was when I had an idea: if so many people were reading the article, they must care enough to share their own reasons for doing so.

Within minutes of asking readers for their opinions on the ChatGPT outage, my inbox was inundated with people from around the globe telling me how hard it was to cope without access to their trusty OpenAI-powered chatbot.

From Canada to New Zealand, Malaysia to the Netherlands, ChatGPT users shared their worries and explained why AI means so much to them.

Some relied on ChatGPT to study, finding it almost impossible to get homework done without access to the chatbot. Others used ChatGPT to help them with online dating, discussing conversations from apps like Tinder or Hinge to ensure the perfect match. And a lot of people reached out to say that they spent hours a day speaking with ChatGPT, filling a void, getting help with rationalizing thoughts, and even helping them to sleep at night.

One reader wrote me a long email, which they prefaced by saying, “I haven’t written an email without AI in months, so I’m sorry if what I’m trying to say is a bit all over the place.”

Those of us who don’t interact with AI on a regular basis have a basic understanding of what it can do, often simplifying its ability down to answering questions (often wrongly), searching the web, creating images, or writing like a robot.

But that’s such an unfair assessment of AI and the way that people use it in the real world. From using ChatGPT to help with coding, allowing people who have never been able to build a program an opportunity to do so, to giving those who can’t afford a professional outlet for their thoughts a place to speak, ChatGPT is more capable than many want to accept.

(Image credit: Shutterstock/Rokas Tenys)

ChatGPT and other AI tools are giving people all around the world access to something that, when used correctly, can completely change their lives, whether that’s by unlocking their productivity or by bringing them comfort.

There’s a deeply rooted fear of AI in the world, and rightfully so. After all, we hear on a regular basis how artificial intelligence will replace us in our jobs, take away human creativity, and mark the beginning of the robot uprising.

But would we collectively accept it more if those fears were answered? If the billionaires at the top were to focus on highlighting how AI will improve the lives of the billions of people struggling to cope in this hectic world?

AI should be viewed as the key to unlocking human creativity, freeing up our time, and letting us do less of the mundane and more of enjoying our short time on this planet. Instead, the AI renaissance feels like a way to make us work harder, not smarter, and with that comes an intense amount of skepticism.

After seeing just how much ChatGPT has impacted the lives of so many, I can’t help but feel that AI deserves not only less criticism but more understanding. It’s not all black and white; AI has its flaws, of course it does, but it’s also providing real, practical help to millions of people like nothing I've seen before.

Windows 10 might be at death’s door, but Microsoft hasn’t finished trying to force Bing and Edge on its users

Wed, 06/11/2025 - 10:59
  • Windows 10 has a new update that adds a couple of features
  • Unfortunately, one of these is focused on promoting Bing and Edge
  • Microsoft is pushing its search engine and browser via the calendar panel off the taskbar

Windows 10 has a new update and it actually introduces a new feature – although you might wish it didn’t when you discover what this latest addition is.

That said, the freshly-released update for June (which is KB5060533 for Windows 10 22H2) does come with a tweak that could raise a smile, namely that the clock in the taskbar now displays the seconds when you click to view the time in the calendar panel.

Quite why Microsoft ditched that in the first place is beyond me, but anyway, while that might be a pleasing return of a feature for some, there’s a sting in the tail further down in said calendar flyout – namely that Bing has crept into the mix here.

Not overtly, mind, but as Windows Latest explains, there’s been a change to the bottom section of the calendar panel where normally you’ll see your own events or reminders – if you have any, that is. If you don’t, this used to be blank, but as of the June update you’ll see popular public events and their dates.

Of course, pretty much every day is now dedicated to something – for example, today, June 11, is ‘National Corn on the Cob Day’ (apparently) – and reminders for these events will now appear in the calendar panel.

How does Bing figure in this? Well, if you click on said event, you’ll get information on it fired up in… wait for it… yes, Bing search engine. And what web browser will that appear in? Microsoft Edge, of course. Why promote one service, when you can promote two, after all?

(Image credit: Marjan Apostolovic / Shutterstock)

Analysis: Why risk the besmirchment?

This is a bit sneaky, as it’s far from clear that you’re invoking Bing and Edge when you click something on the calendar flyout out of curiosity. Moreover, this happens regardless of the Windows 10 preferences you’ve chosen for your default search engine and browser, which again is an unwelcome twist.

This is the kind of behavior that negatively impacts Microsoft’s reputation, and it doesn’t help that the tweak isn’t mentioned in the update notes. We’re only told that the June patch provides a “rich calendar experience” (well, it’s making someone rich, or at least a little richer, possibly – but not you).

The kicker here is that Windows 10 is only four months from being declared a dead operating system, with its life support removed (unless you pay for additional security patches for an extra year). So, why even bother making changes like this when Windows 10 is facing its final curtain? Why take any risks at all that could cause reputational damage?

Well, one thought occurs: maybe Microsoft isn’t convinced that floods of people are going to be leaving Windows 10 when the End of Life deadline rolls around in October 2025. After all, an alarmingly hefty number of diehards are still clinging on to the older operating system. In which case, perhaps Microsoft sees the value and worth in still bugging Windows 10 users for the foreseeable, while they stick around either paying for support, or risking their unpatched PC being compromised while refusing (or being unable) to upgrade to Windows 11.

Oh well. At least we’ve got the seconds back on the calendar clock display, hurray.

The Alexa+ rollout is finally happening – here's what early testers love and hate about it

Wed, 06/11/2025 - 10:54
  • Alexa+ is still rolling out to test users, reaching 1 million users after hitting 100,000 back in May
  • Those with access have been taking to Reddit to share their experiences with Amazon's new AI-enhanced voice assistant
  • So far it's garnered mixed reactions, but most users seem to be satisfied

Amazon has been taking its time with the rollout of Alexa+, the company’s new voice assistant with a big AI revamp, but after hitting the 100,000-user milestone in May, Alexa+ has now reached 1 million test users – a huge jump in just a few weeks.

When the company announced Alexa’s first major upgrade in February, it said that Alexa+ would be US-only for now before being rolled out widely, though this date is still unknown.

It’s been difficult to find anyone with early access, but now more users are reporting that Alexa+ has been activated on selected Echo devices. Additionally, users are sharing their first impressions of the new AI features – which have garnered mixed reactions.

Alexa+ activated from r/alexa

Alexa’s new voice causes a divide

After deep diving through different Reddit threads, the main feature that has divided early access users is the new Alexa+ voice, whose major redesign aims to offer a less robotic inflection (similar to ChatGPT) with improved recognition capabilities.

Some users have been impressed by its ability to carry out a straight conversation without having to pre-prompt it, with one user stating that it provides a natural back-and-forth flow.

However, the user also highlighted its similarities with other voice assistants, adding: "It's early days, but it feels a tiny bit closer to what I have with ChatGPT." By the sounds of it, Alexa+ will have to offer something a little different if it wants users to stick around.

So I am actually liking Alexa+ from r/alexa

Another user went even further, describing the new Alexa+ voice as "obnoxious", but they also highlighted the fact that the new assistant has the ability to change its tone: "I asked if she could soften her voice, and she offered to make it more 'feminine' (her words not mine)."

I’ve spent a few days with Alexa Plus from r/alexa

Testers have been generally pleased with how Alexa+ stands above other LLMs with its detailed explanations and clear understanding of voice prompts. One Reddit user was impressed that "[it] understands everything regardless of how you stumble on words."

Comment from r/alexa

Another user shared a similar positive experience, but went on to explain that Alexa+ would fall into the trap of contradicting itself, admitting to it and apologizing when called out.

Comment from r/alexa

Despite a few hiccups with the new Alexa+, the response from testers has been generally positive, which makes us intrigued to see what it will be like once it’s widely available. So far it’s not enough for users to fully subscribe to, but time will tell.

You might also like
Categories: Technology

Hackers are now pretending to be jobseekers to spread malware

Wed, 06/11/2025 - 10:27
  • DomainTools spots hackers creating fake job seeker personas
  • They target recruiters and HR managers with the More Eggs backdoor
  • The backdoor can steal credentials and execute commands

Hackers are now pretending to be jobseekers, targeting recruiters and organizations with dangerous backdoor malware, experts have warned.

Cybersecurity researchers at DomainTools recently spotted a threat actor known as FIN6 using this method in the wild, noting the hackers would first create fake personas on LinkedIn, then create fake resume websites to go along with them.

The website domains are bought anonymously via GoDaddy, and are hosted on Amazon Web Services (AWS), to avoid being flagged or quickly taken down.

More Eggs

The hackers would then reach out to recruiters, HR managers, and business owners on LinkedIn, building a rapport before moving the conversation to email. Then, they would share the resume website which filters visitors based on their operating system and other parameters. For example, people coming through VPN or cloud connections, as well as those running macOS or Linux, are served benign content.

Those that are deemed a good fit are first served a fake CAPTCHA, after which they are offered a .ZIP archive for download. This archive, which the recruiters believe contains the resume, actually drops a disguised Windows shortcut file (LNK) that runs a script to download the "More Eggs" backdoor.

More Eggs is a modular backdoor that can execute commands, steal login credentials, deliver additional payloads, and execute PowerShell in a simple yet effective attack relying on social engineering and advanced evasion.
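As a purely defensive illustration (this is not FIN6's code, and the extension list below is my own assumption about commonly abused lure payload types), a recruiter's mail gateway could flag archives whose members are scripts or Windows shortcuts rather than document formats:

```python
import zipfile

# Extensions commonly abused to disguise droppers as documents
# (illustrative list, not taken from the DomainTools report).
SUSPICIOUS_EXTENSIONS = {".lnk", ".js", ".vbs", ".hta"}

def flag_suspicious_members(archive_path):
    """Return member names in a ZIP archive whose extensions suggest
    a script or Windows shortcut posing as a resume."""
    with zipfile.ZipFile(archive_path) as zf:
        return [name for name in zf.namelist()
                if any(name.lower().endswith(ext)
                       for ext in SUSPICIOUS_EXTENSIONS)]
```

Running this over an attachment would surface a member like a double-extension `.pdf.lnk` file immediately, long before anyone double-clicks it.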

AWS has since come forward to thank the security community for the findings, and to stress that campaigns like this one violate its terms of service and are frequently removed from the platform.

“AWS has clear terms that require our customers to use our services in compliance with applicable laws," an AWS spokesperson said.

"When we receive reports of potential violations of our terms, we act quickly to review and take steps to disable prohibited content. We value collaboration with the security research community and encourage researchers to report suspected abuse to AWS Trust & Safety through our dedicated abuse reporting process."

Via BleepingComputer

You might also like
Categories: Technology

Adobe launches new Express tool for small businesses - and I spoke exclusively to its chief to find out the top 5 things you need to know

Wed, 06/11/2025 - 10:17
  • Adobe Express for Ads is now live - I spoke to Express SVP to find out more
  • Content integration with Google Ads, LinkedIn, TikTok
  • Includes Social Safe Zone to refine ads for each platform

Adobe Express has introduced a new tool designed to help small businesses create and monitor online ads across popular social media channels.

Since Adobe Max London, the design platform has seen a host of new AI updates like Clip Maker and Generate Similar for spinning out new content based on existing images. Now, Express for Ads brings even more options for marketers and small businesses to scale up content production and track performance.

In an exclusive TechRadar Pro interview, I spoke to Adobe Express SVP Govind Balakrishnan to find out what users can expect from the new ad platform - and what else we can look forward to in the coming months.

What’s new in Adobe Express and what is Express for Ads?

This isn’t the first foray into social media content creation for Express, which has long offered the ability to create ad templates, and schedule and publish directly to platforms.

But the platform is giving users a jump-start on scaling up ad creation across core advertising platforms. You can check out the new Express tools now - but here’s what you can expect.

(Image credit: Adobe)
  • 1. Ad platform support

What this new update adds is the ability to create content workflows specifically for Google, LinkedIn, Meta, TikTok, and later down the line, Amazon, too.

As Balakrishnan told me, “What we have now done though is also bring in the tools and capabilities to make it incredibly easy for you to create content that performs well for the critical or prominent ad platforms like Google Ads, LinkedIn, Meta, and more coming in the not too distant future. We've essentially made it easy for you to start with the template or even generate a template, and create content using the best in class tools that we have available in Express.”

  • 2. Better best practices

Alongside expanded ad platform support, users can now also use what Adobe’s calling a Social Safe Zone.

This is effectively a set of best practices to prevent the dreaded rejection of ads - and it’s currently supported for Facebook Stories, Instagram Reels, and LinkedIn Videos. There are plans to support additional formats soon.

“We've added a capability called Social Safe Zone,” said Balakrishnan. “It’s essentially a set of guidelines or guard rails that are incorporated as you're creating your content to ensure that the key visual elements that you have in your content are not obstructed by the various social media platforms. So, it helps you essentially create content to ensure that the visual elements that you care most about are front and centre, and are optimized to be best-performing for each of the social media platforms that you're targeting.”

  • 3. A one-stop shop for ad creation

In a bid to improve the creative workflow, Adobe is now letting users play in the Express sandbox without having to move out to other apps.

Balakrishnan calls it a one-stop shop, adding: “We have made it incredibly easy to publish straight to the ad platforms, so we have made it. Express can establish a connection with Google Ads, LinkedIn Ads, and Tiktok. You can go from Express directly to each of these ad platforms.”

Of course, Adobe Express has long offered the option to resize templates, but in this latest update, the company has gone further.

“We have now ensured that [Resize] works for these ad platforms,” Balakrishnan told me. “Essentially, you start with the template, you have Safe Zones to ensure that your content looks great for each, and now you have the ability to publish straight into these ad platforms. So, Express becomes this one-stop shop where you start with an intent, you create your content, you publish to various platforms, and you get your insights back right there. You don't have to jump between various tools, various platforms.”

It’s an area that Balakrishnan is most excited for, telling me, “I am most excited about the fact that you can create for a specific ad platform and resize seamlessly for other platforms. As we all know, most marketing marketers are trying to reach multiple platforms and struggle to do that because they have to recreate a lot of their content over and over again for multiple app platforms. The fact that they can fairly quickly and easily create the best possible content for each of those ad platforms, I think, is incredibly exciting.”

  • 4. Improved metrics and tracking

One of the best updates coming with Express for Ads, I think, is the ability to monitor ad performance across supported platforms, delivering much-needed feedback to refine future ideation and creation. With that in mind, Express now includes Metricool and Bitly add-ons.

Expanding on this, Balakrishnan said, “We've added the ability to get metrics and analytics on the content and how the content is performing through integrations with Metricool and Bitly. These are two recent integrations that we have launched where, once you post your content to these platforms, you now have the ability to get feedback on how your content is performing, in addition to obviously seeing how it maps to current trends and current fads that may be in play.”

And it turns out Adobe might’ve underestimated just how many users are welcoming this update.

Balakrishnan said, “I'm finding that a number of our users are excited about the Metricool integration. I don't know if we had fully realised how compelling this could be, but as we have gotten deeper into the integration and as we have engaged with more of our user base, it has become clear that it is an integration that a large number of our users are incredibly excited about because they then get the insights from how their content is performing right there in the tool without having to leave the tool and go somewhere else.”

  • 5. The future of Adobe Express

As Express continues to evolve, I couldn’t resist finding out what users can expect later down the line. Here, Balakrishnan teased a couple of future updates.

“The next stage that we are incredibly excited about, and I know it's not necessarily related to the ads creation scenario today, but it will be relevant in the not too distant future is the ability to completely reimagine creativity or the opportunity to completely reimagine creativity through agentic AI. The idea there would be that you just enter a prompt and you get you start with a blank screen, you enter a prompt, and you interact through a prompt to essentially generate full-fledged designs from scratch. We are now making it even easier for anyone to come in and describe what's in their mind's eye and have that show up on a digital screen in seconds.”

That will come as little surprise for followers of Adobe, where agentic AI is fast becoming de rigueur across the company’s apps. But it’s not the only area where Balakrishnan envisions AI advancements. He confirmed he’d like to see “more advancements in the realm of generative AI” for Express users who don’t want to see a lowering of the barrier to entry via agentic AI.

And, as you’d expect from a platform that integrates across the Creative Cloud suite, the team is looking at further integrations with Adobe Acrobat.

Balakrishnan explained: “We are seeing an increasing trend, so to speak, where creativity and productivity are coming closer together and we see some incredible opportunities to leverage the very broad base of Acrobat users and give them the tools and capabilities to add more richness to PDF and Acrobat documents. And we're doing that by building seamless integrations and workflows from Acrobat into Express, where you're if you're in Acrobat, if you're in [Acrobat] Reader, if you're viewing a regular PDF document, we are now giving you the ability to edit images, generate images to stylize your document all from within Acrobat.”

You can find out more in Adobe's latest blog.

Want to start creating your next ad campaign now? Check out the new Adobe Express for Ads right now. It's free to use with plans for teams and business users, and you'll also find it included as part of an add-on alongside other Adobe apps like Photoshop.


You might also like
Categories: Technology

Apple’s new Liquid Glass UI design unveiled at WWDC 2025 is nothing new - I can see right through it

Wed, 06/11/2025 - 10:00

The old saying, “if you wait long enough, everything comes back into style eventually,” is usually attributed to the fashion industry, but it seems to apply to pretty much anything, especially mobile phone interface design.

So, while my younger colleagues are getting all hot and bothered about Apple’s new Liquid Glass design for its operating systems, like iOS, macOS, iPadOS, and tvOS, forgive me if I can’t help but be a little less enthusiastic, because I’ve seen all this before.

The crux of the new Liquid Glass design is that the “material” (an odd choice of words from Apple to describe something that’s purely digital) used for the background to menus, and out of which icons are “crafted”, behaves like glass would in the real world, if it also flowed like a liquid.

That obviously means you can see through it, which is what people are getting very excited about.

Those of us who have been using tech for a while now will realize that we’ve been here before. Back in 2007, Microsoft introduced the Aero design in Windows Vista, which contained menu borders that had a level of transparency to them and icons with rounded edges.

This transparent look and feel persisted into Windows 7, which had a transparent taskbar, but it was eventually dropped in favour of the more 2D and square-looking Windows 8 interface.

Microsoft has recently brought back transparency in Windows 11.

Windows Vista introduced transparency to the borders of its windows.

The fundamentals of design

It all comes back to the fundamentals of design and what companies are trying to achieve with a mobile phone interface.

When iOS first came out in 2007, skeuomorphism was the order of the day, which means the icons and interface elements tried to resemble real-world objects as much as possible.

This had the advantage of making them look accessible, but it also felt unnecessarily fussy, especially since we were dealing with digital images, which didn’t need to conform to the same laws as physical objects.

And so a conflict was born. A kind of design war broke out between those who thought that interface design should reflect the real world as closely as possible and those who preferred to think of design as functional first: interface design should be legible, easily accessible, and practical before all else.

Eventually, the latter group won out, but it took a long time and required the death of one of skeuomorphism's strongest advocates.

The old skeuomorphic design of iOS. (Image credit: OldOS - Zane Kleinberg)

Farewell Steve Jobs

Apple’s then CEO, Steve Jobs, was a huge fan of the skeuomorphic approach to design. That’s why the icon for the Notes app in iOS looked like actual note paper, for instance. It's also why the Calculator looked like a real calculator, and the Calendar app looked like a real calendar.

On the iPhone, there were rounded, glossy edges to all the icons, with shadowing and a slight 3D effect thrown in.

Sadly, Steve Jobs passed away in 2011, and Apple’s other leading design light, Jony Ive, was given free rein to come up with something different for iOS 7 in 2013. What Ive produced perhaps went a little too far in the other direction; it was best described as very, very flat in comparison to what had come before.

In iOS 7, the 3D skeuomorphic elements were banished in favour of, well, not quite 2D, but a very flat-looking design with very bright, colorful icons that stood out a mile from the phone.

Ive, who was responsible for Apple’s increasingly minimalist approach to product design, had a very strong design aesthetic, and it showed.

iOS 7 introduced brave new visual design elements.

All clear on the Apple front

You can view Apple's new Liquid Glass as the final rejection of Ive’s iOS 7 vision for the iPhone.

Being able to see through everything is very futuristic, and I’m sure it works great in sci-fi movies, TV shows, and in AR headsets like Apple Vision Pro, but on a small device in my hand, it doesn’t increase legibility at all. In fact, it makes text harder to read.

As somebody who already has to put on reading glasses to do most things on my iPhone, this isn’t going to help. And what about all the people who have other kinds of visual impairment?

At WWDC 2025, Apple was very keen to show off how the buttons that cover video playback in the new Liquid Glass design are now transparent, so they don’t distract from the video you’re watching. Well, that’s great, but what if you want to actually read the text that’s written on or next to the buttons?

Even worse, the new “all clear” style (shown below), which drains all color from your icons so they all look like they’re made from glass, is very stylish, but is it functional?

Will it make it easier to find the app you’re looking for or just harder? I’ll have to reserve my final judgement until I’ve tried the finished version of iOS 26, but I think I already know my answer - no, it won’t.

Apple's new "all clear" style in Liquid Glass drains the color from all your icons. (Image credit: Apple)

Give it another 15 years

Jony Ive, the designer’s designer, knew what he was doing with iOS 7 when he introduced such a bold, confident new look. Perhaps it was a bit too much of a shock to the system for some people, but the fact that Google instantly copied it in Android is a tribute to how it changed mobile phone interface design for the better.

Since then, Apple has been picking away at Ive’s original vision, which has been easier to do since he left the company, and watering it down with each new iOS release, but now it has really thrown it in the trash with the new Liquid Glass.

So, in 2026, we’re back to transparency, darker tints, rounded corners, and 3D effects.

Remember, these things run in cycles. Give it another 15 years and I think we’ll be back to bold, bright colors and flatter icons. Mark my words.

You may also like
Categories: Technology

Resident Evil Requiem will feature multiple viewpoints, letting you switch between first and third-person frights

Wed, 06/11/2025 - 10:00
  • Resident Evil Requiem will feature both a first-person and third-person camera
  • Players will likely be able to switch between the two
  • This was confirmed by a behind-closed-doors demo

TechRadar Gaming can confirm that the upcoming horror game Resident Evil Requiem will feature both first-person and third-person viewpoints.

This information comes from a behind-closed-doors Resident Evil Requiem demo shown to the press at Summer Game Fest 2025, which ended with footage of the player entering the menus and showing a toggle button that changes the perspective between first and third-person gameplay, followed by a glimpse of the latter. As a result, it seems as though players will be able to readily choose which to use via an option in the settings menu.

The main entries in the Resident Evil series have traditionally been played from a third-person perspective, though the recent Resident Evil 7: Biohazard and Resident Evil Village switched up the formula by introducing a more intimate first-person camera.

A third-person option was eventually added to Resident Evil Village as part of the post-launch Winters' Expansion, and it can be toggled via a new 'View Mode' option in the camera settings menu. It looks like it will be similarly implemented in Resident Evil Requiem, though I expect that more information on how it works will be revealed in the build-up to launch.

Resident Evil Requiem was first revealed as part of the Summer Game Fest 2025 main show, with a gripping trailer that introduced us to protagonist Grace Ashcroft. An FBI agent, Ashcroft will investigate a series of strange killings connected to a sinister hotel where her mother was murdered eight years ago.

The game is set to release on February 27, 2026, for Xbox Series X and Series S, PlayStation 5, and PC.

You might also like...
Categories: Technology

Home theater fans rejoice! The Apple TV 4K's next free update will give you a much-wanted option for elite audio systems

Wed, 06/11/2025 - 09:29
  • Passthrough audio option is coming to tvOS 26
  • It's in the audio framework, but developers need to use it
  • tvOS 26 is expected later this year

Apple's tvOS doesn't get quite as much attention during WWDC as bigger-selling products such as the iPhone, which means it usually takes a few days for some of the most interesting Apple TV news to emerge – even beyond the top five tvOS 26 features you need to know about. And that's the case this year, because an important audio change is coming in tvOS that wasn't mentioned during the event.

AppleInsider reports that there's a new reference on the Apple Developer documentation for AVFAudio that has a "passthrough" setting. AVFAudio is Apple's framework for playing, recording and processing audio on tvOS as well as iOS, macOS and watchOS.

AppleInsider has apparently checked in with Apple, and Apple has confirmed that yes, there will be passthrough in the tvOS upgrade. And that's a big change for serious audio equipment.

It doesn't matter what's at the other end of your HDMI: Apple TV processes the audio from apps. (Image credit: Apple)

Why passthrough was wanted by hardcore home theater fans

When you use a streaming app on Apple TV, whether it's Apple's own Apple TV+, Netflix, Prime Video or any one of the best streaming services, the audio decoding and initial processing is handled by your Apple TV before being output to your system – so even if you have really high-end home theater hardware at the end of your HDMI, you're stuck with Apple's processed audio as your input.

Apple TV's audio processing is really good: I often burst into a grin when there's a particularly impressive bit of Atmos action. But higher-spec hardware is likely to be even better, so it's good to have the option to let that hardware handle 100% of the sound processing rather than leave it to Apple.

This isn't limited to Apple TV; the same framework handles the audio on iPhones, Macs and even Apple Watches. But of course it's on the Apple TV where it's likely to matter most, so you can pass the raw audio data to a beefy AV receiver.

For now, though, passthrough is only a possibility: it may be in the framework, but it's up to developers to implement it in their apps, or for Apple to make it selectable in the Apple TV settings. So far, at least, the latter option isn't in the developer beta of tvOS 26, but it would perhaps be the ideal end result.

You might also like
Categories: Technology

How to tackle shadow IT when AI is everywhere

Wed, 06/11/2025 - 09:20

You’re no doubt familiar with shadow IT — the practice of employees using software, applications and other tech tools that aren’t sanctioned by IT. And if IT doesn’t know about something, they can’t regulate it or defend against it. Clearly, this creates a massive security risk and headaches for both IT and security.

Now, with generative AI tools flooding the workplace, that headache is turning into a full-blown migraine.

The rush to adopt AI productivity tools has opened a Pandora's box of security vulnerabilities that most organizations are completely unprepared for. These tools are expanding existing visibility gaps while simultaneously creating a constant stream of new ones.

Invisibility — in plain sight

“Patchy” (pun very much intended) visibility is only part of the problem. There’s a widespread awareness of the threat posed by unregulated tools, but there’s a startling gap in translating that awareness into concrete readiness. According to Ivanti’s 2025 State of Cybersecurity Report, 52% of IT and security professionals view API-related and software vulnerabilities as high- or critical-level threats. So, why do only 31% and 33%, respectively, consider themselves very prepared to address these risks? It’s the difference between theory and practice.

Making that shift to readiness is easier said than done, given the widespread and elusive nature of shadow IT practices. Software that employees use, including shadow IT, ranks as the number one area where IT and security leaders report insufficient data to make informed security decisions — a problem affecting 45% of organizations.

Let that sink in: nearly half of security teams are operating without visibility into the applications running within their own networks. Not good. At all.

The Gen AI multiplier effect

Generative AI has created a perfect storm for the proliferation of shadow IT. Employees eager to boost productivity are installing AI tools with little thought to security implications, while security teams struggle to keep pace.

The ubiquity and ease of access to these tools mean they can appear in your environment faster than traditional software ever could. A text summarization tool here, a code generation platform there — each creating new pathways for data leakage and potential breaches.

What makes this particularly dangerous is how these tools operate. Unlike traditional shadow IT applications, which often store data locally, generative AI solutions typically send corporate data to external cloud environments for processing. Once your sensitive information leaves your controlled environment, all bets are off.

The root of the shadow IT problem…

This isn’t a manifesto on regulating employee behavior. It’s genuinely understandable, at least to me, why employees would seek out tools that help them do their work more efficiently. That is to say, shadow IT isn’t always done out of malice, but rather because something is lacking in the organizational structure.

Specifically, data silos between security and IT teams create perfect conditions for shadow IT to flourish.

These divisions manifest in a few different ways, for example:

  • Security data and IT data are walled off from each other in 55% of organizations
  • 62% report that siloed data slows security response times
  • 53% claim these silos actively weaken their organization's security posture

When IT lacks visibility into security threats and security lacks visibility into IT operations, shadow IT thrives in the gaps between.

…and how to solve it

Addressing the shadow IT challenge, particularly in this AI-centric era, requires a totally different approach from what IT and security teams might have tried in the past. Instead of attempting to eliminate shadow IT entirely — a likely futile effort — organizations need to build frameworks that provide visibility and control.

Breaking down those data silos that separate IT and security teams is a critical first step. This means implementing unified platforms that provide comprehensive visibility across the entire attack surface, including shadow IT and the vulnerabilities it creates.

With proper integration between security and IT data, organizations can move from reactive firefighting to proactive defense. They can identify unsanctioned AI tools as they appear, assess their risk levels and implement appropriate controls — all without hampering the productivity gains these tools offer.

Of course, dismantling silos is an oversimplified directive. There needs to be an ongoing culture shift where employees no longer feel the need to engage in shadow IT practices covertly. Employers should listen to employees about what tech-related barriers they face. Employee-preferred tools should be evaluated for potential inclusion. Employees must be trained on risks and understand how their choices directly impact business outcomes.

Micromanagement is certainly not the solution, nor is AI itself the problem. AI is a reality of our current workplace, and a lot of good stems from many of the new AI tools. The problem comes when employers fail to dismantle silos, tackle visibility gaps, bring shadow IT into the open and proactively prepare for the attack vectors that come with these tools.

Ignoring the problem will not make it go away. As generative AI continues to gain prevalence and capability, the problem will only worsen.

We've featured the best online cybersecurity course.

This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro

Categories: Technology

Xreal just teased its Android XR specs, and they boast a massive upgrade over its other AR smart glasses

Wed, 06/11/2025 - 09:18
  • Xreal detailed Project Aura at AWE 2025
  • It will have a massive 70-degree field of view
  • It will also be tethered to a spatial computing puck running Android XR

While we'd suspected that Android XR would be a key component of Google I/O 2025, we couldn’t have predicted some of the partners Google announced that it would be working with, which include the gadget makers at the top of our best smart glasses list: Xreal.

As promised at I/O, Xreal has taken to the Augmented World Expo 2025 stage in Long Beach, California, to provide us with new details on its Project Aura glasses, and the device is shaping up to be an impressive one.

For me, the most important detail is that the device will apparently boast a 70-degree field of view, which is absolutely huge.

The 50-degree field of view of the Xreal One already felt large, and the 57-degree Xreal One Pro is a noticeable step up size-wise (you’ll need to wait a little longer for our full review). 70 degrees will be massive.

The field-of-view upgrade suggests – though Xreal hasn’t confirmed this yet – that the Aura specs will borrow the Xreal One Pro’s new optic engine (and perhaps even upgrade it further), including its flat-prism lenses, as one of its key advantages is that it enables a greater FOV.

This optic system comes with other upgrades as well, which could help to make the Android XR glasses much easier to use all day as you walk around.

(Image credit: Google)

Another interesting tidbit is that these specs – like Xreal’s other glasses – are tethered, meaning they’re powered by an external device which they’re connected to via a cable.

We already knew Aura wouldn’t be standalone, but Xreal has revealed that the new compute device shipping with Aura won’t just be your standard phone, or the Xreal Beam Pro.

It’s something all-new, running Android XR, and powered by a Snapdragon chip from Qualcomm – which seems to be making all of the Android XR processors.

Xreal isn’t abandoning its own chipset however. Aura itself will sport an upgraded X1S chip that’s a “modified version of X1 with even more power under the hood.” The X1 chipset is what’s inside the Xreal One and Xreal One Pro specs.

A new X1 chipset is coming (Image credit: Future)

Xreal has yet to confirm if it will sell the puck and glasses separately, but if it does then I'll be interested to see what that decision means for its approach to the upgradability of its tech going forward.

At the moment you can pick up a pair of Xreal glasses and a Beam spatial computer as a bundle, and then upgrade either or both over time. Newer glasses offer better visuals and audio if that’s your main concern, while the new Beam Pro offers improved processing and spatial features.

This is a less wasteful and generally more affordable design philosophy, as you only need to replace the one component that’s holding you back. However, as I mentioned, Xreal has yet to confirm if it will sell the puck and glasses separately. Its current wording calls Project Aura “one solution made up of the wired XR glasses and a dedicated compute device” suggesting they might also be one complete, non-upgradable package.

As for a launch date, Xreal is still keeping us mostly in the dark, though it has said Project Aura is coming in 2026, so we hopefully won’t be waiting for too long.

Xreal One Pro dead on arrival?

The Xreal One Pro (Image credit: Future)

Following this announcement, some fans are starting to wonder if their Xreal One Pro purchase was a good idea – whether, had they waited a year or so longer, they could have snagged an Xreal Android XR setup instead.

I’ll concede that for some Xreal One Pro purchasers waiting may indeed have been the better approach, but I think others can rest easier, as while the Aura and One Pro will likely share similarities I suspect they’ll be very different devices.

For a start, while Xreal’s glasses are often at their best with the Beam Pro add-on, it isn’t required. You can use the specs with a range of USB-C compatible devices, and even many HDMI devices with the right cables.

Based on Xreal’s descriptions so far Project Aura isn’t just a wearable display for entertainment; it’s a complete spatial computing package with all the nifty Android XR features we’ve been shown.

This won’t just mean that Aura’s purpose is different from Xreal’s other glasses; I expect its price will be very different too.

Right now an Xreal Beam Pro and Xreal One Pro would cost you $848 / £768 (before factoring in any bundle or limited-time discounts). For what sounds like it will be greatly improved hardware I imagine Project Aura will cost closer to $1,000 / £1,000, if not more.

The Xreal Beam Pro (Image credit: Xreal)

And remember, you can buy the Xreal One Pro separately for just $649 / £579.

Better tech is always on the horizon at any given time, but this (for now) doesn’t look set to be a repeat of the Meta Quest Pro / Meta Quest 3 fiasco, which saw the latter, far superior product launch at less than half the price of the former.

Instead Project Aura looks set to be more of a diagonal shift, with new hardware boasting better specs and a different purpose.

If you want to wait for Project Aura you absolutely should, as you might also be tempted by any of the various Android XR devices, Meta smart glasses, and new Snap Spectacles set to launch in the next year or so. But choosing not to wait won't be a bad option either – the Xreal One Pro certainly isn’t going to turn out to be dead on arrival, as some might fear.

You might also like
Categories: Technology

Why Process Intelligence is vital for success with Agentic AI

Wed, 06/11/2025 - 09:09

The pace of change in AI has felt bewilderingly fast over the past 12 months, with new technologies emerging and seemingly being usurped on a weekly basis. For decision-makers, this can be a daunting challenge. However, the encouraging news is that AI development is largely iterative: each new tool builds on the foundations laid by its predecessors.

This has brought us to the next phase of the AI revolution: Agentic AI. This latest phase centers on autonomous software agents, grounded in Generative AI, that can make decisions and take action independently of human input. According to Gartner, by 2028, 33% of enterprise software applications will include AI agents, and 15% of work decisions will be made autonomously. Forward-thinking organizations are already using AI agents to uncover business value and achieve goals such as accelerating software development.

Yet, just as Generative AI needs training data to be truly effective, AI agents need a clear understanding of business context. How can leaders ensure that AI agents comprehend how their businesses operate? The answer lies in Process Intelligence (PI). PI takes data from systems such as Enterprise Resource Planning (ERP) and Customer Relationship Management (CRM) software to track how events progress within an organization. It creates a dynamic, living digital twin of business operations, offering a holistic view of how work gets done. This makes it a foundational tool for implementing AI in ways that actually deliver value.

Why AI agents?

Agentic AI refers to autonomous ‘agents’ that can handle complex tasks independently. Many agents are armed with access to Large Language Models (LLMs) along with business-specific data (for instance, knowledge base articles or order information). Employees can interact with many of them using natural language, asking them to rapidly analyze business data, work out what the next step of a process should be, and even take follow-on actions automatically.

AI agents are not, however, a one-size-fits-all technology panacea that can solve every business problem right out of the box. For AI agents to succeed, they must be built to solve specific problems and they need insight into how the business really functions.

This is where PI plays a critical role. It gathers fragmented data from across dozens or hundreds of business processes, offering AI agents a ‘common language’ to understand events such as invoicing and shipping, and providing high-quality, timely data that can enable AI agents to make better decisions.

With a ‘digital twin’ of business operations in hand, AI agents can analyze how processes truly impact each other across the whole business, and uncover opportunities to drive efficiency.
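To make the idea concrete, here is a deliberately tiny, hypothetical sketch of the kind of analysis PI tools perform under the hood: given an event log of (case ID, activity, timestamp) records pulled from systems like ERP or CRM, compute the average waiting time for each hand-off between activities and flag the slowest one as a bottleneck. This is an illustration of the general technique only, not any vendor's actual API, and the invoice data is invented.

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical event log: (case_id, activity, ISO timestamp)
events = [
    ("inv-1", "created",  "2025-06-01T09:00"),
    ("inv-1", "approved", "2025-06-01T17:00"),
    ("inv-1", "paid",     "2025-06-03T09:00"),
    ("inv-2", "created",  "2025-06-02T09:00"),
    ("inv-2", "approved", "2025-06-02T10:00"),
    ("inv-2", "paid",     "2025-06-05T10:00"),
]

def bottleneck(events):
    """Return (transition, list of waits in hours) with the longest average wait."""
    # Group events by case, then sort each case's steps chronologically
    cases = defaultdict(list)
    for case_id, activity, ts in events:
        cases[case_id].append((datetime.fromisoformat(ts), activity))
    # Collect waiting times for every consecutive activity pair
    waits = defaultdict(list)
    for steps in cases.values():
        steps.sort()
        for (t0, a0), (t1, a1) in zip(steps, steps[1:]):
            waits[(a0, a1)].append((t1 - t0).total_seconds() / 3600)
    # The transition with the highest mean wait is the bottleneck
    return max(waits.items(), key=lambda kv: sum(kv[1]) / len(kv[1]))

transition, hours = bottleneck(events)
print(transition, sum(hours) / len(hours))  # approved -> paid averages 56 hours here
```

An AI agent sitting on top of data like this could answer a natural-language question such as "where are invoices getting stuck?" by running exactly this kind of query, which is the "business context" the article argues agents need.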

Putting AI agents to work

Businesses are already creating AI agents built to harness the power of PI, and seeing tangible results. One customer worked with Celonis to develop an AI-driven inventory to track parts and materials. Within two months, the AI tools had identified that many purchase orders were being raised for spare parts that were already in stock, and highlighted that a significant portion of spare parts were over eight years old.

An additional AI Agent uses the inventory to optimize spare part availability for plant engineers, with users able to describe the parts they need using technical descriptions or common industry terms, eliminating the need for exact part numbers.

In another case, PI and Agentic AI helped a company double the speed of software delivery by improving predictability and cutting stage waiting times by 30–40%. AI-driven tools pinpointed bottlenecks, offered predictive alerts, and suggested mitigations ranked by potential impact. Leaders could ask simple, natural-language questions to uncover delays and risks, using an AI copilot that translated complex data into clear, actionable insights.

Why AI needs PI

Agentic AI holds the potential to revolutionize enterprise operations, but its effectiveness depends on the quality of data agents have access to. PI ‘bridges the gap’ to provide AI with the input it needs, offering oversight of the totality of the business’s processes. PI is thus a vital tool for optimizing enterprise processes.

Enterprise customers who try to improve their processes using AI without the vital insights from PI all too often fail. In fact, 89% of the business leaders we recently surveyed globally said that giving AI the context of how their business runs is crucial if it is to deliver meaningful results.

That is why we believe there can be no effective enterprise AI without PI. Process Intelligence is integrated into live systems, so even when systems change, it offers AI agents real-time access to the current state of processes. Think of it like the mapping data for a GPS.

Without a map, you’re just following a line on a blank screen. You won’t know why you’re turning left, and it would be all too easy to take a wrong turn. Similarly, Process Intelligence gives AI agents the essential context to navigate business complexity reliably.

A smarter future

Agentic AI is set to become increasingly central to enterprise success. But its impact depends on access to timely, accurate, and contextual data. Process Intelligence provides this foundation—enabling AI agents to drive meaningful change across business functions, from software development to finance.

The message is clear: Agentic AI needs the right data, and the right context. That’s exactly what Process Intelligence delivers.

We've featured the best AI chatbot for business.

This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro

Categories: Technology

Texas government reports 300,000 plus crash records stolen in cyberattack

Wed, 06/11/2025 - 09:04
  • The Texas Department of Transportation confirmed suffering a cyberattack
  • A threat actor used compromised credentials to access the system
  • Hundreds of thousands of names, addresses, and other PII were exposed

The Texas Department of Transportation (TxDOT), a government agency responsible for overseeing the construction, maintenance, and operation of the state's transportation system, suffered a cyberattack and lost sensitive personal records.

The agency confirmed the news in a brief notification published on its website earlier this week.

According to the announcement, a threat actor used a compromised government account to access TxDOT’s systems. After spotting “unusual activity” in the Crash Records Information System (CRIS), the agency investigated further and found that the attacker had accessed and downloaded nearly 300,000 crash reports.

The data stolen in the breach includes full names, postal addresses, driver’s license numbers, license plate numbers, car insurance policy numbers, and other information (such as sustained injuries or crash description).


TxDOT said it immediately disabled access from the compromised account, and notified affected individuals. They have been warned to be wary of potential phishing and social engineering attacks, themed around car crashes. It also said it was implementing “additional security measures for accounts” to prevent similar incidents in the future, but did not detail what these measures are.

This type of information is quite useful for cybercriminals. They can use it to send personalized phishing emails, themed around something the victim is familiar with and has interacted with in the past. Such phishing attacks are more successful than random, generic ones, and can lead to identity theft, wire fraud, malware attacks, or even ransomware.

Government agencies are a popular target, mostly because they often hold sensitive citizen information. In early April 2025, the Florida Department of State suffered a data breach that may have exposed the information of 500,000 people, and in August 2024, National Public Data confirmed it was hit by a data breach that put millions of users at risk.

At press time, no threat actors claimed responsibility for this attack.

Via BleepingComputer

You might also like
Categories: Technology

NYT Strands hints and answers for Thursday, June 12 (game #466)

Wed, 06/11/2025 - 09:00
Looking for a different day?

A new NYT Strands puzzle appears at midnight each day for your time zone – which means that some people are always playing 'today's game' while others are playing 'yesterday's'. If you're looking for Wednesday's puzzle instead then click here: NYT Strands hints and answers for Wednesday, June 11 (game #465).

Strands is the NYT's latest word game after the likes of Wordle, Spelling Bee and Connections – and it's great fun. It can be difficult, though, so read on for my Strands hints.

Want more word-based fun? Then check out my NYT Connections today and Quordle today pages for hints and answers for those games, and Marc's Wordle today page for the original viral word game.

SPOILER WARNING: Information about NYT Strands today is below, so don't read on if you don't want to know the answers.

NYT Strands today (game #466) - hint #1 - today's themeWhat is the theme of today's NYT Strands?

Today's NYT Strands theme is… Gone fishing

NYT Strands today (game #466) - hint #2 - clue words

Play any of these words to unlock the in-game hints system.

  • WISE
  • CLONE
  • BEEN
  • SCENE
  • BONES
  • KERB
NYT Strands today (game #466) - hint #3 - spangram lettersHow many letters are in today's spangram?

Spangram has 9 letters

NYT Strands today (game #466) - hint #4 - spangram positionWhat are two sides of the board that today's spangram touches?

First side: left, 4th row

Last side: right, 4th row

Right, the answers are below, so DO NOT SCROLL ANY FURTHER IF YOU DON'T WANT TO SEE THEM.

NYT Strands today (game #466) - the answers

(Image credit: New York Times)

The answers to today's Strands, game #466, are…

  • HOOK
  • REEL
  • LURE
  • SWIVEL
  • SCALE
  • SINKER
  • BOBBER
  • SPANGRAM: TACKLE BOX
  • My rating: Easy
  • My score: Perfect

I love the use of the phrase “gone fishing” instead of just saying I’m unavailable. It harks back to the golden days when absent people really had packed up shop and gone fishing. I use it on my office email when I go on vacation and people always ask me how my fishing trip was. 

There was no second-guessing with today’s search, which very much did what it says on the tin, complete with every angling word you’d expect. My search for words began by finding "box" and then "tackle", which I put together to become today's spangram TACKLE BOX. 

And from there? Well, it was as easy as shooting fish in a barrel – which is not technically fishing. Or advisable. Where’s Bob today? Oh he’s gone shooting fish. 

How did you do today? Let me know in the comments below.

Yesterday's NYT Strands answers (Wednesday, June 11, game #465)
  • REVIVAL
  • DECO
  • BAROQUE
  • BRUTALIST
  • CLASSICAL
  • SPANGRAM: ARCHITECTURE
What is NYT Strands?

Strands is the NYT's not-so-new-any-more word game, following Wordle and Connections. It's now a fully fledged member of the NYT's games stable, having been running for over a year, and can be played on the NYT Games site on desktop or mobile.

I've got a full guide to how to play NYT Strands, complete with tips for solving it, so check that out if you're struggling to beat it each day.

Categories: Technology

Foundation season 3 trailer reveals the Apple TV+ show's most dangerous villain yet – and an unexpected alliance

Wed, 06/11/2025 - 09:00
  • Apple has unveiled the official trailer for Foundation season 3
  • The show's latest teaser puts its new, incredibly dangerous villain front and center
  • His arrival in the Apple TV+ sci-fi series will cause two former foes to join forces

Foundation season 3's official trailer has made its debut online – and it not only reveals the show's new villain in all his terrifying glory, but also indicates that two former foes are about to form an unexpected alliance.

The Apple TV Original returns to our screens on Friday, July 11, and with exactly one month to go (at the time of publication) until it does so, a new teaser for one of the best Apple TV+ sci-fi shows around has certainly raised my excitement levels for its next installment.

So, what does the latest trailer for Foundation's third season tell us about its story? It confirms that The Mule, one of the most famous antagonists in Isaac Asimov's namesake book series, will be the primary villain of this season's 10-episode arc.

That won't come as a shock to fans of the critically acclaimed program. Indeed, The Mule's arrival was teased in last season's finale – read my Foundation season 2 ending explained article for more details. He's also a hugely significant character in Foundation & Empire and Second Foundation, i.e., the second and third Foundation novels penned by Asimov, which have inspired the plot for seasons 2 and 3. So, The Mule needed to show up in one of the best Apple TV+ shows sooner rather than later.

Regardless, his live-action debut is set to put the proverbial cat among the pigeons. As this trailer and Foundation season 3's first teaser suggest, the threat posed by The Mule is so great that it'll force The Foundation and The Empire to form an uneasy alliance.

The Empire's Cleonic dynasty will need to ally themselves with The Foundation (Image credit: Apple TV+)

Given that the series' two main factions have been at each other's throats throughout the show, it's clear that it would have taken something (or, rather, someone) especially alarming to make them do the unthinkable and join forces. Expect these frenemies to set aside their differences – albeit temporarily – to tackle a common foe in The Mule.

For more story-based details, check out the official blurb for this season: "Set 152 years after the events of season 2, The Foundation has become increasingly established far beyond its humble beginnings while the Cleonic Dynasty’s Empire has dwindled.

"As both of these galactic powers forge an uneasy alliance, a threat to the entire galaxy appears in the fearsome form of a warlord known as The Mule, whose sights are set on ruling the universe by use of physical and military force, as well as mind control. It’s anyone’s guess who will win, who will lose, who will live and who will die as Hari Seldon, Gaal Dornick, the Cleons and Demerzel play a potentially deadly game of intergalactic chess."

Gaal Dornick will be integral to the story that plays out in season 3 (Image credit: Apple TV+)

Season 3 will see the return of key cast members in Jared Harris, Lee Pace, Lou Llobell, Laura Birn, Cassian Bilton, and Terrence Mann. Joining them on the main cast roster is Game of Thrones alumnus Pilou Asbaek, who replaced Mikael Persbrandt as The Mule last March as part of a Foundation season 3 cast shake-up.

That's not the only major adjustment to the show's cast and crew. In February 2024, Foundation showrunner David S. Goyer apparently stepped back from its production amid reported concerns about the Apple TV+ show's budget and filming schedule. Bill Bost is said to have replaced Goyer as its temporary showrunner to complete work on season 3.

Foundation's third season will launch with a one-episode premiere on Apple TV+, aka one of the world's best streaming services. New episodes will air weekly every Friday until its finale is released on September 12.

You might also like
Categories: Technology

Quordle hints and answers for Thursday, June 12 (game #1235)

Wed, 06/11/2025 - 09:00
Looking for a different day?

A new Quordle puzzle appears at midnight each day for your time zone – which means that some people are always playing 'today's game' while others are playing 'yesterday's'. If you're looking for Wednesday's puzzle instead then click here: Quordle hints and answers for Wednesday, June 11 (game #1234).

Quordle was one of the original Wordle alternatives and is still going strong now, more than 1,200 games later. It offers a genuine challenge, though, so read on if you need some Quordle hints today – or scroll down further for the answers.

Enjoy playing word games? You can also check out my NYT Connections today and NYT Strands today pages for hints and answers for those puzzles, while Marc's Wordle today column covers the original viral word game.

SPOILER WARNING: Information about Quordle today is below, so don't read on if you don't want to know the answers.

Quordle today (game #1235) - hint #1 - VowelsHow many different vowels are in Quordle today?

The number of different vowels in Quordle today is 3*.

* Note that by vowel we mean the five standard vowels (A, E, I, O, U), not Y (which is sometimes counted as a vowel too).

Quordle today (game #1235) - hint #2 - repeated lettersDo any of today's Quordle answers contain repeated letters?

The number of Quordle answers containing a repeated letter today is 0.

Quordle today (game #1235) - hint #3 - uncommon lettersDo the letters Q, Z, X or J appear in Quordle today?

• No. None of Q, Z, X or J appears among today's Quordle answers.

Quordle today (game #1235) - hint #4 - starting letters (1)Do any of today's Quordle puzzles start with the same letter?

The number of today's Quordle answers starting with the same letter is 0.

If you just want to know the answers at this stage, simply scroll down. If you're not ready yet then here's one more clue to make things a lot easier:

Quordle today (game #1235) - hint #5 - starting letters (2)What letters do today's Quordle answers start with?

• S

• B

• U

• P

Right, the answers are below, so DO NOT SCROLL ANY FURTHER IF YOU DON'T WANT TO SEE THEM.

Quordle today (game #1235) - the answers

(Image credit: Merriam-Webster)

The answers to today's Quordle, game #1235, are…

  • SCANT
  • BATCH
  • UNDER
  • PARSE

Phew! I only just completed today’s puzzle after getting stuck on a word that I knew ended A-T-C-H, wasting two guesses before finally getting BATCH. 

This left me no margin for error with the remaining guesses, but I completed the mission thanks to a healthy smattering of letters in incorrect positions.

How did you do today? Let me know in the comments below.

Daily Sequence today (game #1235) - the answers

(Image credit: Merriam-Webster)

The answers to today's Quordle Daily Sequence, game #1235, are…

  • SIREN
  • FRISK
  • DUMPY
  • SLOSH
Quordle answers: The past 20
  • Quordle #1234, Wednesday, 11 June: CRAVE, ROOST, ANGLE, FLOOD
  • Quordle #1233, Tuesday, 10 June: DECRY, CHEEK, FILET, EASEL
  • Quordle #1232, Monday, 9 June: DERBY, LEMON, WRITE, HOVEL
  • Quordle #1231, Sunday, 8 June: REBAR, ALERT, PAYEE, FLUME
  • Quordle #1230, Saturday, 7 June: FLUNK, ESTER, SPITE, CHEAP
  • Quordle #1229, Friday, 6 June: ELUDE, KHAKI, VISTA, SMOKY
  • Quordle #1228, Thursday, 5 June: CHIDE, RABBI, GUSTY, LANCE
  • Quordle #1227, Wednesday, 4 June: BANAL, STOUT, SEDAN, HIPPO
  • Quordle #1226, Tuesday, 3 June: FUGUE, SYRUP, FLACK, WORST
  • Quordle #1225, Monday, 2 June: THINK, BELLE, CRONE, BOULE
  • Quordle #1224, Sunday, 1 June: POINT, MERIT, WHOOP, APHID
  • Quordle #1223, Saturday, 31 May: CRUMB, ELFIN, DRIER, QUITE
  • Quordle #1222, Friday, 30 May: RAJAH, CAUSE, BLACK, ETUDE
  • Quordle #1221, Thursday, 29 May: CRIER, DRAPE, STRUT, NEIGH
  • Quordle #1220, Wednesday, 28 May: HELLO, BEADY, VIGIL, PURER
  • Quordle #1219, Tuesday, 27 May: TWEET, RANGE, POPPY, RADAR
  • Quordle #1218, Monday, 26 May: BLEAT, HOWDY, ASIDE, SCOOP
  • Quordle #1217, Sunday, 25 May: OCEAN, AMBER, PIPER, GLEAN
  • Quordle #1216, Saturday, 24 May: HUSKY, HEIST, FOGGY, POLAR
  • Quordle #1215, Friday, 23 May: SHIRE, GIANT, AWAIT, CAPER
Categories: Technology

NYT Connections hints and answers for Thursday, June 12 (game #732)

Wed, 06/11/2025 - 09:00
Looking for a different day?

A new NYT Connections puzzle appears at midnight each day for your time zone – which means that some people are always playing 'today's game' while others are playing 'yesterday's'. If you're looking for Wednesday's puzzle instead then click here: NYT Connections hints and answers for Wednesday, June 11 (game #731).

Good morning! Let's play Connections, the NYT's clever word game that challenges you to group answers in various categories. It can be tough, so read on if you need Connections hints.

What should you do once you've finished? Why, play some more word games of course. I've also got daily Strands hints and answers and Quordle hints and answers articles if you need help for those too, while Marc's Wordle today page covers the original viral word game.

SPOILER WARNING: Information about NYT Connections today is below, so don't read on if you don't want to know the answers.

NYT Connections today (game #732) - today's words

(Image credit: New York Times)

Today's NYT Connections words are…

  • BOWLING
  • WRESTLING
  • MISSING
  • DISHING
  • SPOONING
  • SIRING
  • BUZZING
  • SEWING
  • LORDING
  • SPILLING
  • HUGGING
  • DOCTORING
  • SNUGGLING
  • ACUPUNCTURING
  • WHISPERING
  • CUDDLING
NYT Connections today (game #732) - hint #1 - group hints

What are some clues for today's NYT Connections groups?

  • YELLOW: Two become one
  • GREEN: Tittle tattling 
  • BLUE: Think words that rhyme with weedle and sin
  • PURPLE: Begin with honorifics

Need more clues?

We're firmly in spoiler territory now, but read on if you want to know what the four theme answers are for today's NYT Connections puzzles…

NYT Connections today (game #732) - hint #2 - group answers

What are the answers for today's NYT Connections groups?

  • YELLOW: GETTING COZY 
  • GREEN: GOSSIPING 
  • BLUE: ENGAGING IN AN ACTIVITY WITH PINS OR NEEDLES 
  • PURPLE: STARTING WITH TITLES 

Right, the answers are below, so DO NOT SCROLL ANY FURTHER IF YOU DON'T WANT TO SEE THEM.

NYT Connections today (game #732) - the answers

(Image credit: New York Times)

The answers to today's Connections, game #732, are…

  • YELLOW: GETTING COZY – CUDDLING, HUGGING, SNUGGLING, SPOONING
  • GREEN: GOSSIPING – BUZZING, DISHING, SPILLING, WHISPERING
  • BLUE: ENGAGING IN AN ACTIVITY WITH PINS OR NEEDLES – ACUPUNCTURING, BOWLING, SEWING, WRESTLING
  • PURPLE: STARTING WITH TITLES – DOCTORING, LORDING, MISSING, SIRING
  • My rating: Moderate
  • My score: 1 mistake

All of the _ING words in the grid made for a very baffling game today, but a couple of the groups were also designed to confuse.

GETTING COZY was elementary enough, but I struggled to put together the green group.

Correctly thinking it was about GOSSIPING, I included DOCTORING as I think of this as a phrase about making things up, which is what most gossip is (invented by PRs to benefit their client or by journalists to benefit their numbers).

On my second go at it I included BUZZING only because of the vaguely gossipy Buzzfeed website, not because I’d ever heard of the term buzzing. Every day’s a school day.

Next, I knew that ACUPUNCTURING and SEWING were linked and saw the connection with BOWLING pins, but it wasn’t until the game was long over that I realized why WRESTLING was part of the group, thanks to the many different types of pin moves from the Gannosuke Clutch to the Oklahoma Roll (yes, I am looking at Wikipedia).

How did you do today? Let me know in the comments below.

Yesterday's NYT Connections answers (Wednesday, June 11, game #731)
  • YELLOW: BOAST – BLUSTER, CROW, SHOW OFF, STRUT
  • GREEN: ARC-SHAPED THINGS – BANANA, EYEBROW, FLIGHT PATH, RAINBOW
  • BLUE: CEREAL MASCOTS – COUNT, ELVES, LEPRECHAUN, ROOSTER
  • PURPLE: WAYS TO DENOTE A CITATION – ASTERISK, DAGGER, NUMBER, PARENS
What is NYT Connections?

NYT Connections is one of several increasingly popular word games made by the New York Times. It challenges you to find groups of four items that share something in common, and each group has a different difficulty level: yellow is easy, green a little harder, blue often quite tough and purple usually very difficult.

On the plus side, you don't technically need to solve the final one, as you'll be able to answer that one by a process of elimination. What's more, you can make up to four mistakes, which gives you a little bit of breathing room.

It's a little more involved than something like Wordle, however, and there are plenty of opportunities for the game to trip you up with tricks. For instance, watch out for homophones and other wordplay that could disguise the answers.

It's playable for free via the NYT Games site on desktop or mobile.

Categories: Technology

Pages