Microsoft has fixed a Secure Boot vulnerability that allowed threat actors to turn off security solutions and install bootkit malware on most PCs.
Security researchers at Binarly recently discovered a vulnerability in a legitimate BIOS update utility signed with Microsoft’s UEFI CA 2011 certificate. This root certificate, used in the Unified Extensible Firmware Interface (UEFI) Secure Boot process, plays a central role in verifying the authenticity and integrity of bootloaders, operating systems, and other low-level software before a system boots.
According to the researchers, the utility is trusted on most modern systems utilizing UEFI firmware - but the problem stems from the fact it reads a user-writable NVRAM variable without proper validation, meaning an attacker with admin access to an operating system can modify the variable and write arbitrary data to memory locations during the UEFI boot process.
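The bug class itself is simple to illustrate. Below is a conceptual sketch in Python (the real module is native UEFI code, and the variable and region names here are hypothetical) of why a write target read from a user-writable NVRAM variable must be validated before use:

```python
# Conceptual sketch of the bug class, NOT the actual UEFI module's code.
# An NVRAM variable is writable from the OS with admin rights; if the
# firmware uses its contents as a write address without validation, an
# OS-level attacker gets an arbitrary write during the boot process.

ALLOWED_REGION = range(0x1000, 0x2000)  # hypothetical firmware scratch area

def unsafe_write(memory: bytearray, nvram_value: int, data: int) -> None:
    # Vulnerable pattern: trusts the user-writable variable outright.
    memory[nvram_value] = data

def safe_write(memory: bytearray, nvram_value: int, data: int) -> None:
    # Fixed pattern: reject write targets outside the expected region.
    if nvram_value not in ALLOWED_REGION:
        raise ValueError("NVRAM variable points outside the allowed region")
    memory[nvram_value] = data
```

With the unsafe version, an attacker who sets the variable to the address of, say, a Secure Boot enforcement flag can overwrite it before the OS loads.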
Microsoft finds 13 extra modules
Binarly managed to use this vulnerability to disable Secure Boot and allow any unsigned UEFI modules to run. In other words, they were able to disable security features and install bootkit malware that cannot be removed even if the hard drive is replaced.
The vulnerable module had been circulating in the wild since 2022, and was uploaded to VirusTotal in 2024 before being reported to Microsoft in late February 2025.
Microsoft recently released its June Patch Tuesday update, a cumulative update addressing various recently discovered vulnerabilities - among them the arbitrary write vulnerability in Microsoft-signed UEFI firmware, now tracked as CVE-2025-3052. It was assigned a severity score of 8.2/10 (high).
The company also determined that the vulnerability affected 14 modules in total, and has now fixed all of them.
"During the triage process, Microsoft determined that the issue did not affect just a single module as initially believed, but actually 14 different modules," Binarly said. "For this reason, the updated dbx released during the Patch Tuesday on June 10, 2025 contains 14 new hashes."
Via BleepingComputer
If you’re a sports fan like me, you may have had some complaints in the past about your TV when trying to watch sports. Whether it’s reflections while watching a game in the afternoon or blurring during fast motion, something always seems to need tweaking.
Another issue: a TV that appears dim, with a flat-looking image, particularly for field sports such as football and rugby.
Even the best TVs can struggle with sport, but thankfully, there’s a TV tech that’s ideal for sports fans: mini-LED.
Mini-LED: perfect for sports fans
Mini-LED TVs are not only becoming increasingly popular but also more affordable. This tech delivers an improved picture over standard LED by using backlights with smaller LEDs (hence the mini part).
By miniaturizing the LEDs, a higher number can be used, which results in increased brightness. It also allows for a higher number of local dimming zones in the backlight, which helps to boost contrast and improve black uniformity.
Mini-LED TVs can hit significantly higher brightness levels than other TV panel types, with peaks of 2,500 - 4,000 nits possible in flagship models. But for sports fans, it’s fullscreen brightness – the level of brightness that the TV can sustain over its entire screen area – that matters most, and once again, mini-LED TVs regularly beat other panel types here, including the best OLED TVs.
To provide an example of that from our TV testing, we regularly measure fullscreen brightness levels of between 580 - 800 nits on the best mini-LED TVs. But even the brightest OLED TV we’ve tested, the LG G5, topped out at 331 nits in our fullscreen measurement.
I’ve picked three models below that are examples of the best mini-LED TVs for sports.
1. Samsung QN90F
(Image credit: Future)
The Samsung QN90F is the perfect TV for sports. Not only does it deliver exceptionally high brightness levels – 2,086 nits peak and 667 nits fullscreen in Filmmaker Mode – but it has a Glare-Free screen (first introduced in the Samsung S95D OLED) that effectively eliminates reflections, making it perfect for afternoon sports watching.
The QN90F also delivers the superb motion handling that's essential for fast-paced sports. Even for movies, we found we could get smooth motion, with no sign of the dreaded ‘soap opera effect’, by setting both Blur Reduction and Judder Reduction to 3.
The QN90F delivers vibrant colors, strong contrast and realistic textures for a brilliant picture. And when viewing from an off-center seat, there’s little sign of the backlight blooming that results in contrast fade, meaning it’s great for watching in large groups.
The QN90F is a premium-priced TV, with the 65-inch model we tested priced at $2,499.99 / £2,499 / AU$3,499, but if you’re a sports fanatic, it’s worth the investment. Plus, you can expect prices to drop at some point in the near future.
2. Amazon Fire TV Omni Mini-LED
(Image credit: Future)
When I first began testing the Amazon Fire TV Omni Mini-LED, I didn’t anticipate it would be such a good TV for sports. But in its preset Sports mode with Smoothness (Judder Reduction) set to 4 and Clarity (Blur Reduction) set to 10, sports looked impressively smooth. Color was also surprisingly accurate in that mode, which is unusual as I’ve found the Sports mode makes colors look oversaturated and garish on most TVs.
Something unique about the Omni Mini-LED is that it’s nearly ready out of the box for sports. In contrast, I found when testing competing models such as the Hisense U6N and Hisense U7N that more setup was required to get sports looking right.
The Amazon Omni mini-LED is a significantly more affordable TV than the Samsung QN90F, with its 65-inch model often discounted down to $949.99 / £949.99. It may not have the same level of sports prowess as the Samsung QN90F, but it’s great for the money.
3. TCL QM7K / TCL C7K
TCL QM7K (US) and TCL C7K (UK) (Image credit: Future)
This entry is a hybrid, as the TCL model name (and specs) will vary depending on which side of the pond you’re on. Either way, it’s the mid-range model in TCL’s 2025 mini-LED lineup.
Both of these TVs deliver exceptional brightness at a mid-range price, with the TCL QM7K and TCL C7K hitting 2,350 nits and 2,784 nits HDR peak brightness, respectively. More importantly, they hit 640 nits and 678 nits HDR fullscreen brightness, respectively – very good numbers for watching sports in bright rooms.
These TVs require some motion setup. Since I'm based in the UK, I tested the C7K, and I found that I needed to tweak the Sports or Standard picture mode by setting Blur Reduction to 3 and Judder Reduction to 6. I also needed to lower the color setting in Sports, as it was oversaturated in its default settings.
Once this was completed, the C7K was a solid TV for sports. It isn’t quite as effective as the two models above, but it is still a very good mid-range option overall. If the QM7K is anything like its UK counterpart, then the story for that model will be the same.
Again, for the 65-inch models of these two sets, you’re looking at paying $999 / £1,099. That’s a similar price to the Amazon Omni Mini-LED, which has the better motion handling of the two, but with the TCL, you’re getting that extra hit of brightness.
I thought I'd seen every movie trailer gimmick by now, but Apple has just produced a novel one for its incoming F1 movie – a 'haptic' trailer that vibrates your iPhone in time with the on-screen action.
If you have an iPhone (Android fans are sadly excluded from the rumble party), head to the haptic trailer for F1: The Movie to open it in the Apple TV app. You'll then be treated to two minutes of vibrations that are probably also a taste of what it's like to be a celebrity in the middle of a social media storm.
The trailer's 'haptic' experience was actually better than I was expecting. I assumed it would be a simple, one-dimensional rumble that fired up during race sequences, but it's a little more nuanced than that.
To start with, you feel the light vibration of a driver's seat belt being fastened, before the vibrations ramp up for the driving and crash sequences. There's even a light tap to accompany Brad Pitt's character Sonny Hayes moodily bouncing balls against a wall as he ponders coming out of retirement for one last sports movie trope.
Sure, it isn't exactly an IMAX experience for your phone, but if ever there was a movie designed for a haptic movie trailer, it's Apple's F1 movie...
One last Pitt stop
Apple's F1 movie was also the star of its recent WWDC 2025 event, with the livestream opening with Craig Federighi (Apple's Senior Vice President of Software Engineering) donning a helmet before doing a lap around the roof of its Apple Park building.
There's currently no date for the movie to stream on Apple TV+, with the focus currently on its imminent theater premiere. It officially opens internationally on June 27, but there are some special, one-off screenings in IMAX theaters on June 23 (in North America) and June 25 (internationally) for keen fans who signed up on the movie's official website.
The trailers so far suggest that F1 is going to effectively be Top Gun: Maverick set on a race track – and with both movies sharing the same director (Joseph Kosinski) and screenplay writer (Ehren Kruger), that seems like a pretty safe bet. F1 World Champion Lewis Hamilton was also involved to help amp up the realism.
If the haptic-powered trailer has whetted your appetite, check out our interview with Damson Idris, who also stars in F1 and gave us a behind-the-scenes look at what the movie was like to film. Hint: they used specialized tracking cars to help nail the demanding takes.
As workloads shift and cold data heats up under AI and analytics demands, the traditional split between high-speed SSDs and cost-effective hard drives is no longer serving every use case.
A new SSD form factor known as E2 is being developed to tackle the growing gap in enterprise data storage. Potentially delivering up to 1PB of QLC flash per drive, they could become the middle-ground option the industry needs.
StorageReview claims the E2 form factor is being designed with support from key players including Micron, Meta, and Pure Storage through the Storage Networking Industry Association and Open Compute Project.
Solid speeds, but not cutting-edge
E2 SSDs target “warm” data - information that’s accessed often enough to burden hard drives, but which doesn’t justify the cost of performance flash.
Physically, E2 SSDs measure 200mm x 76mm x 9.5mm. They use the same EDSFF connector found in E1 and E3 drives, but are optimized for high-capacity, dense deployments.
A standard 2U server could host up to 40 E2 drives, translating into 40PB of flash in a single chassis. StorageReview says these drives will connect over PCIe 6.0 using four lanes and may consume up to 80W per unit, although most are expected to draw far less.
Performance will reach 8-10MB/s per terabyte, or up to 10,000MB/s for a 1PB model. That’s faster than hard drives but not in the same class as top-end enterprise SSDs. E2’s priorities will instead be capacity, efficiency, and cost control.
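As a sanity check on those figures, here's the back-of-the-envelope arithmetic in Python (using the article's projected numbers, which are not finalized spec values):

```python
# Projected E2 figures from the article (not finalized spec values).
drive_capacity_tb = 1000   # up to 1PB of QLC flash per drive
drives_per_2u = 40         # E2 drives a standard 2U server could host
mbps_per_tb = 10           # upper end of the 8-10MB/s-per-terabyte range

# Capacity of a fully populated 2U chassis, in petabytes.
chassis_capacity_pb = drive_capacity_tb * drives_per_2u // 1000

# Sequential throughput of a maxed-out 1PB drive, in MB/s.
drive_throughput_mbps = drive_capacity_tb * mbps_per_tb

print(chassis_capacity_pb)    # 40 (PB of flash in a single 2U chassis)
print(drive_throughput_mbps)  # 10000 (MB/s for a full 1PB drive)
```

The per-terabyte framing is the key design signal: throughput scales with capacity, so smaller E2 drives would land well below the 10,000MB/s headline number.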
Pure Storage showed off a 300TB E2 prototype in May 2025 featuring DRAM caches, capacitors for power loss protection, and a flash controller suited for this scale. While current servers aren't yet ready for this form factor, new systems are expected to follow.
It’s fair to say E2 won't replace hard drives overnight, but it does signal a shift. As the spec moves toward finalization this summer, vendors are already rethinking how large-scale flash can fit into modern infrastructure.
Apple Vision Pro is unquestionably one of the most powerful pieces of consumer hardware Apple has ever built, but the pricey gadget is still struggling to connect with consumers. And that's a shame because the generational-leaping visionOS 26 adds even more eye-popping features to the $3,500 headset, which I think you'd struggle to find with any other mixed reality gear.
Apple unveiled the latest Vision Pro platform this week as part of its wide-ranging WWDC 2025 keynote, which also introduced a year-OS naming system. For some platforms like iOS, the leap from, say, 18 to 26 wasn't huge, but for the toddler visionOS 2, it was instantly thrust into adulthood and rechristened visionOS 26.
This is not a reimagining of visionOS, and that's probably because its glassiness has been amply spread across all other Apple platforms in the form of Liquid Glass. It is, though, a deepening of its core attributes, especially around spatial computing and imagery.
I had a chance to get an early hands-on experience with the platform, which is notable because Vision Pro owners will not be seeing a visionOS 26 public beta. That means that while iPhone, iPad, Apple Watch, and Apple TV owners are test-driving OS 26 platform updates on their favorite hardware, Vision Pro owners will have a longer wait, perhaps not seeing these enhancements until the fall. In the interim, developers will, of course, have access for testing.
Since much of the Vision Pro visionOS 26 interface has not changed from the current public OS, I'll focus on the most interesting and impactful updates.
See "me"
(Image credit: Apple)
During the keynote, Apple showed off how visionOS 26 Personas radically move the state of the art forward by visually comparing a current Persona with a new one. A Vision Pro Persona is a virtual, live, 3D rendering of your head that tracks your movements, facial expressions, and voice. It can be used for communicating with other people wearing the headgear, and it's useful for calls and group activities.
Apple has been gradually improving Personas, but visionOS 26 is a noticeable leap, and in more ways than one.
You still capture your Persona using the front-facing 3D camera system. I removed my eyeglasses and held the headset in front of my face. The system still guides you, but now the process seems more precise. I followed the audio guidance and looked slowly up, down, left, and right. I smiled and raised my eyebrows. I could see a version of my face faintly on the Vision Pro front display. It's still a bit creepy.
(Image credit: Future)I then put the headset back on and waited less than a minute for it to generate my new Persona. What I saw both distressed and blew me away.
I was distressed because I hate how I look without my glasses. I was blown away because it looked almost exactly like me, almost entirely removing the disturbing "uncanny valley" look of the previous iterations. If you ever wonder what it would be like to talk to yourself (aside from staring at a mirror and having a twin), this is it.
There was a bit of stiffness and, yes, it fixed my teeth even though part of my setup process included a big smile.
It was easy enough to fix the glasses. The Personas interface lets you choose glasses, and now the selection is far wider and with more shades. I quickly found something that looked almost just like mine.
With that, I had my digital doppelganger that tracked my expressions and voice. I turned my head from side to side and was impressed to see just how far the illusion went.
Facing the wall
(Image credit: Apple)
One of the most intriguing moments of the WWDC keynote was when Apple demonstrated visionOS 26's new widget capabilities.
Widgets are a familiar feature on iPhones, iPads, and Macs, and, to an extent, they work similarly on Vision Pro - but the spatial environment puts them in new and unexpected places.
In my visionOS 26 demo experience, I turned toward a blank wall and then used the new widget setup to pin a clock widget to the wall. It looked like an actual clock hanging on the wall, and with a flip of one setting, I made it look like it was inset into the wall. It looked real.
On another wall, I found a music widget with Lady Gaga on it. As I stepped closer, a play button appeared in the virtual poster. Naturally, I played a little Abracadabra.
Another wall had multiple widgets, including one that looked like a window onto Mount Fuji; it was actually an immersive photo. I instinctively moved forward to "look out" the window. As the vista spread out before me, the Vision Pro warned me I was getting too close to an object (the wall).
I like Widgets, but temper the excitement with the realization that it's unlikely I will be walking from room to room while wearing Vision Pro. On the other hand, it would be nice to virtually redecorate my home office.
An extra dimension
(Image credit: Apple)
The key to Vision Pro's utility is making its spatial capabilities useful across all aspects of information and interaction.
visionOS 26 does that for the Web with spatial browsing, which basically can turn any page into a floating wall of text and spatially-enhanced photos called Spatial Scenes.
(Image credit: Apple)
visionOS 26 handles the last bit on the fly, and it's tied to what the platform can do for any 2D photo. It uses AI to create computational depth out of information it can glean from your flat image. It'll work with virtually any photo from any source, with the only limitation being the source image's original resolution. If the resolution is too low, it won't work.
I marveled at how, when staring at one of these converted photos, you could see detail behind a subject or, say, an outcropping of rock that was not captured in the original image but is inexplicably there.
It's such a cool effect, and I'm sure Vision Pro owners will want to show friends how they can turn almost all their photos into stereoscopic images.
Space time
I love Vision Pro's excellent mixed reality capabilities, but there's nothing quite like the fully immersive experience. One of the best examples of that is the environments that you enable by rotating the crown until the real world is replaced by a 360-degree environment.
visionOS 26 adds what may be the best environment yet: a view of Jupiter from one of its moons, Amalthea. It's beautiful, but the best part of the new environment is the control that lets you scroll back and forth through time to watch sunrises and sunsets, the planet's rotation, and Jupiter's dramatic storms.
This is a place I'd like to hang out.
Of course, this is still a developer's beta and subject to significant change before the final version arrives later this year. It's also another great showcase for a powerful mixed reality headset that many consumers have yet to try. Perhaps visionOS 26 will be the game changer.
OpenAI servers experienced mass downtime yesterday, causing chaos among its most loyal users for well over 10 hours.
For six hours straight, I sat at my desk live-blogging the fiasco here on TechRadar, trying to give as many updates as possible to an outage that felt, for many, as if they had lost a piece of themselves.
You see, I write about consumer AI, highlighting all the best ways to use AI tools like ChatGPT, Gemini, and Apple Intelligence, yet outside of work, these incredibly impressive platforms have yet to truly make an impact on my life.
As someone who’s constantly surrounded by AI news, whether that’s the launch of new Large Language Models or the latest all-encompassing artificial intelligence hardware, the last thing I want to do outside of work is use AI. The thing is, the more AI develops at this rapid pace, the more impossible it becomes to turn a blind eye to the abilities that it unlocks.
In the creative world, you’ll stumble across more AI skeptics than people who shout from the rooftops about how great it is. And that’s understandable, there’s a fear of how AI will impact the jobs of journalists like me, and there’s also a disdain for the sanitized world it’s creating via AI-slop or robotically-written copy.
But the same skepticism often overlooks the positives of this ever-evolving technology that gives humans new ways to work, collect their thoughts, and create.
After six hours of live blogging and thousands of readers reaching out with their worries surrounding the ChatGPT server chaos, as well as discussing what they use the chatbot for, I’ve come away with a completely new perspective on AI.
Yes, there are scary elements; the unknown is always scary, but there are people who are truly benefiting from AI, and some in ways that had never even crossed my mind.
More than a chatbot
An hour into live blogging the ChatGPT outage, I was getting bored of repeating, “It’s still down” in multiple different ways. That was when I had an idea: if so many people were reading the article, they must care enough to share their own reasons for doing so.
Within minutes of asking readers for their opinions on the ChatGPT outage, my inbox was inundated with people from around the globe telling me how hard it was to cope without access to their trusty OpenAI-powered chatbot.
From Canada to New Zealand, Malaysia to the Netherlands, ChatGPT users shared their worries and explained why AI means so much to them.
Some relied on ChatGPT to study, finding it almost impossible to get homework done without access to the chatbot. Others used ChatGPT to help them with online dating, discussing conversations from apps like Tinder or Hinge to ensure the perfect match. And a lot of people reached out to say that they spent hours a day speaking with ChatGPT, filling a void, getting help with rationalizing thoughts, and even helping them to sleep at night.
One reader wrote me a long email, which they prefaced by saying, “I haven’t written an email without AI in months, so I’m sorry if what I’m trying to say is a bit all over the place.”
Those of us who don’t interact with AI on a regular basis have a basic understanding of what it can do, often simplifying its ability down to answering questions (often wrongly), searching the web, creating images, or writing like a robot.
But that’s such an unfair assessment of AI and the way that people use it in the real world. From using ChatGPT to help with coding, allowing people who have never been able to build a program an opportunity to do so, to giving those who can’t afford a professional outlet for their thoughts a place to speak, ChatGPT is more capable than many want to accept.
(Image credit: Shutterstock/Rokas Tenys)
ChatGPT and other AI tools are giving people all around the world access to something that, when used correctly, can completely change their lives, whether that’s by unlocking their productivity or by bringing them comfort.
There’s a deeply rooted fear of AI in the world, and rightfully so. After all, we hear on a regular basis how artificial intelligence will replace us in our jobs, take away human creativity, and mark the beginning of the robot uprising.
But would we collectively accept it more if those fears were answered? If the billionaires at the top were to focus on highlighting how AI will improve the lives of the billions of people struggling to cope in this hectic world?
AI should be viewed as the key to unlocking human creativity, freeing up our time, and letting us do less of the mundane and more of enjoying our short time on this planet. Instead, the AI renaissance feels like a way to make us work harder, not smarter, and with that comes an intense amount of skepticism.
After seeing just how much ChatGPT has impacted the lives of so many, I can’t help but feel that AI deserves not only less criticism, but more understanding. It’s not all black and white; AI has its flaws, of course it does, but it’s also providing real, practical help to millions of people like nothing I've seen before.
Windows 10 has a new update and it actually introduces a new feature – although you might wish it didn’t when you discover what this latest addition is.
That said, the freshly-released update for June (which is KB5060533 for Windows 10 22H2) does come with a tweak that could raise a smile, namely that the clock in the taskbar now displays the seconds when you click to view the time in the calendar panel.
Quite why Microsoft ditched that in the first place is beyond me, but anyway, while that might be a pleasing return of a feature for some, there’s a sting in the tail further down in said calendar flyout – namely that Bing has crept into the mix here.
Not overtly, mind, but as Windows Latest explains, there’s been a change to the bottom section of the calendar panel where normally you’ll see your own events or reminders – if you have any, that is. If you don’t, this used to be blank, but as of the June update you’ll see popular public events and their dates.
Of course, pretty much every day is now dedicated to something – for example, today, June 11, is ‘National Corn on the Cob Day’ (apparently) – and reminders for these events will now appear in the calendar panel.
How does Bing figure in this? Well, if you click on said event, you’ll get information on it fired up in… wait for it… yes, Bing search engine. And what web browser will that appear in? Microsoft Edge, of course. Why promote one service, when you can promote two, after all?
(Image credit: Marjan Apostolovic / Shutterstock)
Analysis: Why risk the besmirchment?
This is a bit sneaky, as it’s far from clear that you’re invoking Bing and Edge when you click something on the calendar flyout out of curiosity. Moreover, this happens regardless of the Windows 10 preferences you’ve chosen for your default search engine or browser, which again is an unwelcome twist.
This is the kind of behavior that impacts negatively on Microsoft’s reputation and it doesn’t help that the tweak isn’t mentioned in the update notes. We’re only told that the June patch provides a “rich calendar experience” (well, it’s making someone rich, or at least a little richer, possibly – but not you).
The kicker here is that Windows 10 is only four months from being declared a dead operating system, with its life support removed (unless you pay for additional security patches for an extra year). So, why even bother making changes like this when Windows 10 is facing its final curtain? Why take any risks at all that could cause reputational damage?
Well, one thought occurs: maybe Microsoft isn’t convinced that floods of people are going to be leaving Windows 10 when the End of Life deadline rolls around in October 2025. After all, an alarmingly hefty number of diehards are still clinging on to the older operating system. In which case, perhaps Microsoft sees the value and worth in still bugging Windows 10 users for the foreseeable, while they stick around either paying for support, or risking their unpatched PC being compromised while refusing (or being unable) to upgrade to Windows 11.
Oh well. At least we’ve got the seconds back on the calendar clock display, hurray.
Amazon has been taking its time with the rollout of Alexa+, the company’s new voice assistant with a big AI revamp, but after hitting the 100,000 user milestone in May, Alexa+ has now reached 1 million test users – a huge jump in just a few weeks.
When the company announced Alexa’s first major upgrade in February, it said that Alexa+ would be US-only for now before being rolled out widely, though this date is still unknown.
It’s been difficult to find anyone with early access, but now more users are sharing that Alexa+ has been activated on selected Echo devices, along with their first impressions of the new AI features – which have garnered mixed reactions.
Alexa+ activated (from r/alexa)
Alexa’s new voice causes a divide
After a deep dive through different Reddit threads, it's clear the main feature dividing early access users is the new Alexa+ voice, whose major redesign aims to offer a less robotic inflection (similar to ChatGPT) with improved recognition capabilities.
Some users have been impressed by its ability to carry out a straight conversation without having to pre-prompt it, with one user stating that it provides a natural back-and-forth flow.
However, the user also highlighted its similarities with other voice assistants, adding: "It's early days, but it feels a tiny bit closer to what I have with ChatGPT." By the sounds of it, Alexa+ will have to offer something a little different if it wants users to stick around.
So I am actually liking Alexa+ (from r/alexa)
Another user went even further, describing the new Alexa+ voice as "obnoxious", but they also highlighted the fact that the new assistant has the ability to change its tone: "I asked if she could soften her voice, and she offered to make it more 'feminine' (her words not mine)."
I’ve spent a few days with Alexa Plus (from r/alexa)
Testers have been generally pleased with how Alexa+ stands above other LLMs with its detailed explanations and clear understanding of voice prompts. One Reddit user was impressed that "[it] understands everything regardless of how you stumble on words."
Comment from r/alexa
Another user shared a similar positive experience, but went on to explain that Alexa+ would fall into the trap of contradicting itself, admitting to it and apologizing when called out.
Comment from r/alexa
Despite a few hiccups with the new Alexa+, the response from testers has been generally positive, which makes us intrigued to see what it will be like once it’s widely available. So far it may not be enough to convince users to subscribe, but time will tell.
Hackers are now pretending to be jobseekers, targeting recruiters and organizations with dangerous backdoor malware, experts have warned.
Cybersecurity researchers at DomainTools recently spotted a threat actor known as FIN6 using this method in the wild, noting the hackers would first create fake personas on LinkedIn, along with fake resume websites to match.
The website domains are bought anonymously via GoDaddy, and are hosted on Amazon Web Services (AWS), to avoid being flagged or quickly taken down.
More Eggs
The hackers would then reach out to recruiters, HR managers, and business owners on LinkedIn, building a rapport before moving the conversation to email. Then, they would share the resume website, which filters visitors based on their operating system and other parameters. For example, people coming through VPN or cloud connections, as well as those running macOS or Linux, are served benign content.
Those deemed a good fit are first served a fake CAPTCHA, after which they are offered a .ZIP archive for download. The archive, which recruiters believe contains the resume, actually drops a disguised Windows shortcut file (LNK) that runs a script to download the "More Eggs" backdoor.
More Eggs is a modular backdoor that can execute commands, steal login credentials, deliver additional payloads, and execute PowerShell in a simple yet effective attack relying on social engineering and advanced evasion.
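On the defensive side, one cheap mitigation is to screen incoming "resume" archives for shortcut and script files before anyone opens them. Here's a minimal sketch in Python (the extension list is illustrative, not taken from the campaign's actual payloads):

```python
import io
import zipfile

# File extensions that should never appear in a genuine resume archive.
# Illustrative list; real screening tools use far broader rules.
SUSPICIOUS_EXTENSIONS = (".lnk", ".js", ".vbs", ".hta")

def suspicious_members(zip_bytes: bytes) -> list[str]:
    """Return archive member names whose extension suggests a shortcut or script lure."""
    with zipfile.ZipFile(io.BytesIO(zip_bytes)) as archive:
        return [name for name in archive.namelist()
                if name.lower().endswith(SUSPICIOUS_EXTENSIONS)]
```

A .LNK file posing as a resume inside a ZIP would be flagged before the shortcut ever runs; an archive containing only a PDF passes clean.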
AWS has since come forward to thank the security community for the findings, and to stress that campaigns like this one violate its terms of service and are frequently removed from the platform.
“AWS has clear terms that require our customers to use our services in compliance with applicable laws," an AWS spokesperson said.
"When we receive reports of potential violations of our terms, we act quickly to review and take steps to disable prohibited content. We value collaboration with the security research community and encourage researchers to report suspected abuse to AWS Trust & Safety through our dedicated abuse reporting process."
Via BleepingComputer
Adobe Express has introduced a new tool designed to help small businesses create and monitor online ads across popular social media channels.
Since Adobe Max London, the design platform has seen a host of new AI updates like Clip Maker and Generate Similar for spinning out new content based on existing images. Now, Express for Ads brings even more options for marketers and small businesses to scale up content production and track performance.
In an exclusive TechRadar Pro interview, I spoke to Adobe Express SVP Govind Balakrishnan to find out what users can expect from the new ad platform - and what else we can look forward to in the coming months.
What’s new in Adobe Express and what is Express for Ads?
This isn’t the first foray into social media content creation for Express, which has long offered the ability to create ad templates, and schedule and publish directly to platforms.
But the platform is now giving users a jump-start on scaling up ad creation across core advertising platforms. The new Express tools are available now - here’s what you can expect.
What this new update adds is the ability to create content workflows specifically for Google, LinkedIn, Meta, TikTok and, later down the line, Amazon, too.
As Balakrishnan told me, “What we have now done though is also bring in the tools and capabilities to make it incredibly easy for you to create content that performs well for the critical or prominent ad platforms like Google Ads, LinkedIn, Meta, and more coming in the not too distant future. We've essentially made it easy for you to start with the template or even generate a template, and create content using the best in class tools that we have available in Express.”
Alongside expanded ad platform support, users can now also use what Adobe’s calling a Social Safe Zone.
This is effectively a set of best practices to prevent the dreaded rejection of ads - and it’s currently supported for Facebook Stories, Instagram Reels, and LinkedIn Videos. There are plans to support additional formats soon.
“We've added a capability called Social Safe Zone,” said Balakrishnan. “It’s essentially a set of guidelines or guard rails that are incorporated as you're creating your content to ensure that the key visual elements that you have in your content are not obstructed by the various social media platforms. So, it helps you essentially create content to ensure that the visual elements that you care most about are front and centre, and are optimized to be best-performing for each of the social media platforms that you're targeting.”
In a bid to improve the creative workflow, Adobe is now letting users play in the Express sandbox without having to switch out to other apps.
Balakrishnan calls it a one-stop shop, adding: “We have made it incredibly easy to publish straight to the ad platforms. Express can establish a connection with Google Ads, LinkedIn Ads, and TikTok. You can go from Express directly to each of these ad platforms.”
Of course, Adobe Express has long offered the option to resize templates, but in this latest update, the company has gone further.
“We have now ensured that [Resize] works for these ad platforms,” Balakrishnan told me. “Essentially, you start with the template, you have Safe Zones to ensure that your content looks great for each, and now you have the ability to publish straight into these ad platforms. So, Express becomes this one-stop shop where you start with an intent, you create your content, you publish to various platforms, and you get your insights back right there. You don't have to jump between various tools, various platforms.”
It’s an area that Balakrishnan is most excited for, telling me, “I am most excited about the fact that you can create for a specific ad platform and resize seamlessly for other platforms. As we all know, most marketers are trying to reach multiple platforms and struggle to do that because they have to recreate a lot of their content over and over again for multiple ad platforms. The fact that they can fairly quickly and easily create the best possible content for each of those ad platforms, I think, is incredibly exciting.”
One of the best updates coming with Express for Ads, I think, is the ability to monitor ad performance across supported platforms, delivering much-needed feedback to refine future ideation and creation. With that in mind, Express now includes Metricool and Bitly add-ons.
Expanding on this, Balakrishnan said, “We've added the ability to get metrics and analytics on the content and how the content is performing through integrations with Metricool and Bitly. These are two recent integrations that we have launched where, once you post your content to these platforms, you now have the ability to get feedback on how your content is performing, in addition to obviously seeing how it maps to current trends and current fads that may be in play.”
And it turns out Adobe might’ve underestimated just how many users are welcoming this update.
Balakrishnan said, “I'm finding that a number of our users are excited about the Metricool integration. I don't know if we had fully realised how compelling this could be, but as we have gotten deeper into the integration and as we have engaged with more of our user base, it has become clear that it is an integration that a large number of our users are incredibly excited about because they then get the insights from how their content is performing right there in the tool without having to leave the tool and go somewhere else.”
As Express continues to evolve, I couldn’t resist finding out what users can expect later down the line. Here, Balakrishnan teased a couple of future updates.
“The next stage that we are incredibly excited about, and I know it's not necessarily related to the ads creation scenario today, but it will be relevant in the not too distant future, is the ability to completely reimagine creativity, or the opportunity to completely reimagine creativity, through agentic AI. The idea there would be that you start with a blank screen, you enter a prompt, and you interact through a prompt to essentially generate full-fledged designs from scratch. We are now making it even easier for anyone to come in and describe what's in their mind's eye and have that show up on a digital screen in seconds.”
That will come as little surprise for followers of Adobe, where agentic AI is fast becoming de rigueur across the company’s apps. But it’s not the only area where Balakrishnan envisions AI advancements. He confirmed he’d like to see “more advancements in the realm of generative AI” for Express users who don’t want to see a lowering of the barrier to entry via agentic AI.
And, as you’d expect from a platform that integrates across the Creative Cloud suite, the team is looking at further integrations with Adobe Acrobat.
Balakrishnan explained: “We are seeing an increasing trend, so to speak, where creativity and productivity are coming closer together, and we see some incredible opportunities to leverage the very broad base of Acrobat users and give them the tools and capabilities to add more richness to PDF and Acrobat documents. And we're doing that by building seamless integrations and workflows from Acrobat into Express, where, if you're in Acrobat, if you're in [Acrobat] Reader, if you're viewing a regular PDF document, we are now giving you the ability to edit images, and generate images to stylize your document, all from within Acrobat.”
You can find out more in Adobe's latest blog.
Want to start creating your next ad campaign now? Check out the new Adobe Express for Ads right now. It's free to use with plans for teams and business users, and you'll also find it included as part of an add-on alongside other Adobe apps like Photoshop.