New research has uncovered some of the finer details around why many businesses are still being cautious with their approach to Windows 11 migration, with security threats and financial impacts proving to be major hurdles.
The report from Panasonic found nearly two-thirds (62%) of devices need replacing or upgrading for Windows 11 compatibility, highlighting the scale of the problem – a figure that rises to 76% among larger organizations with 5,000+ employees.
However, despite migration-related concerns, the study claims many organizations still recognize the benefits of upgrading from Windows 10 and older operating systems.
Businesses still have some concerns about upgrading Windows
Panasonic found 94% fear increased ransomware and malware risks if they don't upgrade, with 93% also concerned about data breaches. But two in three noted overall higher costs associated with migrating to Windows 11, with 55% stating that it could add to cybersecurity expenses.
Nearly half also noted software compatibility issues (47%) and productivity loss during downtime (45%), and for a quarter (25%), hardware upgrades come with software upgrades, compounding the financial impact of OS upgrades.
However, with Microsoft estimating that ESU could cost around £320,000 over three years for 1,000 devices, the need to upgrade is clear.
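That figure is easy to sanity-check. Microsoft's published commercial ESU pricing for Windows 10 starts at $61 per device in year one and doubles each subsequent year; the quick sketch below (the exchange rate is an assumption for illustration) shows how a 1,000-device fleet lands in the region of £320,000:

```python
# Rough ESU cost model: $61 per device in year one, doubling annually,
# per Microsoft's published commercial pricing. Exchange rate is assumed.
YEAR_ONE_USD = 61
DEVICES = 1_000
USD_TO_GBP = 0.75  # illustrative rate

per_device_usd = sum(YEAR_ONE_USD * 2**year for year in range(3))  # 61 + 122 + 244 = 427
total_gbp = per_device_usd * DEVICES * USD_TO_GBP

print(f"Per device over three years: ${per_device_usd}")
print(f"Fleet of {DEVICES:,} devices: about £{total_gbp:,.0f}")  # ~£320,000
```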
Around a third each acknowledge that upgrading will give them better performance and processing power (36%), a more future-proof ecosystem (36%) and access to AI features like Microsoft Copilot (34%).
Panasonic TOUGHBOOK Europe Head of Go-to-Market Chris Turner commented: "The window is closing for organisations to make a well-planned, measured and cost-effective transition to Windows 11 and start unlocking its benefits."
"Organisations that are still to undertake Windows 11 migration need support to ensure their deployment is not rushed and risky," Turner added.
It’s looking likely that you’ll be able to buy the Google Pixel 10 Pro and the Pixel 10 Pro XL in a choice of Obsidian (black), Porcelain (white), Moonstone (slate blue-gray), and Jade (a soft pistachio green with gold accents), as not only have some of these Pixel 10 colors been mentioned before, but now all four have been shown off in leaked renders.
Android Headlines has shared what it claims are official renders of the Pixel 10 Pro and the Pixel 10 Pro XL in these four shades, and while we’d take this leak with a pinch of salt, these certainly look to be high-quality images, so they may well be official.
If these renders are accurate, then the Pro models will be available in two fairly plain, ordinary shades, in the case of Obsidian and Porcelain, since they’re basically just black and white. But the other two options are a bit more interesting.
Leaked renders of the Pixel 10 Pro and the Pixel 10 Pro XL in four colors (Image credit: Android Headlines)
A bit more color
There’s Moonstone, which we’ve actually seen the Pixel 10 Pro in already via an official teaser. This is rather understated, but the hint of blue in it makes this more interesting than a pure gray option.
The highlight, though, is arguably Jade – it’s a soft, delicate shade that still somewhat fits with the rest of the color options, but is a bit brighter and more unusual. Really, we’d like to see more of this sort of thing, rather than top-end phones defaulting to plain shades, but at least there’s one option here for those who want a splash of color.
We’ll find out how accurate this color leak is soon, as Google is set to unveil the Pixel 10 series on August 20. We’re expecting to see the Pixel 10 itself along with the Pixel 10 Pro, the Pixel 10 Pro XL, and the Pixel 10 Pro Fold, so there should be a lot to look forward to.
At least three major Chinese hacking groups were abusing recently discovered vulnerabilities to target businesses using Microsoft SharePoint, the company has said.
Microsoft recently released an urgent patch to fix two zero-day vulnerabilities affecting on-premises SharePoint servers, tracked as CVE-2025-49704 (a remote code execution bug), and CVE-2025-49706 (a spoofing vulnerability), which were being abused in the wild.
Microsoft now says the groups targeting the flaws are Chinese state-sponsored actors: Linen Typhoon, Violet Typhoon, and Storm-2603.
Two typhoons and a storm
The first two are part of the larger “typhoon” operation, which counts at least half a dozen groups, including Brass Typhoon, Salt Typhoon, Volt Typhoon, and Silk Typhoon.
Over the last couple of years, these groups have been linked to breaches of critical infrastructure organizations; government, defense, and military bodies; telecom operators; and similar targets across the Western world and NATO members.
Some researchers say these groups have been tasked with maintaining persistence in target networks in case the standoff between the US and China over Taiwan escalates into open war. That way, they would be able to disrupt or destroy critical infrastructure, eavesdrop on important conversations, and gain the upper hand in the conflict.
At least seven major telecommunications operators in the United States have recently confirmed discovering Typhoon operatives on their networks and removing them from the virtual premises.
"Investigations into other actors also using these exploits are still ongoing," Microsoft said in a blog post, stressing that the attackers will definitely continue targeting unpatched systems.
SharePoint Server Subscription Edition, SharePoint Server 2019, and SharePoint Server 2016 were said to be affected, while SharePoint Online (Microsoft 365) was not impacted.
Microsoft recommends that customers move to supported versions of on-premises SharePoint Server and apply the latest security updates immediately, and says users should ensure their antivirus and endpoint protection tools are up to date.
What's better than wireless charging? Even faster wireless charging. The latest Qi2.2 wireless charging standard makes wireless power much faster, much smarter and even more useful – and while several brands have recently obtained Qi2.2 certification, Baseus is the first to publicly release visuals and detailed specifications of three certified devices. So while others make promises, Baseus is already making Qi2.2 products.
That means Baseus customers will be among the very first people to get a massive wireless power-up.
The AM52 is a super-slim power bank with speedy 25W wireless charging (Image credit: Baseus)
Why Qi2.2 is brilliant news for you
Qi2.2 is the very latest version of the world's favourite wireless charging standard. Qi charging is supported by all the big names in smartphones and accessories, delivering convenient and safe wireless charging for all kinds of devices. And the latest version is the best yet. Qi2.2 is much faster, even more efficient and even safer.
There are three key parts to Qi2.2: supercharged wireless power, smarter heat control and magnetic precision. The first means that instead of maxing out at 15W of power like existing wireless chargers do, Qi2.2 can push the limit to 25W. That means much faster charging and less time waiting: Qi2.2 can charge your phone up to 67% faster than Qi2.0.
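That 67% figure lines up with the raw power ratio, as this quick check shows (real-world charge times also depend on battery chemistry and thermal throttling, so treat it as a best case):

```python
# Peak wireless power under each standard
QI_2_0_WATTS = 15
QI_2_2_WATTS = 25

uplift = (QI_2_2_WATTS / QI_2_0_WATTS - 1) * 100
print(f"Qi2.2 delivers roughly {uplift:.0f}% more peak power")  # ~67%
```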
Wireless charging generates heat, and Qi2.2 keeps that down with next-generation thermal regulation, stricter surface temperature limits and improved coils. And the new Magnetic Power Profile (MPP) built into the standard ensures more precise alignment with your phone, reducing energy waste and improving charging efficiency by 15% whether you're charging in the car, at home or on the go.
The powerful PicoGo AM61 comes with its own USB-C cable so you can charge wired and wirelessly at the same time. (Image credit: Baseus)
Qi2.2 is made for everything everywhere
Qi2.2 is made to work across all kinds of devices, from the iPhone 12 and endless Androids to future models that haven't even been made yet. And while it's focused on the future, it's also fully backwards compatible: your Baseus Qi2.2 power bank or charger will happily power up a device made for older Qi standards, and Qi phone cases can add wireless charging capability to older phones that weren't built with wireless charging inside.
Baseus is the industry leader in Qi2.2 charging, and it's just launched three new products that take full advantage of Qi2.2's extra power and improved efficiency: two powerful PicoGo magnetic power banks for any device and a really useful foldable 3-in-1 PicoGo charger for your phone, earbuds and smartwatch.
The two magnetic power banks are the PicoGo AM61 Magnetic Power Bank and the PicoGo AM52 Ultra-Slim Magnetic Power Bank. Both versions deliver a massive 10,000mAh of power, both have a 45W USB-C charging port so you can charge two things at once, and both can charge your device wirelessly at up to 25W via the new Qi2.2 standard without any danger of overheating.
The AM52's ultra-slim design features a graphene and aluminium shell for heat dissipation and smart temperature control that protects all of your devices while charging, and the slightly larger AM61 includes a built-in USB-C cable for extra convenience.
If you're looking for a super-speedy compact charger, you'll love the PicoGo AF21 foldable 3-in-1 wireless charger. It delivers the same super-fast 25W wireless charging as its siblings, and with a total 35W of power across its three modules it can wirelessly power up not just your phone but your earbuds and smartwatch too.
That makes it an ideal bedside charger as well as a great travel charger: it’s extremely small at just 75.5 x 80 x 38.11mm and it’s highly adjustable for optimal viewing and charging. You can rotate the watch panel through 180 degrees, adjust the phone panel through 115 degrees, and adjust the base bracket too.
The PicoGo AF21 foldable 3-in-1 wireless charger is super-portable and extremely adjustable. (Image credit: Baseus)
Ride the next wireless wave with Baseus' brilliant power-ups
Baseus is setting the standard for Qi2.2 wireless charging, and whether you grab the powerful dual-charging PicoGo AM61, the super-slim PicoGo AM52 or the multi-talented PicoGo AF21 charger you're getting the latest, greatest and fastest charging for your phone. With Qi2.2, Baseus isn't just riding the next wireless wave. It's shaping it.
The Baseus PicoGo AM61 Magnetic Power Bank, PicoGo AM52 Ultra-Slim Magnetic Power Bank and PicoGo AF21 Foldable 3-in-1 Wireless Charger will all be available this August, and you'll be able to order them directly from Baseus’s website and from major retailers such as Amazon.
Third-party attacks are one of the most prominent trends within the threat landscape, showing no signs of slowing down, as demonstrated by recent high-profile cyber incidents in the retail sector.
Third-party attacks are very attractive to cybercriminals: threat actors drastically increase their chances of success and return on investment by exploiting their victims’ supplier networks or open-source technology that numerous organizations rely on.
A supply chain attack is one attack with multiple victims, with exponentially growing costs for those within the supply chain as well as significant financial, operational and reputational risk for their customers.
In a nutshell, in the era of digitization, IT automation and outsourcing, third-party risk is impossible to eliminate.
Global, multi-tiered and more complex supply chains
With supply chains becoming global, multi-tiered and more complex than they have ever been, third-party risks are increasingly hard to understand.
Supply chain attacks can be extremely sophisticated, hard to detect and hard to prevent. Sometimes the most innocuous utilities can be used to initiate a wide-scale attack. Vulnerable software components that modern IT infrastructures run on are difficult to identify and secure.
So, what can organizations do to improve their defenses against third-party risk? We have outlined three areas where organizations can build meaningful resilience against third-party cyber risk:
1. Identify and mitigate potential vulnerabilities across the supply chain
Understanding third-party risk is a significant step towards its reduction. This involves several practical steps, such as:
i) Define responsibility for supply chain cyber risk management ownership. This role often falls between two stools: internal security teams focus primarily on protecting their own organization, while compliance and third-party risk management programs own responsibility for third-party risk and conduct but often lack the confidence to address cyber risks, given their technical nature.
ii) Identify, inventory and categorize third parties, to determine the most critical supplier relationships. From a cyber security perspective, it is important to identify suppliers who have access to your data, access into your environment, those who manage components of your IT estate, those who provide critical software, and – last but not least – those suppliers who have an operational impact on your business.
This is a challenging task, especially for large organizations with complex supply chains, and often requires security teams to work together with procurement, finance and other business teams to identify the entire universe of supplier relationships, then filter out those out of scope from a cyber security perspective.
iii) Assess risk exposure by understanding the security controls suppliers deploy within their estate or the security practices they follow during the software development process, and highlight potential gaps. It is important to follow this up with agreement on remediation actions acceptable to both sides, and to work towards their satisfactory closure. The reality is that suppliers are not always able to implement the security controls their clients require.
Sometimes this leads to client organizations implementing additional resilience measures in-house instead – often dependent on the strength of the relationship and the nature of the security gaps.
iv) Move away from point-in-time assessments to continuous monitoring, utilizing automation and open-source intelligence to enrich the control assessment process. In practice, this may involve identifying suppliers’ attack surfaces and vulnerable externally-facing assets, monitoring for changes of ownership, identifying indicators of data leaks and incidents affecting critical third parties, and monitoring for new subcontractor relationships.
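To make point iv) concrete, here is a minimal sketch of one continuous-monitoring signal: checking when the TLS certificates on critical suppliers' external assets expire. The supplier hostnames and the 30-day threshold are illustrative placeholders, not a complete attack-surface program:

```python
# Minimal sketch: flag supplier-facing TLS certificates nearing expiry,
# one small signal within continuous third-party monitoring.
import socket
import ssl
from datetime import datetime, timezone

SUPPLIER_HOSTS = ["supplier-portal.example.com", "vendor-api.example.org"]  # placeholders

def cert_days_remaining(host: str, port: int = 443) -> int:
    """Return days until the host's TLS certificate expires."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    expires = datetime.strptime(cert["notAfter"], "%b %d %H:%M:%S %Y %Z")
    return (expires.replace(tzinfo=timezone.utc) - datetime.now(timezone.utc)).days

for host in SUPPLIER_HOSTS:
    try:
        days = cert_days_remaining(host)
        if days < 30:  # illustrative threshold
            print(f"ALERT: certificate for {host} expires in {days} days")
    except OSError as err:
        print(f"Could not reach {host}: {err}")
```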
2. Prepare for supply chain compromise scenarios
Regrettably, even mature organizations with developed third-party risk management programs get compromised.
Supply chain attacks have led to some of the most striking headlines about cyber hacks in recent years and are increasingly becoming the method of choice for criminals who want to hit as many victims as possible, as well as for sophisticated actors who want to remain undetected while they access sensitive data.
Preparedness and resilience are quickly becoming essential tools in the kit bag of organizations relying on critical third parties.
In practice, the measures that organizations can introduce to prepare for third-party compromise include:
i) Including suppliers in your business continuity plans. For important business processes that rely on critical suppliers or third-party technology, understand the business impact, data recovery time and point objectives, workarounds, and recovery options available to continue operating during a disruption.
ii) Exercising cyber-attack scenarios with critical third parties in order to develop muscle memory and effective ways of working during a cyber attack that may affect both the third party and the client. Ensure both sides have access to the right points of contact – and their deputies – to report an incident and work together on recovery in a high-pressure situation.
iii) Introducing redundancies across the supply chain to eliminate single points of failure. This is a difficult task, especially in relation to legacy suppliers providing unique services or products. However, understanding your options and available substitutes will reduce dependency on suppliers and provide access to workarounds during disruptive events such as a supply chain compromise.
3. Secure your own estate (monitor third-party access, contractual obligations)
Protecting your own estate is as important as reducing exposure to third-party risk. Strengthening your internal defenses to mitigate damage if a third party is compromised involves a number of important good practice measures, including but not limited to:
i) Enhanced security monitoring of third-party user activity on your network,
ii) Regular review of access permissions granted to third-party users across your network, including timely termination of leavers (see the sketch after this list),
iii) Continuous identification and monitoring of your own external attack surface, including new internet-facing assets and vulnerable remote access methods,
iv) Employee security training and social engineering awareness, including implementation of additional security verification procedures to prevent impersonation of employees and third parties.
v) Security vetting of third-party users with access to your environment or data.
As third-party threats evolve and become more prominent, organizations must have a clear view of who they’re connected to and the risks those connections pose. What is needed is an end-to-end approach to cyber due diligence, encompassing assessment, monitoring, and response capabilities to threats across the supply chain before damage is done.
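On point ii) above, even a simple automated review will surface stale third-party access. A minimal sketch, assuming an exported CSV of external accounts with last-login timestamps (the column names and 90-day threshold are illustrative):

```python
# Minimal sketch: flag third-party accounts unused for 90+ days as
# candidates for review or termination. Input format is assumed.
import csv
from datetime import datetime, timedelta

STALE_AFTER = timedelta(days=90)  # illustrative threshold

def stale_third_party_accounts(path: str) -> list[str]:
    flagged = []
    now = datetime.now()
    with open(path, newline="") as f:
        # Assumed columns: account, vendor, last_login (ISO 8601)
        for row in csv.DictReader(f):
            if now - datetime.fromisoformat(row["last_login"]) > STALE_AFTER:
                flagged.append(f"{row['account']} ({row['vendor']})")
    return flagged

for account in stale_third_party_accounts("third_party_access.csv"):
    print("Review:", account)
```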
Third-party risk will remain a challenge for many organizations for years to come, especially as more threat actor groups begin to explore supply chain compromise as an attractive tactic, offering high rewards with relatively low resistance.
Regulators across all sectors are beginning to pay greater attention to supply chain security. Frameworks such as DORA, NIS2 and the Cyber Resilience Act reflect a growing consensus that supply chain security must be a key component of digital strategy. Those who lead on this issue will be best placed to navigate supply chain compromise.
Many people have never heard of Wikidata, yet it’s a thriving knowledge graph that powers enterprise IT projects, AI assistants, civic tech, and even Wikipedia’s data backbone. As one of the world’s largest freely editable databases, it makes structured, license-free data available to developers, businesses, and communities tackling global challenges.
With a gleaming new API, an AI-ready initiative, and a long-standing vision of decentralization, Wikidata is redefining open data’s potential. This article explores its real-world impact through projects like AletheiaFact and Sangkalak, its many technical advances, and its community-driven mission to build knowledge “by the people, for the people,” while unassumingly but effectively enhancing Wikipedia’s global reach.
Wikidata’s impact: from enterprise to civic innovation
Launched in 2012 to support Wikipedia’s multilingual content, Wikidata today centralizes structured data — facts like names, dates, and relationships — and streamlines updates across Wikipedia’s language editions. A single edit (like the name of a firm’s CEO) propagates to all linking pages, ensuring consistency for global enterprises and editors alike. And beyond Wikipedia, Wikidata’s machine-readable format makes it ideal for business-tech solutions and ripe for developer innovation.
Wikidata’s database includes over 1.3 billion structured facts and even more connections that link related data together. This massive scale makes it a powerful tool for developers. They can access the data using tools like SPARQL (a query language for exploring linked data) or the EventStreams API for real-time updates. The information is available in a wide variety of tool-friendly formats like JSON-LD, XML, and Turtle. Best of all, the data is freely available under CC0, making it easy for businesses and startups to build on.
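For a feel of how accessible this is, the public SPARQL endpoint can be queried with a few lines of Python; the query below (an illustrative example) pulls five chemical elements and their symbols:

```python
# Minimal sketch: query Wikidata's public SPARQL endpoint.
# P31 = instance of, Q11344 = chemical element, P246 = element symbol.
import requests

ENDPOINT = "https://query.wikidata.org/sparql"
QUERY = """
SELECT ?elementLabel ?symbol WHERE {
  ?element wdt:P31 wd:Q11344 ;
           wdt:P246 ?symbol .
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
LIMIT 5
"""

resp = requests.get(ENDPOINT, params={"query": QUERY, "format": "json"},
                    headers={"User-Agent": "wikidata-demo/0.1 (example)"})
resp.raise_for_status()
for row in resp.json()["results"]["bindings"]:
    print(row["elementLabel"]["value"], "-", row["symbol"]["value"])
```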
Wikibase’s robust and open infrastructure drives transformative projects. AletheiaFact, a São Paulo-based platform for verifying political claims, harnesses Wikidata’s records to drive civic transparency, empowering communities with trusted government insights and showcasing open knowledge’s transformative impact. In India, Wikidata was used to create a map of medical facilities in Murshidabad district, color-coded by type (sub-centers, hospitals, etc.), making healthcare access easier.
In Bangladesh, Sangkalak opens up access to Bengali Wikisource texts, unlocking a trove of open knowledge for the region. These projects rely on a mix of SPARQL for fast queries, the REST API for synchronization, and Wikimedia’s Toolforge platform for free hosting, empowering even the smallest of teams to deploy impactful tools.
A lot of large tech companies also use Wikidata’s data. One example is WolframAlpha, which accesses Wikidata through its WikidataData function, retrieving data such as chemical properties via SPARQL for computational tasks. This integration with free and open data streamlines data models, cuts redundancy, and boosts query accuracy for businesses, all with zero proprietary constraints.
Wikidata’s vision: scaling for a trusted, AI-driven future
Handling nearly 500,000 daily edits, Wikidata pushes the limits of MediaWiki, the software it shares with Wikipedia, and the team is working to scale it on several fronts. As part of this work, a new RESTful API has simplified data access, thereby energizing Paulina, a public domain book discovery tool, and LangChain, an AI framework with strong Wikidata support. Developers enjoy the API’s responsiveness, sparking excitement for Wikidata’s potential in everything from civic platforms like AletheiaFact to quirky experiments.
The REST API release has had immediate impact. For example, developer Daniel Erenrich has used it to integrate access to Wikidata’s data into LangChain, allowing AI agents to retrieve real-time, structured facts directly from Wikidata, which in turn supports generative AI systems in grounding their output in verifiable data. Another example is the aforementioned Paulina, which relies on the API to surface public domain literature from Wikisource, the Internet Archive and more, a fine demonstration of how easier access to open data can enrich cultural discovery.
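For a flavor of the LangChain side, the community package ships a ready-made Wikidata tool; a minimal sketch (package layout can shift between releases, so treat the import paths as assumptions to verify):

```python
# Minimal sketch: fetch structured Wikidata facts via LangChain's
# community-maintained tool. Requires the langchain-community package
# and its Wikidata dependencies; APIs may change between releases.
from langchain_community.tools.wikidata.tool import WikidataQueryRun
from langchain_community.utilities.wikidata import WikidataAPIWrapper

wikidata = WikidataQueryRun(api_wrapper=WikidataAPIWrapper())

# Returns labels, descriptions, and key statements for matching entities,
# ready to be passed to an agent as grounded context.
print(wikidata.run("Alan Turing"))
```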
Then there is the visionary leap of the Wikibase Ecosystem project, which enables organizations to store data in their own federated knowledge graphs using MediaWiki and Wikibase, interconnected according to Linked Open Data standards. Decentralizing the data reduces strain on Wikidata and lets it go on serving core data. With its vision of thousands of interconnected Wikibase instances, this project could create a global open data network, boosting Wikidata’s value for enterprises and communities.
The potential here is enormous: local governments, enterprises, libraries, research labs, and museums could each maintain their own Wikibase instance, contributing regionally relevant data while maintaining interoperability with global systems. Such decentralization makes the platform more resilient and more inclusive, offering open data stewardship at every scale.
Community events drive this mission. WikidataCon, organized by Wikimedia Deutschland and running from 31 October to 2 November 2025, unites developers, editors, and organizations in an effort to refine tools and data quality. Wikidata Days, local meetups and editathons foster collaboration and offer support for budding projects like Paulina. These events embody Wikidata’s ethos of knowledge built by the people, for the people, and help it remain transparent and community-governed.
Wikidata and AI: the Embedding Project and beyond
The Wikidata Embedding Project is an effort to represent Wikidata’s structured knowledge as vectors, enabling generative AI systems to employ up-to-date, verifiable information. It aims to address persistent challenges in AI — such as hallucinations and outdated training data — by grounding machine outputs in curated, reliable sources. This could render applications like virtual assistants significantly more accurate, transparent, and aligned with public knowledge.
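The core idea can be sketched in a few lines: verbalize each statement (a subject-property-value triple) into text, embed it, and let semantically similar questions land near it in vector space. This is a conceptual illustration using the open sentence-transformers library, not the project's actual pipeline, and the model choice is an assumption:

```python
# Conceptual sketch of embedding verbalized statements; not the
# Embedding Project's real pipeline. Model choice is illustrative.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

# Triples rendered as plain sentences before embedding.
statements = [
    "Douglas Adams was born in Cambridge.",
    "The Eiffel Tower is located in Paris.",
    "Marie Curie won the Nobel Prize in Physics.",
]
corpus = model.encode(statements, convert_to_tensor=True)

query = model.encode("Where was the author of Hitchhiker's Guide born?",
                     convert_to_tensor=True)
best = int(util.cos_sim(query, corpus).argmax())
print(statements[best])  # the nearest statement grounds the answer
```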
The next decade holds promising opportunities for Wikidata’s continued relevance. As enterprise needs become more complex and interconnected, the demand for interoperable, machine-readable, and trusted datasets will only grow. Wikidata is uniquely positioned to meet this demand — remaining free, open, community-driven, and technically adaptable.
Enterprise IT teams will find particular value in Wikidata’s real-time APIs and its nearly 10,000 external identifiers, which link entries across platforms like IMDb, Instagram, and national library systems. These links reduce duplication, streamline data integration, and bridge otherwise isolated datasets. Whether it’s mapping identities across services or enhancing AI with structured facts, Wikidata provides a scalable foundation that saves time and improves precision.
With AI chatbots and large-language models now woven into everything from enterprise search to productivity software, the need for accurate, real-time information is more urgent than ever. Wikidata’s linked data embeddings could herald a new generation of AI tools — blending the speed of automation with the reliability of human-curated, public knowledge.
As AI reshapes the digital landscape, Wikidata stands out as a beacon of trust and collaboration. By empowering developers, enterprises, and communities alike through projects like AletheiaFact and Sangkalak, it supports transparency, civic innovation, and educational equity. With the Embedding Project improving AI accuracy, the Wikibase Ecosystem enabling federated knowledge networks, and events like WikidataCon and Wikidata Days sparking global collaboration, Wikidata is building an accountable future full of open data. More than a knowledge graph, it’s a people-powered infrastructure for the trustworthy web.
Personalized content is now a fact of life – what was once considered innovative is now standard for online marketing. As anybody who has indulged in a bit of online retail therapy can tell you, websites are now surprisingly accurate in what they recommend, with promotions appearing at just the right time and content adapting as if by magic.
Whilst that’s the superpower of personalization, it’s also a bit… disconcerting? As convenient as it might be to see exactly the right product at exactly the right time, this also raises a lot of questions: where does all of this information actually come from? Exactly how much does this company know about me? Did I really consent to sharing all of this data?
These questions are only becoming more frequent as consumers become more aware of the value of their data. A recent Deloitte study showed over two-thirds of smartphone users worry about data security and privacy on their devices, whilst in the US 86% of consumers are more worried about their data privacy than the state of the economy.
These are sobering statistics and beg the question: if consumers are crying out for better data protection, how can businesses enact a privacy-first approach to data-driven personalization?
Thinking strategically about personalization
The first important part of making personalization fit for our privacy-conscious age is ensuring that it’s done with purpose. Thinking strategically about personalization, as opposed to just considering the technical aspects of it, is crucial to building a model which is both useful to a business and respects data privacy demands from consumers.
Personalizing without a clear goal risks losing consumer trust: just because a business can collect a certain piece of data or display content to a specific target group doesn’t mean it should. Over-personalization or irrelevant suggestions can cause rejection, especially when it’s unclear where the information comes from, so it is always better to personalize with purpose.
This also applies to the data that businesses collect. Even with consent, users today expect to decide what information they share. The starting point shouldn’t be a tracking script, but a deliberate content strategy: Which data is truly necessary? What do we want to achieve with it? And how can we explain it clearly and understandably?
Doing this properly brings two benefits: the data is legally secure and often significantly better in quality. Transparency also builds trust – which is more important than ever in digital marketing. Instead of asking for a full set of personalization data upfront, businesses should consider asking for smaller data points like a postcode to show local offers. This approach creates value for both sides and, crucially, builds consumer trust.
Segments rather than individuals
Advances in technology now mean that personalization can be really granular – but is that always desirable? In a privacy-conscious world, definitely not.
Not every user wants to be individually addressed, and not every website needs to do so. Often, it’s more effective to tailor content for groups with similar interests, behavior, or needs. Common segments include first-time visitors vs. return users, mobile vs. desktop users, regional audiences, or browsers who never add items to their cart.
Targeting these groups allows for impactful content variation – without the complexity of individual personalization. Privacy preferences can also be respected: cautious users are addressed neutrally, while opt-in users get a more personal experience.
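A segment-first approach is also straightforward to implement with coarse, low-sensitivity signals; this minimal sketch (segment names and rules are illustrative) shows the idea:

```python
# Minimal sketch: rule-based segmentation from coarse signals only.
# Segment names and rules are illustrative.
from dataclasses import dataclass

@dataclass
class Visitor:
    is_returning: bool
    device: str            # "mobile" or "desktop"
    has_cart_items: bool
    opted_in: bool         # explicit personalization consent

def assign_segment(v: Visitor) -> str:
    if not v.opted_in:
        return "neutral"               # cautious users get the default experience
    if not v.is_returning:
        return "first_time_visitor"
    if not v.has_cart_items:
        return "browser_no_cart"
    return f"returning_{v.device}"

print(assign_segment(Visitor(True, "mobile", True, True)))  # returning_mobile
```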
Flexibility is key
Many companies struggle to reconcile data protection and personalization – often because they see them as contradictory. But the opposite is true: taking data protection seriously builds trust and allows for better personalization.
Take consent banners as an example: one which clearly differentiates data types and allows easy management of preferences is more transparent and, as consistent data shows, reduces bounce rates.
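In practice, that differentiation just means modeling consent per purpose rather than as a single yes/no; a minimal sketch (the category names are illustrative, not tied to any specific framework):

```python
# Minimal sketch: per-purpose consent instead of one global flag.
consent = {
    "strictly_necessary": True,   # always on
    "analytics": False,
    "personalization": False,
    "advertising": False,
}

def allowed(purpose: str) -> bool:
    return consent.get(purpose, False)

# Content code checks the specific purpose before touching any data.
if allowed("personalization"):
    print("Show locally tailored offers")
else:
    print("Show the neutral default experience")
```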
The key is to recognize that flexibility on what consumers expect is king. Personalization is not a one-time project and, just as regulation is continuously evolving, so are user expectations. Successful privacy-first personalization means regularly reviewing and adapting content, processes, and technology.
The bottom line is that personalization is not an end in itself. Rather, it’s meant to help deliver the right content to the right audience at the right time – without crossing lines. Focusing on what users truly need and are willing to share often leads to better results than collecting as much data as possible.
A privacy-first approach to personalization isn’t an oxymoron, it’s a necessity in the modern world. Personalization shouldn’t just be a technical concept, but one that places consumers at the heart of what a business does and offers – not just relevant content, but a brand built on clarity, consistency and respect for consumer attitudes towards privacy.
Last week, a new country song called “Together” appeared on Spotify under the official artist page of Blaze Foley, a country artist shot and killed in 1989. The ballad was unlike his other work, but there it was: cover art, credits, and copyright information – just like any other new single. Except this wasn't an unearthed track from before his death; it was an AI-generated fake.
After being flagged by fans and Foley's label, Lost Art Records, and reported on by 404 Media, the track was removed. Another fake song attributed to the late country icon Guy Clark, who passed away in 2016, was also taken down.
The report found that the AI-generated tracks carried copyright tags listing a company named Syntax Error as the owner, although little is known about it. Stumbling across AI-made songs on Spotify isn't unusual: there are entire playlists of machine-generated lo-fi beats and ambient chillcore that already rake in millions of plays. But those tracks are typically presented under imaginary artist names and usually have their origin mentioned.
The attribution is what makes the Foley case unusual. An AI-generated song uploaded to the wrong place and falsely linked to real, deceased human beings is many steps beyond simply sharing AI-created sounds.
Synthetic music embedded directly into the legacy of long-dead musicians without permission from their families or labels is an escalation of the long-running debate over AI-generated content. That it happened on a giant platform like Spotify and didn't get caught by the streamer's own tools is understandably troubling.
And unlike some cases where AI-generated music is passed off as a tribute or experiment, these were treated as official releases. They appeared in the artists’ discographies. This latest controversy adds the disturbing wrinkle of real artists misrepresented by fakes.
Posthumous AI artists
As for what happened on Spotify's end, the company attributed the upload to SoundOn, a music distributor owned by TikTok.
“The content in question violates Spotify’s deceptive content policies, which prohibit impersonation intended to mislead, such as replicating another creator’s name, image, or description, or posing as a person, brand, or organization in a deceptive manner,” Spotify said in a statement to 404.
“This is not allowed. We take action against licensors and distributors who fail to police for this kind of fraud and those who commit repeated or egregious violations can and have been permanently removed from Spotify.”
That it was taken down is great, but the fact that the track appeared at all suggests an issue with flagging these problems earlier. Considering Spotify processes tens of thousands of new tracks daily, the need for automation is obvious. However, that means there may be no checking into the origins of a track as long as the technical requirements are met.
That matters not just for artistic reasons, but as a question of ethics and economics. When generative AI can be used to manufacture fake songs in the name of dead musicians, and there’s no immediate or foolproof mechanism to stop it, then you have to wonder how artists can prove who they are and get the credit and royalties they or their estates have earned.
Apple Music and YouTube have also struggled to filter out deepfake content. And as AI tools like Suno and Udio make it easier than ever to generate songs in seconds, with lyrics and vocals to match, the problem will only grow.
There are verification processes that can be used, as well as building tags and watermarks into AI-generated content. However, platforms that prioritize streamlined uploads may not be fans of the extra time and effort involved.
AI can be a great tool for helping produce and enhance music, but that's using AI as a tool, not as a mask. If an AI generates a track and it's labeled as such, that's great. But if someone intentionally passes that work off as part of an artist’s legacy, especially one they can no longer defend, that’s fraud. It may seem a minor aspect of the AI debates, but people care about music and what happens in this industry could have repercussions in every other aspect of AI use.
In short order after Apple TV+ shared our first look at the set of Ted Lasso season 4, one of the best streaming services is wasting no time turning its eye to other content in the pipeline.
We’ve known for a while that Vince Gilligan, the creator of Breaking Bad and Better Call Saul, is working on a new show for Apple TV+, but now we have a countdown, which will hopefully bring us even more information.
Tagged “From The Creator Of Breaking Bad” in thin black text over a yellow background, and next to a jar with a smiley face drawn in a Petri dish via a Q-Tip, the image is attached to a post on X (formerly Twitter) that reads, “Happiness is Contagious.”
Happiness is Contagious. pic.twitter.com/izGKiHgPIt (July 22, 2025)
It’s certainly a nod to Gilligan’s project, and likely hints that a formal show title, description, full casting, and maybe even a first look or trailer are on the horizon. We already knew that it would star Rhea Seehorn, who appeared on Better Call Saul, but a countdown clock is now visible on the Apple TV+ YouTube channel, pointing to a reveal on Saturday, July 26, 2025.
We do know that the show will be a mix of science fiction and drama, but not much else. Maybe it’ll feature a Petri dish with a smiley face, though. Back in 2023, Gilligan teased that the show has no overlap with Better Call Saul, but will be set in Albuquerque, just a very different Albuquerque.
At the time, Gilligan told Variety that it’s heavy science-fiction, and noted, “It’s the modern world – the world we live in – but it changes very abruptly. And the consequences that that reaps hopefully provide drama for many, many episodes after that.”
It’s been a long time coming, but we’ll finally know more when this countdown hits zero on July 26 – just don’t expect Apple TV+ to drop every episode on that date. Still, I’ll be keeping an eye on the comments on the YouTube livestream countdown, and on social media for theories on this one.
Forgotten accounts for apps you no longer use might not seem like your most pressing security concern, but new research has claimed they can be far more than digital clutter.
A study by Secure Data Recovery found 94% of respondents admitted to having one or more zombie accounts - accounts left unused for at least 12 months.
These neglected profiles often remain active and vulnerable, giving cybercriminals a quiet back door into users’ digital lives.
Pandora, Groupon, and Shutterfly lead the list of forgotten services
Pandora tops the list of abandoned services, with 40% of respondents admitting they still have unused accounts; Groupon and Shutterfly follow closely, reflecting a wider trend of users drifting away from once-popular platforms.
“That account you haven’t logged into for over a year? It’s still there,” the study notes, warning that abandoned profiles are ripe for hijacking.
These unused accounts aren’t limited to music or shopping, as photo-sharing platforms like Dropbox, Tumblr, and Flickr are also frequently forgotten - and the trend even extends to more sensitive categories, with dating apps such as Tinder, OkCupid, and Bumble ranking highest in abandonment. In the financial space, Acorns, Mint, and YNAB are often left idle, despite potential access to personal or financial information.
Many users simply forget these accounts exist, assuming that inactivity means deletion. In other cases, disinterest drives abandonment.
Facebook ranks highest in dissatisfaction, followed by Twitter/X and Amazon Prime Video. Some platforms failed to keep up with expectations, while others, like Prime Video, alienated users by adding ads.
Interestingly, Prime Video also appears on the list of most-missed services, suggesting users are divided in their views.
The consequences of ignoring these accounts go well beyond clutter.
Reusing passwords across sites, especially between zombie accounts and work or banking logins, creates serious risk.
Secure Data Recovery warns: “Having the same login for that eight-year-old Tumblr account and your active work email might not be in your best interest.”
The countdown clock to Rockstar Games' Grand Theft Auto 6 feels like it's ticking faster than ever, with a release date set for May 26, 2026 – and in the meantime, a new rumor may spell great news for PS5 Pro owners.
According to Detective Seeds on X, the leaker behind accurate Oblivion remake claims, GTA 6 will run at 60fps on PS5 Pro, with Sony engineers reportedly working closely with Rockstar to hit that performance target – so it's safe to say there's a level of credibility here.
Detective Seeds suggests that there will be multiple graphical settings, but these will reportedly only be available on the PS5 Pro, not the base console. It doesn't sound completely far-fetched either: Sony and Rockstar have maintained a strong marketing partnership over the years, and that's rumored to continue in the run-up to GTA 6's launch.
Based on the leak, there are hints that 60fps on the base PS5 isn't completely off the cards; rumors also point to Sony and Rockstar optimizing other titles for 60fps, which brings Red Dead Redemption 2 to mind.
Fans have been requesting a 60fps patch for the critically acclaimed title, so it would be surprising if this wasn't aimed at the base PS5 (especially since it has already been achieved via console exploits). The visual fidelity in GTA 6 is arguably vastly superior to Red Dead Redemption 2's, but the two are still in similar ballparks – so, if the base PS5 gets a 60fps patch for the 2018 title, could that mean the same for GTA 6?
(Image credit: Rockstar Games)
Analysis: 60fps or not, I'm not paying $700 for the PS5 Pro
Surely I'm not the only one who doesn't really care whether GTA 6 runs at 60fps on console? I mean, don't get me wrong, I'd love to see it available in some capacity, and this isn't me saying '30fps is perfectly fine, stop complaining.' However, you better believe I'm not paying $700 for a PS5 Pro just to achieve that performance target.
I'd argue that Rockstar Games' GTA 6 is one of the only titles where I'd happily settle for high-quality visuals at 4K 30fps over 60fps on console (if 60fps optimization weren't possible).
Perhaps you could say that's my excitement for its eventual launch on PC speaking, since I know much higher frame rates will inevitably be available – but if I could play Final Fantasy XVI, a fast-paced action RPG, in its quality graphics mode on PS5 without it ruining the experience, then I can easily do the same with arguably the most anticipated game of all time.
Again, I must stress that 60fps should become a priority for developers on console, but I don't think it will be the end of the world if that doesn't happen for GTA 6 on the base PS5.
Australian fashion brand SABO leaked sensitive data on millions of its customers by keeping an unencrypted, non-password-protected database on the internet, available to anyone who knew where to look.
Jeremiah Fowler, a security researcher known for discovering these types of leaks, found a 292GB archive of 3,587,960 PDF documents containing names, physical addresses, email addresses, phone numbers, and other personally identifiable information (PII) belonging to both retail and corporate SABO customers.
The number of people whose information was leaked could be around 3.5 million, but it could also be up to fifty times as many.
Locking the database down
“In one single PDF file, there were 50 separate order pages, indicating that the total number of potential customers is higher than the total number of PDF files in the database,” Fowler explained.
The information was generated via an internal document management storage system, designed to track sales and returns, as well as the corresponding domestic and international shipping documents.
Since the file dates range from 2015 to 2025, it is safe to assume that some of the information is outdated, while some remains current.
Fowler reached out to SABO with the information, and the database was locked down “within hours”. However, the company never replied to the researcher’s email, so we don’t know how long the database remained open, who maintained it, or whether someone managed to find and exfiltrate the information before he did.
SABO is an Australian fashion brand that designs and sells exclusive collections of clothing, shoes, swimwear, sleepwear, and formal attire. It operates primarily in Australia, but also sells its products online and ships worldwide.
It currently has three stores in the country and has reported an annual revenue of $18 million for 2024.