
TechRadar News

All the latest content from the TechRadar team

The foldable iPhone could have much smaller screens than the Samsung Galaxy Z Fold 7

Wed, 07/23/2025 - 05:56
  • A report suggests the foldable iPhone will have a 7.8-inch foldable screen and a 5.5-inch cover screen
  • We've heard those sizes before, so they may well be accurate
  • That, though, would make the foldable iPhone's screens significantly smaller than those of its main rival

There are still a lot more questions about the foldable iPhone than answers, and one of those questions is what size screens it will have. But we aren’t left wondering due to a lack of information – rather, there have been many conflicting answers.

Now though, we’re starting to see some consensus, as a report from TrendForce (via MacRumors) claims that the foldable iPhone will have a 7.8-inch foldable display and a 5.5-inch cover screen – which are both dimensions we’ve heard before.

Ming-Chi Kuo (an analyst with a good track record for Apple information) said the same back in March, and we also heard almost identical sizes from tipster Digital Chat Station in February.

So while not all sources agree, there are now enough leaks pointing towards these sizes that they seem the most likely.

Smaller than the competition

The Samsung Galaxy Z Fold 7 has much bigger screens (Image credit: Lance Ulanoff / Future)

If these sizes are correct, then the foldable iPhone’s screens wouldn’t be especially large. The Samsung Galaxy Z Fold 7, for example, has an 8.0-inch foldable screen and a 6.5-inch cover display. That means you essentially have a small tablet and a medium-sized smartphone all in one, whereas the foldable iPhone could end up with a foldable screen that’s still a fair bit smaller than a tablet’s, and a cover screen that’s quite compact.

We’d argue that might hold it back, since it probably wouldn’t fulfill either smartphone or tablet roles as well for most people, but then screen size is just one piece of the puzzle. If the remaining specs impress, the phone is suitably slim and light, and it’s not prohibitively expensive, then this could still be the device to make foldable phones mainstream.

We probably won’t find out for a while, as the foldable iPhone is unlikely to launch before late 2026 – and may arrive even later than that.

Apple Watch leak suggests it could soon get a sleep-tracking upgrade it should have had years ago

Wed, 07/23/2025 - 05:36
  • Apple could add a sleep score in a future Apple Watch update
  • It could measure your sleep stages, temperature, and more
  • We don’t know when – or if – this feature will be added to watchOS

The best Apple Watches can track many things, including several different aspects of your nighttime slumber. But one thing they can’t do right now is provide you with a score that indicates the quality of your sleep. Yet according to a leaked graphic, that’s something that might soon be coming to Apple’s wearable.

That information was discovered by writer Steve Moser, who dredged up a graphic named “Watch Focus Score” from deep within the code of Apple’s Health app (via MacRumors). The combination of the image’s name and its contents might imply that Apple is working on a new sleep score feature for watchOS.

The picture depicts an Apple Watch with the number 84 in the center of its display. This number is surrounded by three bars that curve to form a circle. Interestingly, the bars are colored red, light blue and purple, and these tones correspond to the sleep stages shown in the Health app (there, red indicates time awake, light blue means REM sleep, and purple means deep sleep. The app also uses dark blue for core sleep, which could be what the graphic is showing).

The number and colored bars might hint at an overall score that takes into account the different sleep stages and how much of each you got at night. That would provide an extra level of data that you don’t currently find in watchOS.

More than just sleep stages?

(Image credit: Future / Britta O'Boyle)

But there are indications that other factors could be considered for this score. In Apple’s graphic, the Apple Watch is flanked on both sides by various icons, including a moon and stars, a “zzz,” a bed, and an alarm clock. Right now, Apple uses the bed icon for the sleep focus mode, while the alarm clock may signify when your alarm went off or when you got out of bed.

Moser also spotted a thermometer icon, which could be a hint that Apple will take more than just sleep stages into account when calculating a sleep score. It might incorporate wrist temperature as an indicator of your health, for example, and there may be other as-yet-unknown metrics that are also included as part of the overall score.

If this sleep score feature becomes a reality, Apple will be far from the first smartwatch maker to offer one: both Fitbit and Garmin have included sleep scores in their devices for years.

But Apple fans won’t mind that if they do indeed get this functionality in a future update – you never know, it might come to watchOS 26 later this year.

Transformation fatigue: the silent barrier to AI success

Wed, 07/23/2025 - 05:33

You can feel it in the silence after the announcement: “We’re rolling out AI. It’s going to change everything.” No excitement. Just a quiet recalibration. More meetings. More tools. More disruption. Again.

For many organizations, AI isn’t landing as a breakthrough; it’s landing as a burden. Not because the technology doesn’t have potential, but because the way it’s being implemented is exhausting people. And exhausted people don’t drive transformation. This is what transformation fatigue looks like, and in the age of AI, it’s more common than ever.

AI’s problem isn’t the tech. It’s trust.

Across industries, teams are buckling under the weight of initiatives that arrive fast and land flat, with big promises, buzzwords and a new “strategic pivot” every quarter. Under the surface, something deeper is breaking: trust in the process.

Fatigue isn’t just exhaustion from doing too much; it’s frustration from doing too much that doesn’t matter. And AI, for all its promise, is becoming the latest culprit. When AI tools are introduced before teams are prepared, and when outcomes are measured in jargon, not value, enthusiasm evaporates.

Why product thinking cuts through the noise

This isn’t just a change problem; it’s a design problem. Too many organizations still treat transformation as a project. But AI doesn’t work that way: it evolves and iterates, and it needs to be adopted in the flow of work, not bolted on.

This is where a product-led mindset makes the difference. In a product-centric operating model, change is continuous, teams are cross-functional and close to the customer, value is delivered incrementally, and outcomes, not activities, guide decisions.

For IT management teams in particular, this shift is critical. They are often the first to feel the friction: implementing systems without full buy-in, training people on tools that weren’t designed with them in mind. These functions carry the weight of cultural change, yet are frequently excluded from strategic planning until rollout is already under way.

However, most organizations aren’t ready. A Harvard Business Review study found that 59 percent of product managers lack the skills to manage AI-driven products. To close the gap, 73 percent of companies are launching internal training, and those that do report a 28 percent increase in product success rates. It’s not the tech that makes AI work; it’s the capability around it.

What transformation fatigue actually looks like

The signs of fatigue aren’t always obvious, but they are almost always cultural.

One of the main causes of transformation fatigue is the long wait for value. AI initiatives often take too long to show impact, belief in the cause drops off, and teams disengage before results arrive. Then there’s the sense that the new change looks suspiciously like the old change: leaders rebrand, employees begin to roll their eyes, and in the end it feels like version five of the same plan.

On top of this, methodologies start replacing thinking. Progress is measured in process, not outcomes. Buzzwords like “agile”, “transformation”, and “AI” lose meaning. And when capability gaps appear, the burden of change falls on people least equipped to carry it.

This is especially visible among frontline managers. They’re asked to adopt new systems, support new processes, and keep performance on track – all without enough context, training or time to adapt. The result isn’t just inefficiency; it’s disillusionment, which causes talent to walk out the door.

These are not just operational challenges; they are trust issues, and the longer they go unaddressed, the deeper the fatigue sets in.

So how do we fix it?

The importance of ownership

Companies should start with ownership, not just of tools, but of the transformation itself.

What this means is building capability before rollout, organizing teams around delivering value rather than hierarchy, and governing through experimentation, not perfection. It also means creating room for small failures, fast learning and constant adjustment.

Above all, it requires clarity. This means saying what’s changing, saying why it matters, and making sure to say it again and again. Repetition isn’t the problem; confusion is.

This also means involving teams earlier in the process. Let them test, question and shape how change is applied in their context. Ownership doesn’t happen by decree. It happens through participation.

Transformation that actually transforms

Transformation fatigue isn’t inevitable. It’s a signal that the way we’re leading change isn’t working. The good news is that we don’t have to keep doing it this way.

Product-led thinking gives teams a different path forward, one that doesn’t rely on perfect plans but instead builds momentum through visible progress. It builds capability, creates feedback loops, and gets people involved.

It also builds trust. Not through slogans, but through small wins that actually matter. When teams see impact, they stay engaged; when leaders follow through, people follow back. In the end, when experiments are welcomed, better ideas emerge.

When you design change to work for people, not just around them, AI becomes a tool for focus, not friction. It becomes something worth investing in and believing in again.

That’s when transformation stops being exhausting and starts being real.

We've listed the best product management software.

This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro

I am an AI expert and here's what businesses should know about using popular AI chatbots for writing content

Wed, 07/23/2025 - 05:08

The AI hype felt relentless in 2023/24. While the initial frenzy has subsided somewhat, executives and professionals now grapple with the reality of deploying Artificial Intelligence (AI), specifically Generative AI (GenAI), within their organizations.

LLMs (Large Language Models), the technology behind popular GenAI chatbots, are powerful, but there remains a significant disconnect between the perception of what they can do and their practical application for business writing.

Easy-to-use interfaces like ChatGPT make GenAI seem like it "can literally do anything".

This is a dangerous misconception. While incredibly useful for certain tasks, GenAI chatbots can be totally useless, and even harmful when not used appropriately.

Fundamental differences

The fundamental difference lies in how GenAI works compared to traditional software.

1. Traditional software is deterministic

It follows fixed logic and algorithms, producing the exact same, 100% accurate, and therefore repeatable result every time you give it the same input. Think of hitting CTRL+F in Word – you get a precise, repeatable count of a term.

2. Generative AI is non-deterministic

LLMs predict the next word based on probabilities from their training data. This means asking the same question twice will often give you different answers. They are designed to be variable.
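A minimal sketch of this contrast, using Python's built-in string search for the deterministic case (the CTRL+F analogy) and a toy next-token distribution for the non-deterministic one; the vocabulary and probabilities here are invented for illustration, not taken from any real model:

```python
import random

# Deterministic: the same input always yields the same output,
# exactly like hitting CTRL+F in Word.
contract = "cybersecurity audit, cybersecurity policy, cybersecurity training"
print(contract.count("cybersecurity"))  # prints 3 on every run, guaranteed

# Non-deterministic: an LLM samples the next token from a probability
# distribution, so the same prompt can yield different continuations.
# (Toy vocabulary and probabilities, invented for illustration.)
next_token_probs = {"secure": 0.5, "compliant": 0.3, "robust": 0.2}
tokens, weights = zip(*next_token_probs.items())

for _ in range(3):
    # Each call may pick a different continuation for the same "prompt".
    print("Our systems are", random.choices(tokens, weights=weights)[0])
```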

Critical characteristics to understand

This core difference results in two critical characteristics businesses must understand:

1. Hallucinations: GenAI can confidently generate incorrect information or make things up. This isn't a bug; it's how the technology works. It's guessing based on patterns, not verifying facts. Copilot, for example, can wildly miscalculate readability scores or miss most instances of a search term.

2. Lack of Repeatability: You simply cannot guarantee the same output from the same prompt.

Here is the absolute critical takeaway: if your writing or document review task requires 100% accuracy or 100% repeatability, you must use deterministic software, not GenAI. Using GenAI for tasks demanding precision is a classic case of wielding a "GenAI hammer" and seeing every problem as a nail.

Flaws and errors in practice

Consider the disastrous consequences. I’ve used MS Copilot to search for every instance of "cybersecurity" in a contract for compliance purposes, only for the GenAI tool to miss 23 out of 27 occurrences. Trying to "shred" a document line-by-line into an Excel matrix for compliance, a task requiring perfect repeatability, is another inappropriate use case where GenAI will fail.

For businesses, especially in regulated sectors, using GenAI for tasks where factual accuracy is paramount is dangerous. Users may trust outputs due to brand credibility, not realizing the risks of inaccuracy.

Real-world failures like Air Canada's chatbot providing false information resulting in a lawsuit underscore the significant brand and trust damage inaccurate GenAI can cause.

So, where IS GenAI useful for business writing?

GenAI thrives for tasks where variability, creativity, or a "good enough" answer is acceptable or desired.

Appropriate use cases include:

  • First Draft Creation: Generating initial versions of documents like management plans, executive summaries, or proposal sections based on context. This can save significant time.
  • Creative Assistance: Rewriting content in a different tone or style.
  • Summarization: Condensing lengthy documents.
  • Simplification/Rephrasing: Making complex text more accessible or refining paragraphs.
  • Research & Analysis: Using public data for competitive analysis or sales research where perfect accuracy on every detail isn't required for generating insights. Using NLP (another type of AI) for thematic analysis across communications to check message consistency.

Beyond simple chatbots, the real value often lies in specialized applications. These layer GenAI into workflows for specific jobs, intelligently combining GenAI for creative/drafting tasks with deterministic software for accuracy-critical functions like readability scoring or compliance checks.

They understand the "job to be done" and apply the right technology. NotebookLM, which generates audio summaries of documents, is a great example of a focused application.
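As a rough illustration of that hybrid pattern, here is a sketch that routes accuracy-critical work to deterministic code and creative work to a stubbed GenAI call. The Flesch reading-ease formula is the standard published one; the routing scheme, function names, and syllable heuristic are ours, not taken from any particular product:

```python
import re

# Deterministic path: Flesch reading ease, computed the same way every run.
def count_syllables(word: str) -> int:
    # Crude heuristic: count groups of consecutive vowels.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text: str) -> float:
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    n = max(1, len(words))
    syllables = sum(count_syllables(w) for w in words)
    return 206.835 - 1.015 * (n / sentences) - 84.6 * (syllables / n)

# Non-deterministic path: a stand-in for a GenAI drafting call.
def draft_with_llm(prompt: str) -> str:
    return f"[LLM draft for: {prompt}]"  # imagine a chat-completion call here

def handle(task: str, payload: str) -> str:
    # Route accuracy-critical jobs to deterministic code, creative jobs to GenAI.
    if task == "readability":
        return f"Score: {flesch_reading_ease(payload):.1f}"
    if task == "draft":
        return draft_with_llm(payload)
    raise ValueError(f"unknown task: {task}")

print(handle("readability", "This is a short sentence. It reads easily."))
print(handle("draft", "executive summary for a cloud migration proposal"))
```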

Garbage In, Garbage Out: The Unsexy Truth of Knowledge Management

Generative AI, even when combined with techniques like Retrieval Augmented Generation (RAG) to access proprietary data, is not a magic wand that can overcome poor data quality. The old adage "garbage in, garbage out" is more relevant than ever. If your internal knowledge bases are a mess of outdated content, multiple revisions, and poorly tagged documents, the AI's output will reflect that chaos.

As the Harvard Business Review noted, "Companies need to address data integration and mastering before attempting to access data with generative AI". Good data hygiene – clear folder structures, naming conventions, and processes for maintaining content – is crucial but is fundamentally a human behavior problem, not just a tech one. Investing in proper knowledge management now will pay dividends when you roll out any GenAI solution.
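To see why retrieval cannot rescue a messy knowledge base, consider this toy version of the retrieval step in a RAG pipeline. The corpus, the word-overlap scoring, and the stale pricing document are all invented for illustration; real systems use vector embeddings, but they inherit exactly the same failure mode:

```python
# Toy RAG retrieval: rank documents by word overlap with the question
# and prepend the winner to the prompt.
knowledge_base = [
    "Pricing policy (2019, superseded): enterprise tier costs $40 per seat.",
    "Holiday schedule: the office closes on public holidays.",
]

def retrieve(question: str, docs: list[str]) -> str:
    """Return the document sharing the most words with the question."""
    q_words = set(question.lower().split())
    return max(docs, key=lambda d: len(q_words & set(d.lower().split())))

question = "What does the enterprise tier cost per seat?"
context = retrieve(question, knowledge_base)

# The stale 2019 pricing document wins the retrieval, so the generated
# answer will confidently cite a superseded price: garbage in, garbage out.
prompt = f"Answer using this context:\n{context}\n\nQuestion: {question}"
print(prompt)
```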

Data Security: The Enterprise Achilles' Heel

Many popular AI chatbots rely on public cloud-based LLMs. For businesses, especially those in regulated industries like defense, finance, and healthcare, feeding proprietary, sensitive, or personally identifiable information (PII) into these public models poses a significant security risk. CISOs (Chief Information Security Officers) are rightly wary, often blocking interactions with such models entirely.

The safer path for enterprises involves hosting LLMs in a private cloud or on-premises, fully locked down behind the firewall. The rise of powerful open-source models like Llama 4 or Mistral NeMo, which can be deployed securely in-house, is a welcome trend. This shift is so significant that a Barclays CIO survey last year indicated 83% of respondents plan to repatriate some workloads from the public cloud, largely driven by AI considerations.

The Real Driver: People and Process

Most AI projects fail not due to the technology, but because of people, process, security, and data issues. Lack of buy-in, poor strategy, inadequate data, and insufficient change management and user education are common pitfalls.

Deploying AI chatbots without teaching users about:

  • Hallucinations
  • The need to verify outputs
  • Effective prompting
  • Crucially, what tasks not to use GenAI for

...will lead to frustration and project failure.

Start with the business problem you need to solve, then map the appropriate technology to that job. Don't just chase the "shiny new tech". Define your goals, measure success (both quantitative and qualitative), and involve end-users early.

When evaluating vendors, look beyond captivating demos. Ask pointed questions about accuracy, repeatability, data handling, security posture, and their understanding of your specific use cases and industry needs. Always try before you buy and vet vendors carefully. Be wary of vendors who overpromise or claim GenAI can do everything.

In summary, popular AI chatbots offer exciting capabilities, but they are not magic. They are powerful tools with significant limitations. Successful businesses will adopt a pragmatic, thoughtful approach: understanding GenAI's non-deterministic nature, applying it strategically to appropriate tasks (like creative drafting), leveraging hybrid applications, investing in data quality and security, and crucially, focusing on the people and processes required for effective adoption and change management.

This is the path to truly unlocking AI's value.

We've listed the best AI tools.

This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro

PayPal's new cross-border payments platform looks to make sending money easier for 2 billion users

Wed, 07/23/2025 - 05:02
  • PayPal World will enable users to pay using their domestic wallets
  • PayPal/Venmo, NPCI, Tenpay and Mercado Pago join forces
  • The system works with open source APIs, so should be easily expandable

PayPal has launched a new platform in the hope of simplifying cross-border commerce by connecting major digital wallets and payment systems to facilitate multi-currency transactions.

With PayPal World, users will be able to pay internationally using their domestic wallets and payment methods.

With over two billion users targeted globally, the new platform is expected to land in Fall 2025, but only select countries will be able to benefit from easier cross-border payments at launch.

PayPal World

From launch, PayPal World will work with Mercado Pago (Mexico), NPCI International Payments (India), PayPal (US), Tenpay Global (China) and Venmo (US).

"For much of the world’s population, international shopping and money transfers are not just difficult, at times they are impossible," the company noted in a press release.

The platform uses open source APIs to make it easy for more wallets to get onboard and boost interoperability in the future, but PayPal didn't mention any companies that could be joining its World platform beyond the initial launch partners.

"The challenge of moving money across borders is incredibly complex, and yet this platform will make it so simple for nearly two billion consumers and businesses," PayPal CEO Alex Chriss noted.

PayPal gave some examples of how global customers could use its new platform, including international visitors in China being able to scan their PayPal app with a merchant that accepts Weixin Pay, and UPI users in India being able to pay with their local wallet on an American ecommerce site.

NPCI International Payments CEO Ritesh Shukla welcomed the new platform, adding that it "aligns with [NPCI's] vision to make cross-border payments more seamless, secure, and inclusive."

"In addition to payments, Tenpay Global will deepen its collaboration with PayPal World in remittances," Tenpin Global CEO Wenhui Yang added.

Windows 11 migration is still causing lots of headaches for some firms

Wed, 07/23/2025 - 04:38
  • Running an old OS like Windows 10 could soon pose cybersecurity issues
  • Upgrading could reveal software compatibility issues, report notes
  • The clock is now ticking to avoid a rushed Windows 11 migration

New research has uncovered some of the finer details around why many businesses are still being cautious with their approach to Windows 11 migration, with security threats and financial impacts proving to be major hurdles.

The report from Panasonic found nearly two-thirds (62%) of devices need replacing or upgrading for Windows 11 compatibility, highlighting the scale of the problem – a figure that rises to 76% among larger organizations with 5,000+ employees.

However, despite migration-related concerns, the study claims many organizations still recognize the benefits of upgrading from Windows 10 and older operating systems.

Businesses still have some concerns about upgrading Windows

Panasonic found 94% fear increased ransomware and malware risks if they don't upgrade, with 93% also concerned about data breaches. But two in three noted overall higher costs associated with migrating to Windows 11, with 55% stating that it could add to cybersecurity expenses.

Nearly half also noted software compatibility issues (47%) and productivity loss during downtime (45%), and for a quarter of respondents (25%), hardware upgrades come with software upgrades, compounding the financial impact of OS upgrades.

However, with Microsoft estimating that Extended Security Updates (ESU) could cost around £320,000 over three years for 1,000 devices, the need to upgrade is clear.

Around a third each acknowledge that upgrading will give them better performance and processing power (36%), a more future-proof ecosystem (36%) and access to AI features like Microsoft Copilot (34%).

Panasonic TOUGHBOOK Europe Head of Go-to-Market Chris Turner commented: "The window is closing for organisations to make a well-planned, measured and cost-effective transition to Windows 11 and start unlocking its benefits."

"Organisations that are still to undertake Windows 11 migration need support to ensure their deployment is not rushed and risky," Turner added.

The Google Pixel 10 Pro series has been pictured in four shades – some of which we like far more than others

Wed, 07/23/2025 - 04:32
  • Leaked renders have shown the Pixel 10 Pro and Pixel 10 Pro XL in four shades
  • These include Obsidian, Porcelain, Moonstone, and Jade
  • Two of these are far more interesting than the other two

It’s looking likely that you’ll be able to buy the Google Pixel 10 Pro and the Pixel 10 Pro XL in a choice of Obsidian (black), Porcelain (white), Moonstone (slate blue-gray), and Jade (a soft pistachio green with gold accents). Not only have some of these Pixel 10 colors been mentioned before, but now all four have been shown off in leaked renders.

Android Headlines has shared what it claims are official renders of the Pixel 10 Pro and the Pixel 10 Pro XL in these four shades, and while we’d take this leak with a pinch of salt, these certainly look to be high-quality images, so they may well be official.

If these renders are accurate, then the Pro models will be available in two fairly plain, ordinary shades, in the case of Obsidian and Porcelain, since they’re basically just black and white. But the other two options are a bit more interesting.

Leaked renders of the Pixel 10 Pro in four colors (Image credit: Android Headlines)

Leaked renders of the Pixel 10 Pro XL in four colors (Image credit: Android Headlines)

A bit more color

There’s Moonstone, which we’ve actually seen the Pixel 10 Pro in already via an official teaser. This is rather understated, but the hint of blue in it makes this more interesting than a pure gray option.

The highlight, though, is arguably Jade – it’s a soft, delicate shade that still somewhat fits with the rest of the color options, but is a bit brighter and more unusual. Really, we’d like to see more of this sort of thing, rather than top-end phones defaulting to plain shades, but at least there’s one option here for those who want a splash of color.

We’ll find out how accurate this color leak is soon, as Google is set to unveil the Pixel 10 series on August 20. We’re expecting to see the Pixel 10 itself along with the Pixel 10 Pro, the Pixel 10 Pro XL, and the Pixel 10 Pro Fold, so there should be a lot to look forward to.

Microsoft seemingly confirms Chinese hackers behind SharePoint server attacks

Wed, 07/23/2025 - 04:25
  • Microsoft names three Chinese hacking groups it claims were abusing recently discovered flaws in SharePoint
  • Hackers were apparently able to access sensitive data
  • The company is confident the attacks will keep coming until the systems are patched

At least three major Chinese hacking groups were abusing recently discovered vulnerabilities to target businesses using Microsoft SharePoint, the company has said.

Microsoft recently released an urgent patch to fix two zero-day vulnerabilities affecting on-premises SharePoint servers, tracked as CVE-2025-49704 (a remote code execution bug), and CVE-2025-49706 (a spoofing vulnerability), which were being abused in the wild.

Now, Microsoft is saying that the groups targeting the flaws are Chinese state-sponsored groups - namely Linen Typhoon, Violet Typhoon, and Storm-2603.

Two typhoons and a storm

The first two are part of the larger “Typhoon” operation, which counts at least half a dozen groups, including Brass Typhoon, Salt Typhoon, Volt Typhoon, and Silk Typhoon.

In the last couple of years, these groups have been linked to breaches of critical infrastructure organizations; government, defense, and military firms; telecom operators; and similar businesses across the Western world and NATO members.

Some researchers are saying that these groups were tasked with persisting in the target networks, in case the standoff between the US and China over Taiwan escalates into actual war. That way, they would be able to disrupt or destroy critical infrastructure, eavesdrop on important conversations, and thus gain the upper hand in the conflict.

At least seven major telecommunications operators in the United States have recently confirmed discovering Typhoon operatives on their networks and removing them from the virtual premises.

"Investigations into other actors also using these exploits are still ongoing," Microsoft said in a blog post, stressing that the attackers will definitely continue targeting unpatched systems.

SharePoint Server Subscription Edition, SharePoint Server 2019, and SharePoint Server 2016 were said to be affected. SharePoint Online (Microsoft 365) is not affected.

Microsoft recommends that customers run supported versions of on-premises SharePoint Server with the latest security updates applied immediately, and says users should ensure their antivirus and endpoint protection tools are up to date.

Supercharge your phone with the ultimate wireless power-up

Wed, 07/23/2025 - 03:46

What's better than wireless charging? Even faster wireless charging. The latest Qi2.2 wireless charging standard makes wireless power much faster, much smarter and even more useful – and while several brands have recently obtained Qi2.2 certification, Baseus is the first to publicly release visuals and detailed specifications of three certified devices. So while others make promises, Baseus is already making Qi2.2 products.

That means Baseus customers will be among the very first people to get a massive wireless power-up.

The AM52 is a super-slim power bank with speedy 25W wireless charging (Image credit: Baseus)

Why Qi2.2 is brilliant news for you

Qi2.2 is the very latest version of the world's favourite wireless charging standard. Qi charging is supported by all the big names in smartphones and accessories, delivering convenient and safe wireless charging for all kinds of devices. And the latest version is the best yet. Qi2.2 is much faster, even more efficient and even safer.

There are three key parts to Qi2.2: supercharged wireless power, smarter heat control and magnetic precision. The first means that instead of maxing out at 15W of power like existing wireless chargers do, Qi2.2 can push the limit to 25W – roughly 67% more power than Qi2.0’s 15W. That means much faster charging and less time waiting.

Wireless charging generates heat, and Qi2.2 keeps that down with next-generation thermal regulation, stricter surface temperature limits and improved coils. And the new Magnetic Power Profile (MPP) built into the standard ensures more precise alignment with your phone, reducing energy waste and improving charging efficiency by 15% whether you're charging in the car, at home or on the go.

The powerful PicoGo AM61 comes with its own USB-C cable so you can charge wired and wirelessly at the same time. (Image credit: Baseus)

Qi2.2 is made for everything everywhere

Qi2.2 is made to work across all kinds of devices, from the iPhone 12 and endless Androids to future models that haven't even been made yet. And while it's focused on the future, it's also fully backwards compatible: your Baseus Qi2.2 power bank or charger will happily power up a device made for older Qi standards, and Qi phone cases can add wireless charging capability to older phones that weren't built with wireless charging inside.

Baseus is the industry leader in Qi2.2 charging, and it's just launched three new products that take full advantage of Qi2.2's extra power and improved efficiency: two powerful PicoGo magnetic power banks for any device and a really useful foldable 3-in-1 PicoGo charger for your phone, earbuds and smartwatch.

The two magnetic power banks are the PicoGo AM61 Magnetic Power Bank and the PicoGo AM52 Ultra-Slim Magnetic Power Bank. Both versions deliver a massive 10,000mAh of capacity, both have a 45W USB-C charging port so you can charge two things at once, and both can charge your device wirelessly at up to 25W via the new Qi2.2 standard without any danger of overheating.

The AM52's ultra-slim design features a graphene and aluminium shell for heat dissipation and smart temperature control that protects all of your devices while charging, and the slightly larger AM61 includes a built-in USB-C cable for extra convenience.

If you're looking for a super-speedy compact charger, you'll love the PicoGo AF21 foldable 3-in-1 wireless charger. It delivers the same super-fast 25W wireless charging as its siblings, and with a total 35W of power across its three modules it can wirelessly power up not just your phone but your earbuds and smartwatch too.

That makes it an ideal bedside charger as well as a great travel charger: it’s extremely small at just 75.5 x 80 x 38.11mm, and it’s highly adjustable for optimal viewing and charging. You can rotate the watch panel through 180 degrees, adjust the phone panel through 115 degrees, and adjust the base bracket too.

The PicoGo AF21 foldable 3-in-1 wireless charger is super-portable and extremely adjustable. (Image credit: Baseus)

Ride the next wireless wave with Baseus' brilliant power-ups

Baseus is setting the standard for Qi2.2 wireless charging, and whether you grab the powerful dual-charging PicoGo AM61, the super-slim PicoGo AM52 or the multi-talented PicoGo AF21 charger you're getting the latest, greatest and fastest charging for your phone. With Qi2.2 Baseus isn't just riding the next wireless wave. It's shaping it.

The Baseus PicoGo AM61 Magnetic Power Bank, PicoGo AM52 Ultra-Slim Magnetic Power Bank and PicoGo AF21 Foldable 3-in-1 Wireless Charger will all be available this August, and you'll be able to order them directly from Baseus’s website and from major retailers such as Amazon.

Secure your supply chain with these 3 strategic steps

Wed, 07/23/2025 - 03:38

Third-party attacks are one of the most prominent trends within the threat landscape, showing no signs of slowing down, as demonstrated by recent high-profile cyber incidents in the retail sector.

Third-party attacks are very attractive to cybercriminals: threat actors drastically increase their chances of success and return on investment by exploiting their victims’ supplier networks or open-source technology that numerous organizations rely on.

A supply chain attack is one attack with multiple victims, with exponentially growing costs for those within the supply chain as well as significant financial, operational and reputational risk for their customers.

In a nutshell, in the era of digitization, IT automation and outsourcing, third-party risk is impossible to eliminate.

Global, multi-tiered and more complex supply chains

With supply chains becoming global, multi-tiered and more complex than they have ever been, third-party risks are increasingly hard to understand.

Supply chain attacks can be extremely sophisticated, hard to detect and hard to prevent. Sometimes the most innocuous utilities can be used to initiate a wide-scale attack. Vulnerable software components that modern IT infrastructures run on are difficult to identify and secure.

So, what can organizations do to improve their defenses against third-party risk? We have outlined three strategic steps organizations can take to build meaningful resilience against third-party cyber risk:

1. Identify and mitigate potential vulnerabilities across the supply chain

Understanding third-party risk is a significant step towards its reduction. This involves several practical steps, such as:

i) Define responsibility for supply chain cyber risk management ownership. This role often falls between two stools: internal security teams focus primarily on protecting the organization and its customers, while compliance and third-party risk management programs own responsibility for third-party risk and conduct but often don’t feel confident addressing cyber risks, given their non-technical background.

ii) Identify, inventory and categorize third parties to determine the most critical supplier relationships. From a cyber security perspective, it is important to identify suppliers who have access to your data, suppliers with access into your environment, those who manage components of your IT estate, those who provide critical software, and – last but not least – those suppliers who have an operational impact on your business.

This is a challenging task, especially for large organizations with complex supply chains, and often requires security teams to work together with procurement, finance and other business teams to identify the entire universe of supplier relationships, then filter out those out of scope from a cyber security perspective.

iii) Assess risk exposure by understanding the security controls suppliers deploy within their estate or the security practices they follow during the software development process, and highlight potential gaps. It is important to follow this up with agreement on the remediation actions acceptable to both sides, and to work towards their satisfactory closure. The reality is that suppliers are not always able to implement the security controls their clients require.

Sometimes this leads to client organizations implementing additional resilience measures in-house instead – often dependent on the strength of the relationship and the nature of the security gaps.

iv) Move away from point-in-time assessments to continuous monitoring, utilizing automation and open-source intelligence to enrich the control assessment process. In practice, this may involve identifying suppliers’ attack surfaces and vulnerable externally-facing assets, monitoring for changes of ownership, identifying indicators of data leaks and incidents affecting critical third parties, and monitoring for new subcontractor relationships.

2. Prepare for supply chain compromise scenarios

Regrettably, even mature organizations with developed third-party risk management programs get compromised.

Supply chain attacks have led to some of the most striking headlines about cyber hacks in recent years and are increasingly becoming the method of choice for criminals who want to hit as many victims as possible, as well as for sophisticated actors who want to remain undetected while they access sensitive data.

Preparedness and resilience are quickly becoming essential tools in the kit bag of organizations relying on critical third parties.

In practice, the measures that organizations can introduce to prepare for third-party compromise include:

i) Including suppliers in your business continuity plans. For important business processes that rely on critical suppliers or third-party technology, understand the business impact, data recovery time and point objectives, workarounds, and recovery options available to continue operating during a disruption.

ii) Exercising cyber-attack scenarios with critical third parties in order to develop muscle memory and effective ways of working during a cyber attack that may affect both the third party and the client. Ensure both sides have access to the right points of contact – and their deputies – to report an incident and work together on recovery in a high-pressure situation.

iii) Introducing redundancies across the supply chain to eliminate single points of failure. This is a difficult task, especially in relation to legacy suppliers providing unique services or products. However, understanding your options and available substitutes will reduce dependency on suppliers and provide access to workarounds during disruptive events such as a supply chain compromise.

3. Secure your own estate (monitor third-party access, contractual obligations)

Protecting your own estate is as important as reducing exposure to third-party risk. Strengthening your internal defenses to mitigate damage if a third party is compromised involves a number of important good practice measures, including but not limited to:

i) Enhanced security monitoring of third-party user activity on your network,

ii) Regular review of access permissions granted to third-party users across your network, including timely termination of leavers,

iii) Continuous identification and monitoring of your own external attack surface, including new internet-facing assets and vulnerable remote access methods,

iv) Employee security training and social engineering awareness, including implementation of additional security verification procedures to prevent impersonation of employees and third parties.

v) Security vetting of third-party users with access to your environment or data.

As third-party threats evolve and become more prominent, organizations must have a clear view of who they’re connected to and the risks those connections pose. What’s needed is an end-to-end approach to cyber due diligence, encompassing assessment, monitoring, and the capability to respond to threats across the supply chain before damage is done.

Third-party risk will remain a challenge for many organizations for years to come, especially as more threat actor groups begin to explore supply chain compromise as an attractive tactic, offering high rewards with relatively low resistance.

Regulators across all sectors are beginning to pay greater attention to supply chain security. Frameworks such as DORA, NIS2 and the Cyber Resilience Act reflect the growing concerns that supply chain security must be a key component of digital strategy. Those who lead on this issue will be best placed to navigate supply chain compromise.

We list the best identity management software.

This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro

Wikidata’s next leap: the open database powering tomorrow’s AI and Wikipedia

Wed, 07/23/2025 - 01:00

Many people have never heard of Wikidata, yet it’s a thriving knowledge graph that powers enterprise IT projects, AI assistants, civic tech, and even Wikipedia’s data backbone. As one of the world’s largest freely editable databases, it makes structured, license-free data available to developers, businesses, and communities tackling global challenges.

With a gleaming new API, an AI-ready initiative, and a long-standing vision of decentralization, Wikidata is redefining open data’s potential. This article explores its real-world impact through projects like AletheiaFact and Sangkalak, its many technical advances, and its community-driven mission to build knowledge “by the people, for the people,” while unassumingly but effectively enhancing Wikipedia’s global reach.

Wikidata’s impact: from enterprise to civic innovation

Launched in 2012 to support Wikipedia’s multilingual content, Wikidata today centralizes structured data — facts like names, dates, and relationships — and streamlines updates across Wikipedia’s language editions. A single edit (like the name of a firm’s CEO) propagates to all linking pages, ensuring consistency for global enterprises and editors alike. And beyond Wikipedia, Wikidata’s machine-readable format makes it ideal for business-tech solutions and ripe for developer innovation.

Wikidata’s database includes over 1.3 billion structured facts and even more connections that link related data together. This massive scale makes it a powerful tool for developers. They can access the data using tools like SPARQL (a query language for exploring linked data) or the EventStreams API for real-time updates. The information is available in a wide variety of tool-friendly formats like JSON-LD, XML, and Turtle. Best of all, the data is freely available under CC0, making it easy for businesses and startups to build on.
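To give a feel for how simple that access is, here is a minimal Python sketch (assuming the requests library is installed) that runs a classic introductory SPARQL query — items that are an “instance of” (P31) “house cat” (Q146) — against the public endpoint; only the User-Agent string is our invention:

```python
import requests

# Ask the public Wikidata SPARQL endpoint for five house cats.
query = """
SELECT ?item ?itemLabel WHERE {
  ?item wdt:P31 wd:Q146.
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
LIMIT 5
"""

response = requests.get(
    "https://query.wikidata.org/sparql",
    params={"query": query, "format": "json"},
    headers={"User-Agent": "example-script/0.1 (demo)"},  # clients should identify themselves
    timeout=30,
)

for row in response.json()["results"]["bindings"]:
    print(row["itemLabel"]["value"], "-", row["item"]["value"])
```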

Wikidata’s robust and open infrastructure drives transformative projects. AletheiaFact, a São Paulo-based platform for verifying political claims, harnesses Wikidata’s records to drive civic transparency, empowering communities with trusted government insights and showcasing open knowledge’s transformative impact. In India, Wikidata was used to create a map of medical facilities in Murshidabad district, color-coded by type (sub-centers, hospitals, etc.), making healthcare access easier.

In Bangladesh, Sangkalak opens up access to Bengali Wikisource texts, unlocking a trove of open knowledge for the region. These projects rely on a mix of SPARQL for fast queries, the REST API for synchronization, and Wikimedia’s Toolforge platform for free hosting, empowering even the smallest of teams to deploy impactful tools.

A lot of large tech companies also use Wikidata’s data. One example is WolframAlpha, which uses Wikidata through its WikidataData function, retrieving data like chemical properties via SPARQL for computational tasks. This integration with free and open data streamlines data models, cuts redundancy, and boosts query accuracy for businesses, all with zero proprietary constraints.

Wikidata’s vision: scaling for a trusted, AI-driven future

Handling nearly 500,000 daily edits, Wikidata pushes the limits of MediaWiki, the software it shares with Wikipedia, and the team is working on scaling it in several areas. As part of this work, a new RESTful API has simplified data access, thereby energizing Paulina, a public domain book discovery tool, and LangChain, an AI framework with strong Wikidata support. Developers enjoy the API’s responsiveness, sparking excitement for Wikidata’s potential in everything from civic platforms like AletheiaFact to quirky experiments.

The REST API release has had immediate impact. For example, developer Daniel Erenrich has used it to integrate access to Wikidata’s data into LangChain, allowing AI agents to retrieve real-time, structured facts directly from Wikidata, which in turn supports generative AI systems in grounding their output in verifiable data. Another example is the aforementioned Paulina, which relies on the API to surface public domain literature from Wikisource, the Internet Archive and more, a fine demonstration of how easier access to open data can enrich cultural discovery.
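As a concrete illustration of pulling structured, verifiable facts for that kind of grounding, this sketch uses Wikidata’s long-standing Special:EntityData JSON endpoint; the newer REST API exposes similar item routes, but exact paths vary by version, so treat this as one access pattern rather than the canonical one (requests assumed installed):

```python
import requests

# Fetch the full structured record for Douglas Adams (Q42) as JSON.
url = "https://www.wikidata.org/wiki/Special:EntityData/Q42.json"
entity = requests.get(url, timeout=30).json()["entities"]["Q42"]

# Labels and claims are what an AI agent would ground its answers in.
print(entity["labels"]["en"]["value"])          # "Douglas Adams"
occupations = entity["claims"].get("P106", [])  # P106 = occupation
print(f"{len(occupations)} occupation statement(s) on record")
```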

Then there is the visionary leap of the Wikibase Ecosystem project, which enables organizations to store data in their own federated knowledge graphs using MediaWiki and Wikibase, interconnected according to Linked Open Data standards. Decentralizing the data reduces strain on Wikidata and lets it go on serving core data. With its vision of thousands of interconnected Wikibase instances, this project could create a global open data network, boosting Wikidata’s value for enterprises and communities.

The potential here is enormous: local governments, enterprises, libraries, research labs, and museums could each maintain their own Wikibase instance, contributing regionally relevant data while maintaining interoperability with global systems. Such decentralization makes the platform more resilient and more inclusive, offering open data stewardship at every scale.

Community events drive this mission. WikidataCon, organized by Wikimedia Deutschland and running from 31 October to 2 November 2025, unites developers, editors, and organizations in an effort to refine tools and data quality. Wikidata Days events, local meetups and editathons foster collaboration and offer support for budding projects like Paulina. These events embody Wikidata’s ethos of knowledge built by the people, for the people, and help it remain transparent and community-governed.

Wikidata and AI: the Embedding Project and beyond

The Wikidata Embedding Project is an effort to represent Wikidata’s structured knowledge as vectors, enabling generative AI systems to employ up-to-date, verifiable information. It aims to address persistent challenges in AI — such as hallucinations and outdated training data — by grounding machine outputs in curated, reliable sources. This could render applications like virtual assistants significantly more accurate, transparent, and aligned with public knowledge.
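The retrieval pattern behind that kind of grounding can be sketched in a few lines. The hashed bag-of-words “embedding” below is a deliberately crude stand-in for the trained neural embeddings such a project would actually use, and the facts and query are invented for illustration:

```python
import hashlib
import math

def embed(text: str, dims: int = 64) -> list[float]:
    # Crude stand-in for a neural embedding: hashed bag of words, normalized.
    vec = [0.0] * dims
    for word in text.lower().split():
        h = int(hashlib.md5(word.encode()).hexdigest(), 16)
        vec[h % dims] += 1.0
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]

def cosine(a: list[float], b: list[float]) -> float:
    return sum(x * y for x, y in zip(a, b))

# Facts rendered as sentences, then indexed as vectors.
facts = [
    "Douglas Adams was a writer born in Cambridge",
    "Marie Curie was a physicist who won two Nobel Prizes",
]
index = [(fact, embed(fact)) for fact in facts]

# Nearest-neighbor lookup retrieves a fact to ground a model's answer.
query = embed("Which physicist won two Nobel Prizes?")
best = max(index, key=lambda pair: cosine(query, pair[1]))
print(best[0])  # retrieves the Curie fact
```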

The next decade holds promising opportunities for Wikidata’s continued relevance. As enterprise needs become more complex and interconnected, the demand for interoperable, machine-readable, and trusted datasets will only grow. Wikidata is uniquely positioned to meet this demand — remaining free, open, community-driven, and technically adaptable.

Enterprise IT teams will find particular value in Wikidata’s real-time APIs and its nearly 10,000 external identifiers, which link entries across platforms like IMDb, Instagram, and national library systems. These links reduce duplication, streamline data integration, and bridge otherwise isolated datasets. Whether it’s mapping identities across services or enhancing AI with structured facts, Wikidata provides a scalable foundation that saves time and improves precision.

With AI chatbots and large-language models now woven into everything from enterprise search to productivity software, the need for accurate, real-time information is more urgent than ever. Wikidata’s linked data embeddings could herald a new generation of AI tools — blending the speed of automation with the reliability of human-curated, public knowledge.

As AI reshapes the digital landscape, Wikidata stands out as a beacon of trust and collaboration. By empowering developers, enterprises, and communities alike through projects like AletheiaFact and Sangkalak, it supports transparency, civic innovation, and educational equity. With the Embedding Project improving AI accuracy, the Wikibase Ecosystem enabling federated knowledge networks, and events like WikidataCon and Wikidata Days sparking global collaboration, Wikidata is building an accountable future full of open data. More than a knowledge graph, it’s a people-powered infrastructure for the trustworthy web.

We've listed the best AI tools.

This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro

I am a privacy expert and this is why I believe user personalization is the future of privacy

Wed, 07/23/2025 - 00:38

Personalized content is now a fact of life – what was once considered innovative is now standard for online marketing. As anybody who has indulged in a bit of online retail therapy can tell you, websites are now surprisingly accurate in what they recommend, with promotions appearing at just the right time and content adapting as if by magic.

Whilst that’s the superpower of personalization, it’s also a bit… disconcerting? As convenient as it might be to see exactly the right product at exactly the right time, this also raises a lot of questions: where does all of this information actually come from? Exactly how much does this company know about me? Did I really consent to sharing all of this data?

These questions are only becoming more frequent as consumers become more aware of the value of their data. A recent Deloitte study showed over two-thirds of smartphone users worry about data security and privacy on their devices, whilst in the US 86% of consumers are more worried about their data privacy than the state of the economy.

These are sobering statistics, and they raise the question: if consumers are crying out for better data protection, how can businesses enact a privacy-first approach to data-driven personalization?

Thinking strategically about personalization

The first important part of making personalization fit for our privacy-conscious age is ensuring that it’s done with purpose. Thinking strategically about personalization, as opposed to just considering the technical aspects of it, is crucial to building a model which is both useful to a business and respects data privacy demands from consumers.

Personalizing without a clear goal risks losing consumer trust: just because a business can collect a certain piece of data or display content to a specific target group, it doesn’t mean they should. Over-personalization or irrelevant suggestions can cause rejection – especially when it’s unclear where the information comes from, so it is always better to personalize with purpose.

This also applies to the data that businesses collect. Even with consent, users today expect to decide what information they share. The starting point shouldn’t be a tracking script, but a deliberate content strategy: Which data is truly necessary? What do we want to achieve with it? And how can we explain it clearly and understandably?

Doing this properly brings two benefits: the data is legally secure and often significantly better in quality. Transparency also builds trust – which is more important than ever in digital marketing. Instead of asking for a full set of personalization data upfront, businesses should consider asking for smaller data points like a postcode to show local offers. This approach creates value for both sides and, crucially, builds consumer trust.

Segments rather than individuals

Advances in technology now mean that personalization can be really granular – but is that always desirable? In a privacy-conscious world, definitely not.

Not every user wants to be individually addressed, and not every website needs to do so. Often, it’s more effective to tailor content for groups with similar interests, behavior, or needs. Common segments include first-time visitors vs. return users, mobile vs. desktop users, regional audiences, or browsers who never add items to their cart.

Targeting these groups allows for impactful content variation – without the complexity of individual personalization. Privacy preferences can also be respected: cautious users are addressed neutrally, while opt-in users get a more personal experience.
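As an illustration of how lightweight such segmentation can be, here is a sketch in Python; the field names and rules are hypothetical, and the consent flag decides whether a visitor gets anything beyond the neutral experience:

```python
# Illustrative rule-based segmentation: every field is something a site can
# observe without invasive tracking, and consent gates personalization.
def segment(visitor: dict) -> str:
    if not visitor.get("personalization_consent"):
        return "neutral"            # cautious users get the generic experience
    if visitor.get("visits", 0) <= 1:
        return "first_time"
    if not visitor.get("has_added_to_cart"):
        return "browser_no_cart"    # browses but never adds items to a cart
    return "returning_customer"

print(segment({"personalization_consent": False}))              # neutral
print(segment({"personalization_consent": True, "visits": 1}))  # first_time
print(segment({"personalization_consent": True, "visits": 5,
               "has_added_to_cart": True}))                     # returning_customer
```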

Flexibility is key

Many companies struggle to reconcile data protection and personalization – often because they see them as contradictory. But the opposite is true: taking data protection seriously builds trust and allows for better personalization.

Take consent banners as an example: one which clearly differentiates data types and allows easy management of preferences is more transparent and, as consistent data shows, reduces bounce rates.

The key is to recognize that flexibility around what consumers expect is king. Personalization is not a one-time project and, just as regulation is continuously evolving, so are user expectations. Successful privacy-first personalization means regularly reviewing and adapting content, processes, and technology.

The bottom line is that personalization is not an end in itself. Rather, it’s meant to help deliver the right content to the right audience at the right time – without crossing lines. Focusing on what users truly need and are willing to share often leads to better results than collecting as much data as possible.

A privacy-first approach to personalization isn’t an oxymoron, it’s a necessity in the modern world. Personalization shouldn’t just be a technical concept, but one that places consumers at the heart of what a business does and offers – not just relevant content, but a brand built on clarity, consistency and respect for consumer attitudes towards privacy.

We list the best Linux distro for privacy and security.

This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro

Spotify had to pull an AI-generated song that claimed to be from an artist who passed away 36 years ago

Tue, 07/22/2025 - 22:30
  • AI-generated songs by deceased artists, like Blaze Foley, have been falsely uploaded to Spotify
  • The streaming service is taking them down as they are spotted
  • The tracks slipped past Spotify’s content verification processes through platforms like SoundOn

Last week, a new country song called “Together” appeared on Spotify under the official artist page of Blaze Foley, a country artist shot and killed in 1989. The ballad was unlike his other work, but there it was: cover art, credits, and copyright information – just like any other new single. Except this wasn't an unearthed track from before his death; it was an AI-generated fake.

After being flagged by fans and Foley's label, Lost Art Records, and reported on by 404 Media, the track was removed. Another fake song attributed to the late country icon Guy Clark, who passed away in 2016, was also taken down.

The report found that the AI-generated tracks carried copyright tags listing a company named Syntax Error as the owner, although little is known about it. Stumbling across AI-made songs on Spotify isn't unusual: there are entire playlists of machine-generated lo-fi beats and ambient chillcore that already rake in millions of plays. But those tracks are typically presented under imaginary artist names, and their origin is usually disclosed.

The attribution is what makes the Foley case unusual. An AI-generated song uploaded to the wrong place and falsely linked to real, deceased human beings is many steps beyond simply sharing AI-created sounds.

Synthetic music embedded directly into the legacy of long-dead musicians without permission from their families or labels is an escalation of the long-running debate over AI-generated content. That it happened on a giant platform like Spotify and didn't get caught by the streamer's own tools is understandably troubling.

And unlike some cases where AI-generated music is passed off as a tribute or experiment, these were treated as official releases. They appeared in the artists’ discographies. This latest controversy adds the disturbing wrinkle of real artists misrepresented by fakes.

Posthumous AI artists

As for what happened on Spotify's end, the company attributed the upload to SoundOn, a music distributor owned by TikTok.

“The content in question violates Spotify’s deceptive content policies, which prohibit impersonation intended to mislead, such as replicating another creator’s name, image, or description, or posing as a person, brand, or organization in a deceptive manner,” Spotify said in a statement to 404.

“This is not allowed. We take action against licensors and distributors who fail to police for this kind of fraud and those who commit repeated or egregious violations can and have been permanently removed from Spotify.”

That it was taken down is great, but the fact that the track appeared at all suggests a gap in flagging these problems earlier. Considering Spotify processes tens of thousands of new tracks daily, the need for automation is obvious. However, that means a track's origins may never be checked, as long as the technical requirements are met.

That matters not just for artistic reasons, but as a question of ethics and economics. When generative AI can be used to manufacture fake songs in the name of dead musicians, and there’s no immediate or foolproof mechanism to stop it, then you have to wonder how artists can prove who they are and get the credit and royalties they or their estates have earned.

Apple Music and YouTube have also struggled to filter out deepfake content. And as AI tools like Suno and Udio make it easier than ever to generate songs in seconds, with lyrics and vocals to match, the problem will only grow.

There are verification processes that can be used, as well as building tags and watermarks into AI-generated content. However, platforms that prioritize streamlined uploads may not be fans of the extra time and effort involved.
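
As a rough illustration of the tagging idea – not Spotify's, SoundOn's, or any real platform's scheme – an upload check could tie a provenance record to the audio file itself and require explicit authorization before an AI-generated track is attached to a verified artist. Every name and field below is hypothetical.

```typescript
import { createHash } from "node:crypto";

// Hypothetical provenance check for uploads. A real scheme would use
// signed records and a proper rights database; this only shows the shape.

type ProvenanceTag = {
  trackHash: string;       // fingerprint of the audio file
  aiGenerated: boolean;    // declared by the uploader's tooling
  claimedArtistId: string; // who the upload says it belongs to
  uploaderId: string;
};

// Artists (or estates) with verified catalogs, and who may upload for them.
const verifiedArtists = new Set(["artist:blaze-foley"]);
const authorizedUploaders = new Map([
  ["artist:blaze-foley", "uploader:lost-art-records"],
]);

function checkUpload(audio: Buffer, tag: ProvenanceTag): "accept" | "reject" {
  // 1. The tag must actually describe this file.
  const hash = createHash("sha256").update(audio).digest("hex");
  if (hash !== tag.trackHash) return "reject";

  // 2. AI-generated uploads claiming a verified artist need authorization.
  if (tag.aiGenerated && verifiedArtists.has(tag.claimedArtistId)) {
    if (authorizedUploaders.get(tag.claimedArtistId) !== tag.uploaderId) {
      return "reject";
    }
  }
  return "accept";
}
```

The catch, as noted above, is that honest metadata only helps against honest uploaders – which is why tagging and watermarking would need to work alongside verification.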

AI can be a great tool for helping produce and enhance music, but that's using AI as a tool, not as a mask. If an AI generates a track and it's labeled as such, that's great. But if someone intentionally passes that work off as part of an artist’s legacy, especially one they can no longer defend, that’s fraud. It may seem a minor aspect of the AI debates, but people care about music and what happens in this industry could have repercussions in every other aspect of AI use.

You might also like
Categories: Technology

Apple TV+ teases new series from Better Call Saul creator – and it’s a smiley face in a Petri dish

Tue, 07/22/2025 - 21:00
  • In a mysterious post on X, Apple TV+ is teasing a series from the Breaking Bad creator
  • Not much is currently known about the show from Vince Gilligan
  • It'll star Better Call Saul's Rhea Seehorn, and we'll know more about it in a few days

In short order after Apple TV+ shared our first look at the set of Ted Lasso season 4, one of the best streaming services is wasting no time setting its sights on other content in the pipeline.

We already knew that Vince Gilligan, the creator of Breaking Bad and Better Call Saul, was working on a new show for Apple TV+, but now we have a countdown, which will hopefully bring us even more information.

Tagged “From The Creator Of Breaking Bad” in thin black text over a yellow background, next to a jar with a smiley face drawn in a Petri dish using a Q-Tip, the image is attached to a post on X (formerly Twitter) that reads, “Happiness is Contagious.”

Happiness is Contagious. pic.twitter.com/izGKiHgPIt — July 22, 2025

It’s certainly a nod to Gilligan’s project, and likely hints that a formal show title, description, full casting, and maybe even a first look or trailer are on the horizon. We already knew that it would star Rhea Seehorn, who appeared on Better Call Saul, but a countdown clock is now visible on the Apple TV+ YouTube channel, pointing to a reveal on Saturday, July 26, 2025.

We do know that the show will be a mix of science fiction and drama, but not much else is known about it. Maybe it'll feature a Petri dish with a smiley face, though. Back in 2023, Gilligan teased that the show has no overlap with Better Call Saul, but will be set in Albuquerque – just a very different Albuquerque.

At the time, Gilligan told Variety that it’s heavy science-fiction, and noted, “It’s the modern world – the world we live in – but it changes very abruptly. And the consequences that that reaps hopefully provide drama for many, many episodes after that.”

It’s been a long time coming, but we’ll finally know more when this countdown hits zero on July 26 – just don’t expect Apple TV+ to drop every episode on that date. Still, I’ll be keeping an eye on the comments on the YouTube livestream countdown, and on social media, for theories on this one.

You might also like
Categories: Technology

Millions at risk as new study highlights unused zombie accounts that could be exploited by criminals — here's how to stay safe

Tue, 07/22/2025 - 16:31
  • Most people forget their old accounts, but criminals never forget how to exploit them, report warns
  • Zombie accounts are digital weak spots just waiting for password reuse to ruin everything
  • Platforms like Groupon and Pandora are packed with logins that no one’s watching anymore

Forgotten accounts for apps you no longer use might not seem like your most pressing security concern, but new research has claimed they can be far more than digital clutter.

A study by Secure Data Recovery found 94% of respondents admitted to having one or more zombie accounts - accounts left unused for at least 12 months.

These neglected profiles often remain active and vulnerable, giving cybercriminals a quiet back door into users’ digital lives.

Pandora, Groupon, and Shutterfly lead the list of forgotten services

Pandora tops the list of abandoned services, with 40% of respondents admitting they still have unused accounts; Groupon and Shutterfly follow closely, reflecting a wider trend of users drifting away from once-popular platforms.

“That account you haven’t logged into for over a year? It’s still there,” the study notes, warning that abandoned profiles are ripe for hijacking.

These unused accounts aren’t limited to music or shopping. Photo and file-sharing platforms like Dropbox, Tumblr, and Flickr are also frequently forgotten, and the trend extends to more sensitive categories: dating apps such as Tinder, OkCupid, and Bumble rank highest in abandonment. In the financial space, Acorns, Mint, and YNAB are often left idle, despite potential access to personal or financial information.

Many users simply forget these accounts exist, assuming that inactivity means deletion. In other cases, disinterest drives abandonment.

Facebook ranks highest in dissatisfaction, followed by Twitter/X and Amazon Prime Video. Some platforms failed to keep up with expectations, while others, like Prime Video, alienated users by adding ads.

Interestingly, Prime Video also appears on the list of most-missed services, suggesting users are divided in their views.

The consequences of ignoring these accounts go well beyond clutter.

Reusing passwords across sites, especially between zombie accounts and work or banking logins, creates serious risk.

Secure Data Recovery warns: “Having the same login for that eight-year-old Tumblr account and your active work email might not be in your best interest.”

How to stay safe
  • To reduce risk, review the services you’ve signed up for - if you no longer use an app or website, delete the account.
  • Never reuse passwords. A compromised old account using the same login as your current one can put your data at risk.
  • Create strong, unique passwords for every account. A password manager can help you keep track of them.
  • Also, check the privacy settings on accounts you still use. Some may be sharing more than you think. Adjust those settings to limit how much information is visible.
  • Whenever possible, enable two-factor authentication for extra protection.
  • Finally, use antivirus tools, especially on Android phones - a good free antivirus can warn you about unsafe apps and detect if your device has been compromised.
You might also like
Categories: Technology

The PS5 Pro is rumored to be the only way to get 60fps in GTA 6 – but I'm absolutely not buying one for $700

Tue, 07/22/2025 - 15:30
  • Recent rumors suggest Rockstar Games' GTA 6 will run at 60fps on Sony's PS5 Pro
  • Sony and Rockstar are reportedly working closely together for the game's optimization on PS5
  • 60fps on the base PS5 isn't completely off the table yet

The countdown clock to Rockstar Games' Grand Theft Auto 6 feels like it's ticking faster than ever, with a release date set for May 26, 2026 – and in the meantime, a new rumor may spell great news for PS5 Pro owners.

According to reputable leaker Detective Seeds on X, GTA 6 will run at 60fps on PS5 Pro, with Sony engineers reportedly working closely with Rockstar to hit that performance target. This is the same leaker behind the Oblivion remake leaks, so it's safe to say there's a level of credibility here.

Detective Seeds suggests there will be multiple graphical settings, which will reportedly be available only on the PS5 Pro, not the base console. It doesn't sound far-fetched either: Sony and Rockstar have maintained a strong marketing partnership over the years, and that's rumored to continue in the run-up to GTA 6's launch.

Based on the leak, there are clear hints that 60fps on the base PS5 isn't completely off the cards; rumors also point to Sony and Rockstar optimizing other titles for 60fps, with Red Dead Redemption 2 the obvious candidate.

Fans have been requesting a 60fps patch for the critically acclaimed title, so it would be surprising if this push wasn't aimed at the base PS5 (especially since 60fps has already been achieved there via console exploits). GTA 6's visual fidelity is arguably a big step up from Red Dead Redemption 2's, but the two are in similar ballparks – so, if the base PS5 gets a 60fps patch for the 2018 title, could the same happen for GTA 6?

(Image credit: Rockstar Games)

Analysis: 60fps or not, I'm not paying $700 for the PS5 Pro

Surely I'm not the only one who doesn't really care whether GTA 6 runs at 60fps on console? Don't get me wrong, I'd love to see it available in some capacity, and this isn't me saying '30fps is perfectly fine, stop complaining.' However, you'd better believe I'm not paying $700 for a PS5 Pro just to hit that performance target.

I'd argue that GTA 6 is one of the only titles where I'd happily settle for high-quality visuals at 4K 30fps on console – but only if 60fps optimization genuinely isn't possible.

Perhaps that's my excitement for its eventual PC launch speaking, since I know much higher frame rates will inevitably be available there – but if I could play Final Fantasy XVI, a fast-paced action RPG, in its quality graphics mode on PS5 without it ruining the experience, then I can easily do the same with arguably the most anticipated game of all time.

Again, I must stress that 60fps should become a priority for developers on console, but I don't think it will be the end of the world if that doesn't happen for GTA 6 on the base PS5.

You might also like
Categories: Technology

Huge data breach at Australian fashion giant - 3.5 million users at risk, here's what we know so far

Tue, 07/22/2025 - 15:29
  • Security researcher finds unencrypted database belonging to Australian fashion brand
  • It contained names, email addresses, phone numbers, and more, of at least 3.5 million people
  • SABO is warning users to be on their guard

Australian fashion brand SABO leaked sensitive data on millions of its customers by keeping an unencrypted, non-password-protected database on the internet, available to anyone who knew where to look.

Jeremiah Fowler, a security researcher known for discovering these types of leaks, found a 292GB archive containing 3,587,960 PDF documents, which held names, physical addresses, email addresses, phone numbers, and other personally identifiable information (PII) belonging to both retail and corporate SABO customers.

The number of people whose information was leaked could be around 3.5 million – but it could also be up to fifty times as many.

Locking the database down

“In one single PDF file, there were 50 separate order pages, indicating that the total number of potential customers is higher than the total number of PDF files in the database,” Fowler explained.

The information was generated via an internal document management storage system, designed to track sales and returns, as well as the corresponding domestic and international shipping documents.

Since the file dates range from 2015 to 2025, it is safe to assume that some of the information is outdated, while some remains highly current.

Fowler reached out to SABO with the information, and the database was locked down “within hours”. However, the company never replied to the researcher’s email, so we don’t know for how long the database remained open, who maintained it, or if someone managed to find and exfiltrate the information before he did.

SABO is an Australian fashion brand that designs and sells exclusive collections of clothes, shoes, swimwear, sleepwear, and formal attire. It operates primarily in Australia, but also sells its products online and ships worldwide.

It currently has three stores in the country and has reported an annual revenue of $18 million for 2024.

You might also like
Categories: Technology

Want to turn your MacBook into a weighing scale? Me neither, but an app that gives the trackpad this ability looks impressively accurate

Tue, 07/22/2025 - 14:25
  • A new app turns Apple's trackpad into a weighing scale
  • The results with the TrackWeight app are surprisingly accurate
  • There are certainly limitations here, though, including the need to keep a fingertip on the trackpad while weighing an object

If you ever need a set of weighing scales in a pinch, it's possible to use your MacBook, believe it or not.

Tom's Hardware noticed a new app for macOS that turns the humble MacBook trackpad into a compact weighing scale, one that is surprisingly accurate, as illustrated in a demo video clip posted on X (see below).

You can turn your Mac trackpad into a weighing scale pic.twitter.com/KxbHrVfag3 — July 21, 2025

Krish Shah developed the app called TrackWeight, which uses Apple's Force Touch sensors to give you an approximate weight for any object placed on the trackpad.

Now, there's a caveat: as you can see in the video, it's necessary to rest your finger on the trackpad while weighing, because, as Shah explains, the trackpad only generates pressure readings when it detects capacitance (meaning your finger, or any other conductive object).

The obvious drawback here is that the weight of your fingertip is going to register in the reading provided, too - so rest it on the trackpad as lightly as possible. By all accounts, the weight given is still pretty accurate - though I wouldn't recommend taking it as an exact reading, given the above catch.

The app uses the Open Multi-Touch Support library to tap into trackpad events in macOS, which includes the crucial pressure readings from the pad. Interestingly, Shah explains (on GitHub) that "the data we get from Multi-Touch Support is already in grams" which is handy.

Analysis: other caveats and compatibility

(Image credit: TechRadar)

This is a neat little trick for MacBooks, but there are some limitations – starting, as observed, with the accuracy, which isn't going to be spot-on, but looks close enough to be a good estimation.

Also, weighing metal objects is problematic (due to their conductivity, they'll likely be detected as a finger press), so they will require a small piece of cloth (or paper) to break contact with the trackpad (again, potentially interfering with the reading slightly).

Clearly, you can't weigh large items on a trackpad either, though the developer of the app claims to have successfully weighed a 3.5kg object without damaging the MacBook. Which is good going – I wouldn't try that myself, mind, or indeed weigh luggage, as the dev warns us against in tongue-in-cheek fashion.

If you're wondering about compatibility, you'll need a Force Touch trackpad on your Apple laptop, which means a MacBook from 2016 or newer (or a MacBook Pro from 2015). You'll also need to be running at least macOS 13 (to have the necessary Multi-Touch Support library) and have App Sandbox disabled (to grant low-level access to the trackpad data). As ever, install any third-party software at your own risk, should you regard this project as anything more than a curiosity.

Interestingly, old iPhones with 3D Touch could also be used to weigh objects (capacitive ones) - and seemingly very accurately in that case.

You might also like
Categories: Technology

Remember the doomed AI nation ship? A shipping giant is now planning a real, moving, floating data center that could power thousands of AI GPUs

Tue, 07/22/2025 - 14:07
  • A 120-meter ship could soon host thousands of AI GPUs with direct seawater cooling
  • The project depends on reused ships to cut both building costs and environmental damage
  • MOL and Kinetics promise flexibility, mobility, and power abundance through powerships and offshore renewables

The idea of putting an AI-powered facility on a ship used to sound like science fiction - and not long ago, there was even a failed attempt by Del Complex to build a floating "AI nation" that would run itself using artificial intelligence.

Now, shipping heavyweight Mitsui O.S.K. Lines (MOL) and Kinetics, the energy transition unit of Karpowership, are aiming to realize something far more grounded.

The companies are working together to build a mobile floating data center that could house thousands of AI GPUs while addressing digital infrastructure bottlenecks.

MOL and Kinetics outline plans for a floating AI data center

The two firms recently signed a Memorandum of Understanding to develop what they describe as “the world’s first integrated floating data center platform.”

The structure will be hosted aboard a retrofitted vessel, supported by a power supply that includes powerships – floating power plants developed by Karpowership – as well as other sources like solar farms, offshore wind, and onshore grids.

“This project represents a major step toward our vision at Kinetics, delivering innovative, efficient, and sustainable infrastructure solutions that meet the energy needs of today and tomorrow,” said Mehmet Katmer, CEO of Kinetics.

“By pairing mobile power generation with floating data infrastructure, we are addressing critical market bottlenecks while enabling faster, cleaner, and more flexible digital capacity expansion.”

The data center is projected to offer between 20 and 73MW of capacity, cooled by direct water systems drawing from seas or rivers.

It would be mounted on a 120-meter-long ship, with network plans that include submarine cables and land-based internet exchanges.

“This MOU represents an important step forward in using the MOL Group's assets and extensive expertise in ship operations to rapidly build digital infrastructure while minimizing environmental impact,” said Tomoaki Ichida, Managing Executive Officer of MOL.

"Moving forward, we will continue to expand a diverse range of social infrastructure businesses centered on the shipping industry.”

A mobile, sea-cooled, power-rich platform that bypasses land constraints and permitting headaches offers an attractive alternative to overburdened terrestrial data centers.

The flexibility is notable, but the scale of the ambition warrants skepticism.

Although the idea sounds perfect on paper, its real-world execution could face the same issues that have plagued similar utopian infrastructure concepts.

The MOU promises operations by 2027, contingent on “successful feasibility studies and ongoing technical developments.”

Those feasibility studies will need to prove that issues like network latency, physical relocation risks, regulatory uncertainty, and long-term maintenance can be reliably addressed.

Cost and sustainability arguments hinge heavily on the reuse of existing ships.

“In addition to reducing construction costs,” the project claims, “the use of existing onboard systems... is expected to reduce initial investment costs.”

By avoiding new builds, the developers believe they can also cut the environmental toll of raw material extraction.

The practical advantages, such as speed of deployment, mobility, and independence from strained land-based grids, are not in doubt.

“Even in areas experiencing power shortages, offshore data centers can begin operations immediately,” the developers note.

But whether this system will prove reliable, scalable, and economically sound in the long term remains to be seen.

You might also like
Categories: Technology

Finally, Minisforum set to launch its own AI Max+ 395 Mini PC - benchmarks of a 128GB RAM beast emerge on Geekbench

Tue, 07/22/2025 - 12:32
  • Powerful Ryzen AI Max+ 395 APU could debut in new Minisforum mini-PC
  • Up to 128GB RAM expected in Strix Halo-based Minisforum X1 series device
  • Unannounced mini-PC surfaces in Geekbench with familiar naming conventions

Minisforum appears to be working on a new high-end mini PC powered by AMD's latest Strix Halo chip.

The device, which has yet to be officially named, is expected to launch as part of the company’s AI X1 series.

It reportedly features the Ryzen AI Max+ 395, a powerful APU which builds on the Zen 5 architecture. The chip combines strong CPU performance with a potent iGPU and may be paired with up to 128GB of RAM.

Geekbench scores

As spotted by ITHome, several entries referring to “Micro Computer (HK) Tech Limited AI Series” have surfaced on Geekbench, suggesting a Minisforum link.

Benchmark results from Geekbench show multi-core scores above 21,000 and single-core results around 2,900. These place the device ahead of Minisforum’s existing AI 9 HX 370-based models, which typically score closer to 19,000 and 2,300 respectively.

There are already a number of mini PCs powered by the Ryzen AI Max+ 395, including the Colorful Smart 900, Beelink AI Mini, and AOOSTAR NEX395. So far, though, most have come from lesser-known or regional brands rather than big names like Dell, Asus, and MSI.

Minisforum’s current AI X1 and N5 Pro lines already offer strong CPU performance. But the iGPU in the 395 delivers a more noticeable jump in graphics tasks.

That gap is especially clear when compared with models like the Ryzen AI Max+ 395 powered GMKtec EVO-X2, which is already on sale.

Although Minisforum has not confirmed anything officially, the leaked device names closely match those used across the AI X1 Pro series.

We’re excited to see what Minisforum comes up with, as its devices are consistently among the best mini PCs you can buy. We've previously seen hints of a 2U rackmount server powered by AMD’s Ryzen AI Max+ 395 processor from Minisforum, but the MS-S1 Max is an entirely different beast.

Via Notebookcheck

You might also like
Categories: Technology
