Technology

DNA computing could solve AI's single biggest systemic problem

TechRadar News - Mon, 06/02/2025 - 05:25

Even as the quest for AI’s breakthrough use case continues, the ubiquity of AI tools is already clear—they are embedded in our personal devices and set to transform all aspects of our lives. Yet this rise collides with a stark reality facing the computing sector: exponential energy demands that global energy production cannot keep up with.

The computing power needed for AI is doubling every 100 days, while computational capacity is approaching an “extinction event”: a point at which the available energy supply curtails further growth and forces a plateau. In response, big tech companies are turning to nuclear energy to power rapidly growing AI systems.

The slowdown of Moore’s Law further exacerbates this crisis as conventional device scaling approaches physical limits. Unless we build innovative technology that allows energy-efficient computing, the growth of computing power will inevitably stagnate. Instead of focusing solely on incremental optimizations of current architectures, breakthrough innovation sourced from different technology sectors will be needed to maintain sustainable progress.

Convergence of Technologies as a Path Forward

The solution lies in the convergence of technologies, particularly new computing paradigms from unconventional areas like biology, chemistry, and optics. As we move further into the 21st century, we increasingly recognize the power of biology and the inspiration we can draw from it for radical technological innovation.

This year’s Nobel Prize in Physics underlined this importance: it was awarded for inventions and discoveries enabling AI that were fundamentally inspired by the brain's structure.

The next generation of computing

As we continue to explore biologically inspired architectures, we should note that the human brain’s efficiency per unit of power when performing cognitive tasks is 10,000 times greater than that of generative AI. On a molecular scale, this is driven by complex cellular architectures and biochemical reactions that surpass silicon-based operations in energy efficiency while also being massively parallel.

For example, a modern supercomputer can perform approximately one quintillion operations per second. A human cell performs approximately 1 billion biochemical reactions per second, with trillions of cells in the body. This scales to a sextillion reactions per second. Despite these staggering numbers, the energy needed to sustain a human body is orders of magnitude lower than that needed to power a supercomputer.
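
As a back-of-the-envelope check on the figures above (a rough sketch using the article's own round numbers; "trillions of cells" is taken at its lower bound of about 10^12, and the commonly cited estimate of roughly 37 trillion cells would push the total even higher):

```python
# Back-of-the-envelope check of the figures quoted above; all inputs are the
# article's own round numbers, not measurements.
supercomputer_ops_per_s = 1e18       # "one quintillion operations per second"
reactions_per_cell_per_s = 1e9       # "approximately 1 billion biochemical reactions per second"
cells_in_body = 1e12                 # lower bound of "trillions of cells"

total_reactions_per_s = reactions_per_cell_per_s * cells_in_body
print(f"{total_reactions_per_s:.0e} reactions/s")  # 1e+21, i.e. a sextillion
print(f"{total_reactions_per_s / supercomputer_ops_per_s:.0f}x a supercomputer's operation rate")
```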

While this comparison is not computationally equivalent, it underscores the remarkable complexity and energy efficiency of biological systems, which inspire the development of emerging technologies like biological and neuromorphic computing.

More practically, biological computing can utilize synthetic DNA as a medium for storage and computation. DNA offers massive data storage density—the volume of a sugar cube could store the entire Library of Congress—and long-term durability, potentially reducing the need for energy-intensive cooling systems. Computing on DNA can draw on breakthroughs in assembling, manipulating, storing, and reading DNA, techniques that the biotechnology industry continues to improve rapidly.
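
As a toy illustration of why DNA is such a dense medium, the sketch below maps every two bits onto one of the four nucleotides. It is a hypothetical, minimal encoding for illustration only, not any production DNA-storage codec; real schemes add error correction and avoid problematic sequences such as long runs of the same base.

```python
# Toy DNA encoding: each nucleotide carries two bits (4 bases = 2 bits per base).
BASE_FOR_BITS = {"00": "A", "01": "C", "10": "G", "11": "T"}
BITS_FOR_BASE = {base: bits for bits, base in BASE_FOR_BITS.items()}

def encode(data: bytes) -> str:
    """Turn bytes into a strand of A/C/G/T, four bases per byte."""
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(BASE_FOR_BITS[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode(strand: str) -> bytes:
    """Recover the original bytes from the strand."""
    bits = "".join(BITS_FOR_BASE[base] for base in strand)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

strand = encode(b"hello")
print(strand)          # a 20-base strand: 4 bases per byte
print(decode(strand))  # b'hello'
```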

Other breakthrough technologies, such as neuromorphic computing, organoid intelligence, and photonic computing, hold similar promise. Neuromorphic systems are silicon-based and designed to mimic the brain’s architecture, achieving highly energy-efficient processing by replicating synaptic connections.

Organoid (a simplified version of an organ grown in the lab) intelligence—a field still in its infancy—also seeks to leverage the brain’s architecture and parallel processing capabilities with entirely new biological hardware made from cerebral organoids.

Photonic computing, on the other hand, utilizes light to perform faster, lower-power operations than electronic counterparts. All these approaches are still in their early stages and face technical challenges that need to be overcome. Still, they provide routes to sustainable computing that move beyond the energy limitations of traditional architecture and highlight the importance of early-stage research and development.

In contrast to incremental improvements in existing systems, they offer the potential for a step-change in energy efficiency that could facilitate a Cambrian explosion in applications for the next generation of AI.

Overcoming Challenges To Convergence

Despite its potential, technology convergence faces challenges, including the technological maturity of its components, economic feasibility, and potential regulatory and human factors.

For new technologies to achieve large-scale adoption, they must demonstrate maturity along with clear value propositions that are financially viable to implement. Organizations may hesitate to fundamentally rethink their processes because of the costs of hiring, training, and investment in new infrastructure, especially if the initial market is too small.

Additionally, some emerging technologies, like organoid intelligence, may raise ethical considerations. In these cases, educating the public and ensuring transparency around ongoing research can help mitigate concerns. For instance, in DNA computing, proactive measures such as screening DNA sequences for biosafety not only address potential regulatory concerns but also build trust in this emerging innovation.

A Vision for the Future

To truly harness the potential of technological convergence, innovation must move beyond simply optimizing existing systems and focus on building entirely new architectures that are both scalable and energy efficient.

These new systems should not be expected to replace or surpass current technologies immediately. Nor should they be viewed as comprehensive in their computational operations. After all, the semiconductor industry has had decades to innovate and optimize existing technologies. Instead, they should be viewed as complementary, finding initial applications in specialized domains that offer unique advantages and can be tested at scale.

The energy crisis in computing presents a daunting challenge, but it also creates a pivotal opportunity for transformative innovation. By prioritizing convergence and breakthrough architectures, we can achieve scalable, sustainable AI and computing solutions.

The next era of computing will be driven by innovation, not incremental improvements. The path forward lies in radical shifts that leverage the synergies of multiple fields, ensuring that the digital age continues to evolve in harmony with our planet's energy realities.

This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro

Categories: Technology

UK military building “pioneering battlefield system” with new Cyber and Electromagnetic Command

TechRadar News - Mon, 06/02/2025 - 05:16
  • The UK Government is investing in cyber defences and capabilities
  • £1 billion investment includes a new Cyber and Electromagnetic Command
  • “Digital Targeting Web” looks to bolster cyber defences and national security

The UK Government has announced plans to invest over £1 billion into a new pioneering “Digital Targeting Web” to bolster cyber defences and national security.

Alongside this, a new Cyber and Electromagnetic Command will aim to “put the UK at the forefront of cyber operations,” with enhanced targeting capabilities and digital defences.

The investments will look to “spearhead battlefield engagements” by applying lessons learnt from Ukraine to the UK’s weapons systems, enabling faster and more accurate battlefield decisions and better connected military weapons systems.

Digital capabilities

Cybersecurity and defence are key priorities for this administration, with Prime Minister Keir Starmer committing to an increase in defence spending to 2.5% of GDP, “recognising the critical importance of military readiness in an era of heightened global uncertainty.”

In 2024, the UK announced the establishment of a laboratory dedicated to security research, and invited its allies to collaborate to combat the “new AI arms race” - investing millions into improving cybersecurity capabilities.

The new Command wants to give the British military the upper hand in the race for military advantage by degrading command and control, jamming signals to missiles or drones, and intercepting enemy communications, for example.

The Government warns that cyberattacks are threatening the foundations of the economy and daily life, and with critical infrastructure sustaining 13 cyberattacks per second, the dangers are certainly apparent.

“The hard-fought lessons from Putin’s illegal war in Ukraine leave us under no illusions that future conflicts will be won through forces that are better connected, better equipped and innovating faster than their adversaries,” warns Defence Secretary John Healey.

“We will give our Armed Forces the ability to act at speeds never seen before - connecting ships, aircraft, tanks and operators so they can share vital information instantly and strike further and faster.”

You might also like
Categories: Technology

How to Turn Your Pet's Pictures Into Emoji on Your iPhone

CNET News - Mon, 06/02/2025 - 05:00
Your pet deserves to be their own emoji, on top of all the treats and belly rubs in the world.
Categories: Technology

The Samsung Galaxy S26 series could have Perplexity AI baked in

TechRadar News - Mon, 06/02/2025 - 04:47
  • Samsung is reportedly close to finalizing a deal with Perplexity
  • The deal could be announced this year and see Perplexity replace Gemini as the Galaxy S26's default AI assistant
  • What this would mean for Gemini on Samsung phones is unclear

Right now, Google Gemini is the standard AI assistant on Android phones, and Samsung in particular has heavily incorporated Gemini into its devices. But that partnership might not last much longer.

According to a paywalled report on Bloomberg (via Android Police), Samsung is close to finalizing a deal with Perplexity, which would see the latter’s AI assistant integrated into Samsung’s phones.

Reportedly, the deal could be announced later this year, but it sounds like Perplexity won't appear on the best Samsung phones until early 2026, with Samsung apparently aiming to ship it as the default AI assistant on the Samsung Galaxy S26 series.

(Image credit: Perplexity)

Deep integration

The deal would reportedly see Samsung pre-installing the Perplexity app on these phones, as well as integrating its features into Samsung Internet. Apparently, there are even discussions to incorporate Perplexity tech into Samsung's Bixby assistant, though it sounds like there’s less certainty that it will end up being part of the deal.

Beyond that, Samsung and Perplexity have apparently discussed building AI-powered operating systems with AI agents "that can tap into functionality from Perplexity and a range of other AI assistants." That does, however, sound further off if it happens at all.

What this deal would mean for Samsung’s partnership with Google is unclear. If Perplexity is shipped as the default option on the Samsung Galaxy S26 series – and presumably other Samsung phones too – then at a minimum, Gemini would be a bit sidelined.

But that doesn’t necessarily mean the Gemini features we’ve seen on the Samsung Galaxy S25 series will be absent; you might instead have a choice of multiple AI services.

If Samsung really is set to announce this partnership this year, then we should have a clearer idea before too long.

You might also like
Categories: Technology

Stranger Things season 5 release dates have been revealed, and Netflix has turned my 2025 festive season plans upside down

TechRadar News - Mon, 06/02/2025 - 04:46
  • Stranger Things season 5's multiple release dates have been revealed
  • It'll arrive in three parts between late November and New Year's Eve/New Year's Day
  • Netflix is hoping it'll be the biggest festive TV hit of 2025

Good news, everyone! Stranger Things season 5's release date has finally been revealed. Unfortunately, you'll have to tweak your 2025 holiday season plans if you want to stream it as soon as it arrives.

We already knew that Stranger Things 5 was set to be released in 2025, and a major online leak suggested that Stranger Things' final season would arrive this November. Well, that turned out to be partly true.

Announced towards the end of Netflix Tudum 2025, the smash hit show's final season will launch on the world's best streaming service in not one, not two, but three parts. That's the first time that Netflix has chosen to release a new series, or the latest season of one of its TV Originals, on three separate dates.

A post shared by Stranger Things Netflix (@strangerthingstv)

As the above Instagram post confirms, Stranger Things season 5 volume 1 will air on November 26 at 5pm PT / 8pm ET in the US. That's the first of three US holidays that the incredibly popular Netflix series' final chapter will land on too – indeed, Thanksgiving 2025 in the US will take place on November 27.

Clearly, Netflix is hoping volume 1, which comprises four episodes, will be the most-watched TV show over US Thanksgiving weekend.

That's not the only major holiday Netflix is targeting, though. Volume 2 of Stranger Things 5, which contains three episodes, will debut on Christmas Day (aka December 25) at 5pm PT / 8pm ET in the US.

Lastly, the final-ever episode (aka volume 3) of Stranger Things will hit the service on New Year's Eve (December 31) in the US at the same time that season 5's other installments are due to be released.

Why Stranger Things 5's release format will turn people's Holiday season plans upside down

I suspect many fans reacted like this when season 5's release format was announced (Image credit: Netflix)

I fully understand why Netflix is dropping new episodes in this way. The streaming titan wants the final season of one of its most successful series to dominate the TV landscape. It makes sense, then, to release the forthcoming season's eight episodes, all of which are movie-length according to Stranger Things star Maya Hawke, at a time when people will have plenty of downtime over the festive season.

The problem I have with this release format, though, is that it's going to turn many people's festive plans *ahem* upside down.

Take me, for instance. I live in the UK and, given the eight-hour time difference between the US Pacific Time Zone and the UK, new installments of Stranger Things 5 won't land on the platform until 1am GMT.

That means I, along with many other British fans, will have a very late night if we stay up to watch new episodes as soon as they arrive. If we don't, we face the prospect of having to avoid major spoilers online or from family and friends who might have seen the latest episodes before us.

Holding back those season 5 finale spoilers like... (Image credit: Netflix)

The same is true of fans in other European nations, the Middle East, Asia, and countries like Australia and New Zealand.

Stranger Things season 5's finale might air in the US at 5pm PT / 8pm ET on December 31, so American viewers have the chance to stream it before they welcome in 2026. Many of us won't have that opportunity, though.

Do we cut short our New Year's Eve plans with family and/or friends to head home and stream it straight away to avoid spoilers? Or do we ring in 2026, stay off social media until we watch it, and then stream one of the best Netflix shows' last-ever episode – potentially with an almighty hangover?

I get that the world's various time zones mean that somebody is going to be unhappy about staying up late or getting up early if they want to watch their favorite show's new season ASAP. Nevertheless, season 5's release structure, coupled with the unusual times that new episodes will air – Netflix usually releases new shows and/or seasons at 12am PT – is a, well, strange thing to do.

I guess I'll be staying off social media (and the booze!) over the Christmas holidays until I find the time to stream season 5's final four episodes.

You might also like
Categories: Technology

Fresh DJI Osmo 360 leaks may have given us a sneak preview of the 360-degree camera and its specs

TechRadar News - Mon, 06/02/2025 - 04:44
  • More DJI Osmo 360 images have leaked online
  • The 360-degree camera could launch in July
  • It's said to be similar to the Insta360 X5 in specs

Rumors around a 360-degree camera from DJI have been swirling since October, and now we have some fresh leaks that supposedly give us a look at the DJI Osmo 360 – as well as hinting at some of the specifications it'll bring with it.

Tipster @GAtamer (via Notebookcheck) has posted some pictures of the DJI Osmo 360, showing off the compact camera, the two lenses on the front and back of the device, the small integrated touchscreen, and what looks like an accessory mount.

According to the same source, the specs of the DJI Osmo 360 are going to be "almost the same as the X5", referring of course to the Insta360 X5 that launched in April – another 360-degree camera that the DJI Osmo 360 will be challenging head on.

Have a read through our Insta360 X5 review and you'll see it's a very, very good 8K camera indeed – one we awarded five stars to. The two cameras have 1/1.28-inch sensors inside, bigger than those in the X4, so it seems we can expect something similar from DJI.

Coming soon?

The technical specifications are almost the same as the X5. pic.twitter.com/7HlC9JQHbP (May 31, 2025)

The @GAtamer post was actually a follow-up to another image leaked by @Quadro_News, which seems to show the DJI Osmo 360 in some kind of packaging. Again, we can see one of the camera lenses and the shape of the upcoming gadget.

That's just about all we can glean from these latest DJI Osmo 360 leaks, and we don't get any information here about a launch date or potential pricing. It seems likely that the camera will be appearing sooner rather than later, however.

Just a few days ago we got word that the DJI Osmo 360 would be launching in July 2025, so there's not that much longer to wait. We have already seen leaked images of the camera, which match the pictures that have just shown up.

We've also heard that a super-small DJI Osmo Nano could be launched alongside the DJI Osmo 360. If these new devices are as good as the cameras in the current range, including the DJI Osmo Action 5 Pro, then there's a lot to look forward to.

You might also like
Categories: Technology

The security debt of browsing AI agents

TechRadar News - Mon, 06/02/2025 - 03:56

At 3 a.m. during a red team exercise, we watched a customer’s autonomous web agent cheerfully leak the CTO’s credentials - because a single malicious div tag on an internal GitHub issue page told it to. The agent ran on Browser Use, the open-source framework that just collected a headline-grabbing $17 million seed round.

That 90-second proof-of-concept illustrates a larger threat: while venture money races to make large-language-model (LLM) agents “click” faster, their social, organizational, and technical trust boundaries remain an afterthought. Autonomous browsing agents now schedule travel, reconcile invoices, and read private inboxes, yet the industry treats security as a feature patch, not a design premise.

Our argument is simple: agentic systems that interpret and act on live web content must adopt a security-first architecture before their adoption outpaces our ability to contain failure.

Agent explosion

Browser Use sits at the center of today’s agent explosion. In just a few months it has acquired more than 60,000 GitHub stars and a $17 million seed round led by Felicis with participation from Paul Graham and others, positioning itself as the “middleware layer” between LLMs and the live web.

Similar toolkits - HyperAgent, SurfGPT, AgentLoom - are shipping weekly plug-ins that promise friction-free automation of everything from expense approval to source-code review. Market researchers already count 82% of large companies running at least one AI agent in production workflows and forecast 1.3 billion enterprise agent users by 2028.

But the same openness that fuels innovation also exposes a significant attack surface: DOM parsing, prompt templates, headless browsers, third-party APIs, and real-time user data intersect in unpredictable ways.

Our new study, "The Hidden Dangers of Browsing AI Agents," offers the first end-to-end threat model for browsing agents and provides actionable guidance for securing their deployment in real-world environments.

To address the discovered threats, we propose a defense-in-depth strategy incorporating input sanitization, planner-executor isolation, formal analyzers, and session safeguards. These measures protect against both initial-access and post-exploitation attack vectors.

White-box analysis

Through white-box analysis of Browser Use, we demonstrate how untrusted web content can hijack agent behavior and lead to critical cybersecurity breaches. Our findings include prompt injection, domain validation bypass, and credential exfiltration, evidenced by a disclosed CVE and a working proof of concept exploit - all without tripping today’s LLM safety filters.

Among the findings:

1. Prompt-injection pivoting. A single off-screen element injected a “system” instruction that forced the agent to email its session storage to an attacker.

2. Domain-validation bypass. Browser Use’s heuristic URL checker failed on unicode homographs, letting adversaries smuggle commands from look-alike domains.

3. Silent lateral movement. Once an agent has the user’s cookies, it can impersonate them across any connected SaaS property, blending into legitimate automation logs.

These aren’t theoretical edge cases; they are inherent consequences of giving an LLM permission to act rather than merely answer, which is the root cause of the exploits outlined above. Once that line is crossed, every byte of input (visible or hidden) becomes a potential initial-access payload.
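
To make the domain-validation finding concrete, here is a minimal sketch of the kind of check that closes the homograph gap. The allowlist and URLs are hypothetical and this is not Browser Use's actual validator; the point is simply that comparisons should happen on the canonical ASCII (punycode) form of the hostname rather than on the visually rendered string.

```python
from urllib.parse import urlparse

ALLOWED_HOSTS = {"github.com", "example.com"}  # hypothetical allowlist

def is_allowed(url: str) -> bool:
    """Allow navigation only to hosts whose canonical (punycode) form is allowlisted."""
    host = (urlparse(url).hostname or "").rstrip(".").lower()
    try:
        # Canonicalize to the ASCII form the network actually resolves, so a
        # look-alike Unicode label cannot masquerade as an allowed domain.
        ascii_host = host.encode("idna").decode("ascii")
    except UnicodeError:
        return False  # reject anything that cannot be canonicalized
    return ascii_host in ALLOWED_HOSTS

print(is_allowed("https://example.com/repo"))        # True
# Cyrillic 'е' (U+0435) in place of Latin 'e' canonicalizes to an xn-- domain
# that is not on the allowlist, so the agent refuses to treat it as trusted.
print(is_allowed("https://\u0435xample.com/login"))  # False
```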

To be sure, open source visibility and red team disclosure accelerate fixes - Browser Use shipped a patch within days of our CVE report. And defenders can already sandbox agents, sanitize inputs, and restrict tool scopes. But those mitigations are optional add-ons, whereas the threat is systemic. Relying on post-hoc hardening mimics the early browser wars, when security followed functionality, and drive-by downloads became the norm.

Architectural problem

Governments are beginning to notice the architectural problem. The NIST AI Risk-Management Framework urges organizations to weigh privacy, safety and societal impact as first-class engineering requirements. Europe’s AI Act introduces transparency, technical-documentation and post-market monitoring duties for providers of general-purpose models - rules that will almost certainly cover agent frameworks such as Browser Use.

Across the Atlantic, the U.S. SEC’s 2023 cyber-risk disclosure rule expects public companies to reveal material security incidents quickly and to detail risk-management practices annually. Analysts already advise Fortune 500 boards to treat AI-powered automation as a headline cyber-risk in upcoming 10-K filings. As Reuters has noted, when an autonomous agent leaks credentials, executives will have scant wiggle room to argue that the breach was “immaterial.”

Investors funneling eight-figure sums into agentic start-ups must now reserve an equal share of runway for threat-modeling, formal verification, and continuous adversarial evaluation. Enterprises piloting these tools should require:

Isolation by default. Agents should separate the planner, executor, and credential oracle into mutually distrustful processes that talk only via signed, size-bounded protobuf messages (a minimal sketch follows after these recommendations).

Differential output binding. Borrow from safety-critical engineering: require a human co-signature for any sensitive action.

Continuous red-team pipelines. Make adversarial HTML and jailbreak prompts part of CI/CD. If the model fails a single test, block release.

Societal SBOMs. Beyond software bills of materials, vendors should publish security-impact surfaces: exactly which data, roles and rights an attacker gains if the agent is compromised. This aligns with the AI-RMF’s call for transparency regarding individual and societal risks.

Regulatory stress tests. Critical-infrastructure deployments should pass third-party red-team exams whose high-level findings are public, mirroring banking stress-tests and reinforcing EU and U.S. disclosure regimes.
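
To illustrate the isolation-by-default recommendation above, here is a minimal sketch of a signed, size-bounded message envelope between a planner and an executor. It is an assumption-laden illustration: HMAC-signed JSON stands in for the protobuf messages recommended above, key provisioning is simplified to a single in-process key, and the credential oracle and real process separation are out of scope.

```python
import hashlib
import hmac
import json
import os

MAX_MESSAGE_BYTES = 4096               # size bound: oversized payloads are rejected outright
PLANNER_EXECUTOR_KEY = os.urandom(32)  # in practice, provisioned per process pair

def seal(payload: dict, key: bytes) -> bytes:
    """Planner side: serialize, size-bound, and sign an action request."""
    body = json.dumps(payload, separators=(",", ":"), sort_keys=True).encode()
    if len(body) > MAX_MESSAGE_BYTES:
        raise ValueError("message exceeds size bound")
    tag = hmac.new(key, body, hashlib.sha256).digest()
    return tag + body

def unseal(blob: bytes, key: bytes) -> dict:
    """Executor side: verify the size bound and signature before acting on anything."""
    if len(blob) > MAX_MESSAGE_BYTES + 32:
        raise ValueError("message exceeds size bound")
    tag, body = blob[:32], blob[32:]
    expected = hmac.new(key, body, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("signature check failed: drop the message")
    return json.loads(body)

# Planner proposes an action; the executor only acts on verified, well-formed requests.
request = seal({"action": "click", "selector": "#submit"}, PLANNER_EXECUTOR_KEY)
print(unseal(request, PLANNER_EXECUTOR_KEY))
```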

The security debt

The web did not start secure and grow convenient; it started convenient, and we are still paying the security debt. Let us not rehearse that history with autonomous browsing agents. Imagine past cyber incidents multiplied by autonomous agents that work at machine speed and hold persistent credentials for every SaaS tool, CI/CD pipeline, and IoT sensor in an enterprise. The next “invisible div tag” could do more than leak a password: it could rewrite PLC set-points at a water-treatment plant, misroute 911 calls, or bulk-download the pension records of an entire state.

If the next $17 million goes to demo reels instead of hardened boundaries, the 3 a.m. secret you lose might not just embarrass a CTO - it might open the sluice gate to poison supplies, stall fuel deliveries, or crash emergency-dispatch consoles. That risk is no longer theoretical; it is actuarial, regulatory, and, ultimately, personal for every investor, engineer, and policy-maker in the loop.

Security first or failure by default for agentic AI is therefore not a philosophical debate; it is a deadline. Either we front-load the cost of trust now, or we will pay many times over when the first agent-driven breach jumps the gap from the browser to the real world.

We feature the best AI chatbot for business.

This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro

Categories: Technology

Why hacking yourself first is essential for proactive cybersecurity

TechRadar News - Mon, 06/02/2025 - 02:50

In an increasingly complex cybersecurity landscape, the concept of "hacking yourself first" is not new as such. Organizations have long been engaging white hat hackers to simulate attacks and identify vulnerabilities before malicious actors can exploit them.

However, the traditional approach to red teaming, which typically involves selecting a few trusted individuals to test a system, is no longer sufficient.

More open and competitive red teaming

The issue lies in scale and diversity. A small, internal team will always be limited by their own experiences and perspectives, while cybercriminals operate in a global, decentralized environment. To stay ahead, security testing has to reflect that same breadth and depth of capability.

We believe that this is where a more open and competitive red teaming model comes into its own. Rather than relying on a fixed set of internal engineers or external consultants, organizations are increasingly turning to decentralized architectures.

These invite skilled professionals from around the world to solve specific, targeted challenges. The best talent is incentivized to respond, and the organization benefits from rapid, high-quality insights tailored to the specific threats it faces.

In practice, this model offers two significant advantages over the standard white-hat hacking exercise. First, it ensures that the right expertise is applied to the right challenge. Not every engineer is equipped to uncover flaws in VPN detection or anti-fingerprinting solutions. A decentralized approach enables organizations to source the most relevant skill sets directly, without needing to retrain or reallocate internal teams.

Secondly, the incentive mechanism encourages speed and transparency. Contributors are motivated to share findings immediately so that they can claim rewards. This reduces and even eliminates delays and ensures that critical information reaches defenders quickly.

Traditional methods

The benefits of this approach are already being realized. In sectors such as fintech and Web3, attacks discovered through decentralized red teaming have been observed in the wild months later. This lead time allows businesses to prepare and adapt before those attacks gain traction in broader markets.

It’s important to recognize that decentralized red teaming is not about replacing traditional methods entirely. Conventional penetration testing still plays a valuable role in improving baseline security. But as threats evolve and attackers become more sophisticated, organizations need a more dynamic and scalable way to test their defenses.

Proactive security

Ultimately, the shift from reactive to proactive security cannot be achieved through periodic exercises alone. It requires continuous, adaptive engagement with the threat landscape, and a willingness to invite external expertise into the process. By embracing a more competitive and decentralized approach to red teaming, businesses can significantly improve their resilience and stay one step ahead of attackers.

Cybersecurity is no longer about responding to yesterday’s threats. It is about anticipating tomorrow’s, and making sure your defenses are ready today.

We feature the best business VPNs.

This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro

Categories: Technology

Today's NYT Mini Crossword Answers for Monday, June 2

CNET News - Sun, 06/01/2025 - 23:49
Here are the answers for The New York Times Mini Crossword for June 2.
Categories: Technology

9 New Movies on Netflix We Can't Wait to Watch This June

CNET News - Sun, 06/01/2025 - 17:00
This June on Netflix, check out Taraji P. Henson in Straw, plus Trainwreck: The Astroworld Tragedy and more.
Categories: Technology

Quantum computing startup wants to launch a 1000-qubit machine by 2031 that could make the traditional HPC market obsolete

TechRadar News - Sun, 06/01/2025 - 15:32
  • Nord Quantique promises quantum power without the bulk or energy drain
  • Traditional HPC may fall if Nord’s speed and energy claims prove real
  • Cracking RSA-830 in an hour could transform cybersecurity forever

A quantum computing startup has announced plans to develop a utility-scale quantum computer with more than 1,000 logical qubits by 2031.

Nord Quantique has set an ambitious target which, if achieved, could signal a seismic shift in high-performance computing (HPC).

The company claims its machines are smaller and would offer far greater efficiency in both speed and energy consumption, thereby making traditional HPC systems obsolete.

Advancing error correction through multimode encoding

Nord Quantique uses “multimode encoding” via a technique known as the Tesseract code, and this allows each physical cavity in the system to represent more than one quantum mode, effectively increasing redundancy and resilience without adding complexity or size.

“Multimode encoding allows us to build quantum computers with excellent error correction capabilities, but without the impediment of all those physical qubits,” explained Julien Camirand Lemyre, CEO of Nord Quantique.

“Beyond their smaller and more practical size, our machines will also consume a fraction of the energy, which makes them appealing for instance to HPC centers where energy costs are top of mind.”

Nord’s machines would occupy a mere 20 square meters, making them highly suitable for data center integration.

Compared with the 1,000–20,000 m² needed by competing platforms, this compact footprint further strengthens its case.

“These smaller systems are also simpler to develop to utility-scale due to their size and lower requirements for cryogenics and control electronics,” the company added.

The implication here is significant: better error correction without scaling physical infrastructure, a central bottleneck in the quantum race.

In a technical demonstration, Nord’s system exhibited excellent stability over 32 error correction cycles with no measurable decay in quantum information.

“Their approach of encoding logical qubits in multimode Tesseract states is a very effective method of addressing error correction and I am impressed with these results,” said Yvonne Gao, Assistant Professor at the National University of Singapore.

“They are an important step forward on the industry’s journey toward utility-scale quantum computing.”

Such endorsements lend credibility, but independent validation and repeatability remain critical for long-term trust.

Nord Quantique claims its system could solve RSA-830, a representative cryptographic challenge, in just one hour using 120 kWh of energy at 1 MHz speed, slashing the energy need by 99%.

In contrast, traditional HPC systems would require approximately 280,000 kWh over nine days. Other quantum modalities, such as superconducting, photonic, cold atoms, and ion traps, fall short in either speed or efficiency.

For instance, cold atoms might consume only 20 kW, but solving the same problem would take six months.
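
Taking these figures at face value, a quick back-of-the-envelope comparison puts the claims on a common footing (all inputs are the quoted vendor claims, not independent measurements; the six-month cold-atom run is approximated as 180 days):

```python
# Sanity check using only the figures quoted above (vendor-supplied claims).
nord_kwh       = 120             # Nord Quantique: ~1 hour, 120 kWh total
hpc_kwh        = 280_000         # traditional HPC: ~9 days, 280,000 kWh
cold_atoms_kwh = 20 * 180 * 24   # cold atoms: 20 kW sustained for ~6 months

print(f"HPC vs Nord:          {hpc_kwh / nord_kwh:,.0f}x more energy")        # ~2,333x
print(f"Cold atoms vs Nord:   {cold_atoms_kwh / nord_kwh:,.0f}x more energy")  # ~720x
print(f"Energy reduction vs HPC: {(1 - nord_kwh / hpc_kwh) * 100:.2f}%")       # ~99.96%, consistent with the "by 99%" claim
```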

That said, there remains a need for caution. Post-selection, used in Nord’s error correction demonstrations, required discarding 12.6% of data per round. While this helped show stability, it introduces questions about real-world consistency.

In quantum computing, the leap from laboratory breakthrough to practical deployment can be vast; thus, the claims on energy reduction and system miniaturization, though striking, need independent real-world verification.

You might also like
Categories: Technology

Today's NYT Connections: Sports Edition Hints and Answers for June 2, #252

CNET News - Sun, 06/01/2025 - 15:00
Hints and answers for the NYT Connections: Sports Edition puzzle, No. 252, for June 2.
Categories: Technology

Today's NYT Strands Hints, Answers and Help for June 2, #456

CNET News - Sun, 06/01/2025 - 15:00
Here are hints and answers for the NYT Strands puzzle No. 456 for June 2.
Categories: Technology

Today's NYT Connections Hints, Answers and Help for June 2, #722

CNET News - Sun, 06/01/2025 - 15:00
Hints and answers for Connections for June 2, #722.
Categories: Technology

Seagate CEO hints at 150TB hard drives thanks to novel 15TB platters, but notes it won't happen for another decade

TechRadar News - Sun, 06/01/2025 - 13:34
  • Seagate’s HAMR roadmap could deliver 150TB hard drives - but not before 2035
  • Mozaic platform now enables 4TB platters, paving the way to 10TB disks by 2028
  • Mozaic 4 to ship in 2026, while Mozaic 5 aims for late 2027 qualifications

At Seagate’s recent 2025 Investor and Analyst Conference, CEO Dr. Dave Mosley and CTO Dr. John Morris outlined the company’s long-term roadmap for hard drive innovation.

This hinted at the possibility of 150TB hard drives, the largest HDDs ever, enabled by groundbreaking 15TB platters, but cautioned that this milestone remains at least a decade away.

The foundation of this future lies in Seagate’s HAMR (Heat-Assisted Magnetic Recording) technology, currently being deployed through the company’s Mozaic platform.

10TB per platter on track for 2028

“We have high confidence in our product roadmap through Mozaic 5. And notably, the design space for granular iron platinum media that's in Mozaic 3 looks very viable to get us up to 10 terabytes per disk,” said Dr. Morris.

That 10TB-per-disk benchmark is expected to be reached by 2028. “We do have confidence that we can provide a path to 10 terabytes per disk in roughly this time frame,” Morris added, explaining that spin-stand demonstrations of new technologies typically take five years to reach product qualification.

Looking beyond 10TB, Seagate is exploring how to extend the capabilities of its Iron Platinum media.

“We believe that there's another level of extension of that granular iron platinum architecture that could theoretically get as high as 15 terabytes per disk.”

Such an achievement would pave the way for 150TB hard drives by stacking 10 platters per unit. However, he warned, “beyond 15 terabytes per disk is going to require some level of disruptive innovation.”

Seagate’s CEO, Dave Mosley, echoed this long-range vision, noting, “We now know how we can get to 4 and 5 and beyond. As a matter of fact, we have visibility... beyond 10 terabytes of disk with the HAMR technology.”

“It’s not going to be easy, but I’m convinced that’s going to keep us on a competitive cost trajectory that no other technology is going to supplant in the next decade, probably beyond.”

The company’s confidence is backed by recent milestones. Mozaic 3, which delivers 3TB per platter, is now in volume production, and Mozaic 4 (4TB per platter) is scheduled to enter customer qualification next quarter.

Seagate expects to begin volume shipments of Mozaic 4 drives in the first half of 2026. Meanwhile, Mozaic 5, targeting 5TB per platter, is planned for customer qualification in late 2027 or early 2028.

Still, Seagate made it clear that 150TB drives based on 15TB platters are not imminent. As Morris emphasized, “This is just one other element in the work that we do to underpin our strategy... it will take time. There’s still a lot of work in front of us to get there.”

You might also like
Categories: Technology

We may have some information on incoming smartwatches from Android phone and tablet maker HMD

TechRadar News - Sun, 06/01/2025 - 11:30
  • Details of two HMD smartwatches have emerged
  • Both wearables are said to be running Wear OS
  • One of the models comes with a 2MP camera attached

It appears we may soon get a couple of new contenders for our best smartwatches list. HMD (perhaps best known for releasing Nokia-branded phones in recent years) is rumored to be working on two smartwatches, both running Wear OS, and with a camera fitted to one of them.

This comes from tipster @smashx_60 (via Notebookcheck), and while we can't guarantee the accuracy of the claim, smartwatches would be a sensible next step for HMD – which already makes phones, tablets, earbuds, and the HMD OffGrid.

According to the leak, the first smartwatch will be the HMD Rubber 1, with a 1.85-inch OLED screen, a 400 mAh battery, and heart rate and SpO2 tracking. There's also, apparently, a 2-megapixel camera on board this model.

Then there's the HMD Rubber 1S, which comes with a smaller 1.07-inch OLED display, a smaller 290 mAh battery, and no camera – though the heart rate and SpO2 tracking features are still included. It sounds as though this will be the cheaper choice.

For adults or kids?

HMD RUBBER 1 - oled 1.85" display - 5ATM Waterproof - BT5.3, WiFi, NFC, Accelerometer, heart rate, SpO2 - 2MP CAM - Wear OS - 400mAh, USB-C, Qi
HMD RUBBER 1S - oled 1.07" - 5ATM Waterproof - BT5.0, WiFi, Accelerometer, heart rate, SpO2 - Wear OS - 290mAh, USB-C, Qi
(May 29, 2025)

The camera on the HMD Rubber 1 is interesting, as this would be something we haven't seen on a Wear OS watch before. While it's not clear how the camera would be integrated, presumably it would allow photos and videos to be captured from your wrist, with or without a phone connected.

There's some speculation in the Notebookcheck article that these smartwatches may be intended for kids to use, rather than adults – something along the lines of the Samsung Galaxy Watch for Kids that launched at the start of the year, perhaps.

The leak also mentions that these smartwatches will come with 5 ATM waterproofing, which is good for depths of up to 50 meters. That suggests they'll have a relatively robust casing around the internal components.

We'll have to wait and see what HMD might have in store, though as yet there's been nothing official from the company. In the meantime, we're patiently waiting for the arrival of Wear OS 6, which is expected to be pushed out in the next month or two.

You might also like
Categories: Technology

Best Internet Providers in Eugene, Oregon

CNET News - Sun, 06/01/2025 - 10:15
While the broadband options in Eugene are a bit limited, residents can still access fixed wireless, fiber and cable internet in the Emerald City. Our CNET experts have rounded up the top choices to help you find the best ISP for your home.
Categories: Technology

Elden Ring Nightreign Director Interview Part Two: Why There's No Poison Swamp and Future DLC

CNET News - Sun, 06/01/2025 - 08:00
Our conversation with director Junya Ishizaki continues as we dive into his favorite FromSoftware game and who he plays in Nightreign.
Categories: Technology

Highlights From World's First Humanoid Robot Kickboxing Tournament

CNET News - Sun, 06/01/2025 - 07:00
Several Unitree G1 robots remotely operated by human beings punched, kicked and kneed one another in a fight to the top.
Categories: Technology

I'd Keep Hulu on My Streaming Bingo Card in June and These Other Services, Too

CNET News - Sun, 06/01/2025 - 07:00
Make The Bear your business this month.
Categories: Technology
