When it comes to cleaning habits, opinions vary massively. I manage all the floorcare articles on TechRadar, and while I have some of the very best vacuum cleaners in my cupboard, and spend a lot more time thinking about vacuuming than the average person, that doesn't mean I'm actually doing it that regularly.
To check whether my cleaning habits are up to scratch, I decided to ask a cleaning expert exactly how often we should all be vacuuming our floors. I spoke to Kiril Natov, a carpet and upholstery cleaning technician and CEO of cleaning company Premium Clean, with 18 years of experience in the industry. Here's what he had to say…
How often should you vacuum carpets?
"Aim to vacuum carpets at least twice a week, even if they don't appear dirty," suggests Natov. If you have pets or children, he recommends upping that frequency.
Does it matter what kind of vacuum you use?
"For cut pile carpets, use a vacuum with a beater bar or brush to enhance cleaning," Natov advises. A beater bar is just a rotating roller within the cleaner head, often with bristles, which helps agitate the carpet fibers. When the TechRadar team is recommending the best vacuums for carpet, we also look for models that have enough suction to pull up dust and debris that's wedged deeper into the carpet.
Natov also suggests you could use a robovac. My experience is that even today's best robot vacuums don't have enough suction to really deep clean carpet, but they are great for staying on top of regular, light cleans.
A beater bar will help agitate the carpet fibers (Image credit: Future)

How often should you vacuum hard floor?
"We recommend vacuuming hardwood floors once a week," says Natov. He suggests using your vacuum's hard floor setting if it has one.
What kind of vacuum should you use for hard floors?
"Any vacuum can suck dust, hair, and crumbs off your hardwood, tile, or vinyl floors, but some models do it better than others," says Natov. "To avoid scattering debris or possibly damaging delicate flooring, look for a vacuum that either lets you switch off the spinning brush roll or has a special cleaner head with soft bristles."
My own top picks for the best vacuum cleaners for hardwood floors are models that include a dedicated fluffy floor head – as Natov says, these are perfect for cleaning dust and debris from the surface of even delicate hard floors. Examples include the Dyson V15 Detect and the Dreame R20.
For hard floors, look for a vacuum with a dedicated soft roller – Dyson's version comes with a laser to highlight dust (Image credit: Future)

How often should you vacuum your sofa?
"We recommend vacuuming the furniture and upholstery fabric once a month," says Natov, noting that pet owners might want to increase that to fortnightly. He suggests paying special attention to your sofa – it's one of the most high-traffic areas in your home, so it's bound to pick up dirt and dust.
When it comes to cleaning furniture and upholstery, it's all about the attachments. If you're buying a cordless vacuum, look for one with a mini motorized cleaner head – kind of like a shrunk-down version of the main floor head. This will help you cover larger, uneven surfaces like sofa cushions.
A mini motorized tool is ideal for cleaning sofas and other soft furnishings (Image credit: Future)

Natov's final hot tip for ensuring a thorough clean is to stay on top of vacuum cleaner maintenance. Clean your filters regularly (and make sure they're completely dry before putting them back into the machine – or your vacuum will start to smell), replace filters in line with manufacturer guidelines, stay on top of emptying the dust cup, and don't forget to clean the attachments, too.
The future of ransomware threats lies in Generative Artificial Intelligence (GenAI), as hackers are increasingly using the nascent technology to improve and streamline their coding processes, experts have warned.
The latest State of Ransomware report from Kaspersky’s Global Research and Analysis Team (GReAT) analyzed FunkSec, a relatively new ransomware group, first spotted in late 2024.
Despite its junior status, FunkSec has already made a name for itself, "quickly surpassing many established actors by targeting government, technology, finance and education sectors across Europe and Asia," Kaspersky said.
Analyzing the code in its products, the researchers determined that the group is actively using GenAI.
Telltale signs include generic placeholder comments (for example “placeholder for actual check”) and technical inconsistencies (commands for different operating systems that don’t align), they said.
Furthermore, they observed declared but unused functions, such as modules included upfront but never utilized – a pattern typical of code generated by large language models.
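To make those indicators concrete, here is a deliberately benign Python sketch of what such artifacts look like; the function names and comments are hypothetical illustrations of the patterns Kaspersky describes, not code taken from FunkSec's ransomware.

```python
import platform
import subprocess

def check_license():
    # Placeholder for actual check – the kind of generic stub
    # comment the researchers cite as a telltale sign
    return True

def cleanup_temp_files():
    # Declared but never called anywhere in the program, mirroring
    # the unused-function pattern flagged in the report
    pass

def gather_network_info():
    # Technical inconsistency: a Windows-only command and a Unix-only
    # command issued back to back; LLM output often lacks any OS check
    for cmd in (["ipconfig", "/all"], ["ifconfig", "-a"]):
        try:
            subprocess.run(cmd)
        except FileNotFoundError:
            print(f"{cmd[0]} does not exist on this OS")

if __name__ == "__main__":
    print(f"Running on {platform.system()}")
    if check_license():
        gather_network_info()
```

No single marker is conclusive on its own; it's the combination of stub comments, dead code, and cross-platform confusion that suggested generated rather than hand-written code.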
“More and more, we see cybercriminals leveraging AI to develop malicious tools. Generative AI lowers barriers and accelerates malware creation, enabling cybercriminals to adapt their tactics faster. By reducing the entry threshold, AI allows even less experienced attackers to quickly develop sophisticated malware at scale,” said Marc Rivero, Lead Security Researcher at Kaspersky’s GReAT.
AI-powered attacks will probably require AI-powered defenses, as well. Today, many of the best antivirus and endpoint protection services use AI and machine learning, mostly to detect threats that traditional signature-based methods would miss.
Companies like CrowdStrike, SentinelOne, Sophos, Microsoft Defender for Endpoint, Palo Alto Networks, and many others, are vocal about their AI/ML capabilities, often emphasizing speed, accuracy, and lower false positives compared to legacy solutions.
In this report, Kaspersky recommended users enable ransomware protection for all endpoints, keep everything updated, and focus defense strategies on detecting lateral movements and data exfiltration, among other things.
Microsoft has once again made significant cuts to its gaming division, as Xbox cancels a handful of games, closes a studio, and performs yet another round of mass layoffs.
As reported by Bloomberg, Xbox has closed down studio The Initiative and subsequently canceled the upcoming Perfect Dark reboot. Rare's Everwild has also been canceled, as well as an unannounced online title from The Elder Scrolls Online developer Zenimax Online Studios.
Matt Booty, head of Xbox Game Studios, has said in an email to Xbox staff that the cuts “reflect a broader effort to adjust priorities and focus resources to set up our teams for greater success within a changing industry landscape.”
To facilitate these adjustments, Xbox has reportedly laid off staff across a number of its studios. These include ZeniMax, Candy Crush maker King, Call of Duty support studios Raven Software and Sledgehammer Games, Forza Motorsport developer Turn 10 Studios, and Halo Studios, which is responsible for the wider Halo franchise.
In total, it seems that a harrowing 9,000 staff members have been affected by the cuts. Back in May 2025, Microsoft also cut an estimated 6,000 to 7,000 employees from its workforce, meaning the company has potentially let around 15,000 staff go this year alone - a terrifyingly high number.
It's not the first time Xbox has elected to close entire studios, either. Last year, the company shuttered Arkane Austin, as well as Alpha Dog Games, Roundhouse Games, and perhaps most controversially, Tango Gameworks. Thankfully, the Hi-Fi Rush and The Evil Within developer was rehoused under South Korean publisher Krafton, but not every studio was so lucky.
At this point, we've seen numerous leaked renders of the Samsung Galaxy Z Fold 7, but so far we haven't seen much in the way of photos. One leaked photo showed the back of the phone, but that was about it – until now.
Today, @Jukanlosreve – who has a great track record for leaks – has shared photos showing the front, back, and sides of the Samsung Galaxy Z Fold 7, and while the details are no different to what we’ve already seen in renders, the phone looks much better in the flesh.
You can see that it’s in a blue shade here, which could be the ‘Blue Shadow’ that we’ve previously heard might be one of the Samsung Galaxy Z Fold 7 colors – though from the name we’d expect that to be darker.
Z Fold 7 pic.twitter.com/h8EhC7LbTP (July 3, 2025)
In any case, you can also see a triple-lens camera, and one of the photos provides a good look at just how slim this phone might be.
Previous leaks disagree on exactly how thin it will be, with sources pointing to anything from 3.9mm to 4.5mm when unfolded, but anywhere in that range would make it a lot thinner than the 5.6mm thick Samsung Galaxy Z Fold 6.
A dust disappointment
Fold7 IP48 pic.twitter.com/o9icyqDTmf (July 2, 2025)
In other Samsung Galaxy Z Fold 7 news, a leaked energy rating label shared by @MysteryLupin lists it as having an IP48 rating. That's the same as the current model, and is at odds with some earlier leaks that pointed to better dust resistance.
If the Samsung Galaxy Z Fold 7 does have this rating, then while it will be able to survive submersion in water to depths of 1.5 meters for up to 30 minutes, it will only have minimal dust resistance – the '4' in IP48 guarantees protection only against solid objects larger than 1mm – so that would be disappointing.
We suspect this leaked label is correct though, since foldable phones always struggle with dust resistance, and since leaker @PandaFlashPro also recently claimed it has an IP48 rating.
We should find out for sure soon, as the Samsung Galaxy Z Fold 7 is set to launch on July 9, alongside the Samsung Galaxy Z Flip 7.
Another hardcoded credential for admin access has been discovered in a major software application - this time around it's Cisco, which discovered the slip-up in its own Unified Communications Manager (Unified CM) solution.
Cisco Unified CM is an enterprise-grade IP telephony call control platform providing voice, video, messaging, mobility, and presence services. It manages voice-over-IP (VoIP) calls and handles tasks such as user/device provisioning, voicemail integration, conferencing, and more.
Recently, Cisco found login credentials coded into the program, allowing for access with root privileges. The bug is now tracked as CVE-2025-20309, and was given a maximum severity score - 10/10 (critical). The credentials were apparently used during development and testing, and should have been removed before the product was shipped to the market.
Cisco Unified CM and Unified CM SME Engineering Special (ES) releases 15.0.1.13010-1 through 15.0.1.13017-1 were said to be affected, regardless of the device configuration. There are no workarounds or mitigations, and the only way to address it is to upgrade the program to version 15SU3 (July 2025).
“A vulnerability in Cisco Unified Communications Manager (Unified CM) and Cisco Unified Communications Manager Session Management Edition (Unified CM SME) could allow an unauthenticated, remote attacker to log in to an affected device using the root account, which has default, static credentials that cannot be changed or deleted," Cisco said.
At press time, there was no evidence of abuse in the wild.
Hardcoded credentials are one of the more common causes of system infiltrations. Just recently, the Sitecore Experience Platform, an enterprise-level content management system (CMS), was found to contain a hardcoded password for an internal user. The password was a single letter - 'b' - making it trivially easy to guess.
Roughly a year ago, security researchers from Horizon3.ai found hardcoded credentials in SolarWinds’ Web Help Desk.
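As a purely defensive illustration (not Cisco's internal tooling), a minimal pattern-based scan like the sketch below can catch the most obvious hardcoded credentials before code ships. The regexes and the 'src' directory are assumptions for the example; production-grade scanners such as gitleaks or truffleHog use far larger rule sets plus entropy analysis.

```python
import re
from pathlib import Path

# Naive patterns for obvious hardcoded credentials; real scanners
# go much further than simple assignments like password = "hunter2".
PATTERNS = [
    re.compile(r"""(password|passwd|pwd)\s*=\s*["'][^"']+["']""", re.IGNORECASE),
    re.compile(r"""(api_key|secret|token)\s*=\s*["'][^"']+["']""", re.IGNORECASE),
]

def scan(root: str) -> None:
    # Walk the source tree and flag any line matching a credential pattern
    for path in Path(root).rglob("*.py"):  # extend the glob for other languages
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            if any(p.search(line) for p in PATTERNS):
                print(f"{path}:{lineno}: possible hardcoded credential")

if __name__ == "__main__":
    scan("src")  # hypothetical project directory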
Via BleepingComputer
As AI models grow larger and more capable, the supporting infrastructure must evolve in tandem. AI’s insatiable appetite has Big Tech going as far as restarting nuclear power plants to support massive new datacenters, which today account for as much as 2% of global electricity consumption, or more than the entire country of Germany.
But the humble power grid is where we need to start.
Constructing the computing superstructure to support AI tools will significantly alter the demand curve for energy and put increasing strain on electrical grids. As AI embraces more complex workloads across both training and inference, compute needs – and thereby power consumption – are expected to increase exponentially. Some forecasts suggest that datacenter electricity consumption could increase to as much as 12% of the global total by 2030.
Semiconductors form the cornerstone of AI computing infrastructure. The chipmaking industry has focused primarily on expanding renewable energy sources and delivering improvements in energy-efficient computing technologies. These are necessary but not sufficient – they cannot sustainably support the enormous energy requirements demanded by the growth of AI. We need to build a more resilient power grid.
Moving from Sustainability to Sustainable Abundance
In a new report, we call for a different paradigm – sustainable energy abundance – which will be achieved not by sacrificing growth, but by constructing a holistic energy strategy to power the next generation of computing. The report represents the work of major companies across the AI technology stack, from chip design and manufacturing to cloud service providers, as well as thought leaders from the energy and finance sectors.
The foundational pillar of this new strategy is grid decarbonization. Although not a new concept, in the AI era it requires an approach that integrates decarbonization with energy abundance, ensuring AI’s productivity gains are not sidelined by grid constraints. In practical terms, this entails embracing traditional energy sources like oil and gas, while gradually transitioning toward cleaner sources such as nuclear, hydro, geothermal, solar and wind. Doing this effectively requires understanding of the upgrades needed for the electricity grid to enable rapid integration of existing and new energy sources.
A company consuming electricity from the grid naturally inherits the emissions profile of the grid itself. It should come as no surprise that grid-related emissions represent the single biggest component of the emissions bill facing any given company. In the conventional approach to sustainability, companies focused more on offsetting emissions derived from the grid than on supplying the grid with cleaner (or carbon-free) energy. To support the coming scale-out of AI infrastructure, access to a clean grid will be one of the most important factors in reducing carbon footprint.
Strategically selecting locations for datacenters and semiconductor fabs will be critical. Countries and regions have a varying mix of clean energy in the power grid, which impacts their carbon emission profile. For example, the United States and France generate a similar percentage of their overall electricity from renewable sources. However, the United States has a significantly higher country emission factor, which represents the direct carbon emission per kilowatt-hour of electricity generated.
This is because most of the electricity in France is generated through nuclear power, while the United States still gets a significant percentage of electricity supplied through coal and natural gas. Likewise, there could be significant differences within a country such as the United States, with states like California having a higher mix of renewables compared to some other states.
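To see how much siting matters, here is a back-of-the-envelope sketch that multiplies a hypothetical datacenter's annual consumption by per-country emission factors. The factors are rough public estimates used here as assumptions, not figures from the report.

```python
# Illustrative grid emission factors in kg CO2 per kWh; rough public
# estimates used as assumptions, not numbers from the report.
EMISSION_FACTORS = {
    "France (mostly nuclear)": 0.06,
    "United States (mixed grid)": 0.39,
}

ANNUAL_CONSUMPTION_KWH = 100_000_000  # hypothetical 100 GWh datacenter

for grid, factor in EMISSION_FACTORS.items():
    tonnes_co2 = ANNUAL_CONSUMPTION_KWH * factor / 1000  # kg -> tonnes
    print(f"{grid}: {tonnes_co2:,.0f} tonnes CO2 per year")
```

Under these assumptions the same facility emits roughly 6,000 tonnes of CO2 a year on a nuclear-heavy grid versus about 39,000 tonnes on a mixed grid, which is why location can outweigh many other efficiency measures.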
Driving Innovation in Semiconductor Technology
A truly resilient grid strategy must start with expanded capacity for nuclear, wind, solar, and traditional forms of energy, while driving a mix shift to cleaner sources over time. However, to achieve this enhanced capacity, it will be necessary to invest in disruptive innovations. Transmission infrastructure must be modernized, including upgraded lines, substations and control systems. Likewise, the industry must take advantage of smart distribution technologies, deploying digital sensors and AI-driven load management techniques.
Semiconductors have an important role to play. Continued growth of GPUs and other accelerators will drive corresponding growth in datacenter power semiconductors, along with increasing semiconductor content in other components such as the motherboard and the power supply.
We forecast that the datacenter power semiconductor market could reach $9 billion by 2030, driven by an increase in servers as well as the number of accelerators per server. Approximately $7 billion of the opportunity is driven by accelerators, with the rest coming from the power supply and other areas. As the technology matures, we believe gallium nitride will play an important role in this market, given its high efficiency.
As the grid incorporates increasing levels of renewables, more semiconductors will be needed for energy generation. Silicon carbide will be important for solar generation and potentially wind as well. We estimate that renewable energy generation could grow to more than a $20 billion market for semiconductors by 2030. A similar opportunity exists for smart infrastructure such as meters, sensors and heat pumps.
Shifting Incentives for Sustainable Growth
Restructuring the power grid offers the single biggest opportunity to deliver sustainable, abundant energy for AI. Modernizing the power grid will require complex industry partnerships and buy-in from company leadership. In the past, sustainability initiatives were largely regarded as a compliance checkbox item, with unclear ties to business results. A new playbook is needed to enable the growth of AI while shifting business incentives toward generation, transmission, distribution and storage of clean energy and modernization of the power grid.
To truly harness the transformative productivity and prosperity potential of AI, we need a comprehensive sustainability strategy that expands clean energy capacity, modernizes energy infrastructure, and maintains diverse energy generation sources to ensure stable, abundant power for continued technological innovation. When combined with progress in energy-efficient computing and abatement measures, this holistic approach can realistically accelerate the pursuit of sustainability while mitigating the risk of curtailing growth due to insufficient energy resources.
We list the best IT infrastructure management services.
This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro
Where we are today is not hybrid cloud rebranded. Hybrid was a transition strategy. Distributed is an entirely new operating environment, where cloud infrastructure and services are physically located in multiple, dispersed environments: on-premise data centers, multiple public clouds, edge locations, and sovereign zones. Yet they are managed as a single, cohesive system. Unlike centralized or hybrid approaches, distributed cloud treats geographic and architectural diversity as a feature, not a compromise.
This shift happened gradually. Organizations reacted to new regulatory frameworks like GDPR and FedRAMP, which enforce data locality and privacy standards that centralized architectures can’t always support. Meanwhile, latency-sensitive applications, like real-time analytics, pulled compute closer to the user, pushing cloud computing infrastructure to the edge. And cost became a concern: 66% of engineers report disruptions in their workflows due to lack of visibility into cloud spend, with 22% saying the impact is equivalent to losing a full sprint.
Distributed cloud addresses all of these challenges, enabling businesses to comply with regulations, improve performance, localize deployments, and maintain operational continuity in one architectural shift. But managing it to ensure that a distributed framework actually reaches its full potential requires serious rethinking. Infrastructure has to be modular and versioned by design, not patched together.
Dependencies need to be explicit, so changes don’t cascade unpredictably. Visibility should extend beyond individual cloud providers, and governance has to follow workloads wherever they run. Yet most organizations today operate without these principles, leaving them struggling with fragmentation, limiting their scalability, opening the door to security and competitive threats, and slowing innovation.
Old Tools, New Problems
There's growing evidence to show just how widespread the shift toward distributed cloud has become: 89% of organizations now use a multi-cloud strategy, with 80% relying on multiple public clouds and 60% using more than one private cloud. The reasons are strategic: reducing vendor lock-in, complying with data localization laws, and improving performance at the edge.
But the consequences are operational. Fragmentation creates chaos. Teams struggle with version control, lifecycle inconsistencies, and even potential security lapses. Infrastructure teams become gatekeepers, and developers lose confidence in the systems they rely on.
Most organizations are still applying traditional centralized cloud management principles to a distributed world. They rely on infrastructure as code (IaC), stitched together with pipelines and scripts that require constant babysitting. This approach doesn’t scale across teams and regions. IaC also introduces new dependencies between layers that are invisible until they break.
All in all, the approach is problematic: 97% of IaC users experience difficulties, with developers often viewing IaC as a “necessary evil” that slows down application deployment. The result is a kind of paralysis: any change carries too much risk, so nothing changes at all.
A New Operating Model for a Fragmented World
Solving this requires more than another tool. It requires a new operating model and mindset. Infrastructure should be broken into modular, composable units with clear boundaries and pre-defined dependencies. Teams should be able to own their layer of the stack without impacting others. Changes should be trackable, auditable, and safe to automate.
Platforms that offer a single control plane across environments can make this possible. They turn complexity from a liability into a strategic asset: one that offers flexibility without sacrificing control. This is where emerging approaches like blueprint-based infrastructure management offer a compelling path forward. Instead of expecting AI or DevOps teams to connect workflows, infrastructure can be transformed into modular components. Think Lego bricks, except it’s a chunk of code that’s versioned, pre-approved, and reusable.
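A minimal Python sketch of that idea follows – the names and structure are hypothetical, not any vendor's API. Each unit is versioned, declares its dependencies explicitly, and deploys only after everything it depends on is in place.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Blueprint:
    """A versioned, reusable infrastructure unit with explicit dependencies."""
    name: str
    version: str
    depends_on: tuple = ()

def deploy_order(blueprint):
    """Resolve dependencies depth-first so each unit deploys after the
    units it depends on – keeping changes trackable and safe to automate."""
    seen, order = set(), []
    def visit(bp):
        if bp.name in seen:
            return
        seen.add(bp.name)
        for dep in bp.depends_on:
            visit(dep)
        order.append(f"{bp.name}@{bp.version}")
    visit(blueprint)
    return order

network = Blueprint("network", "1.4.0")
database = Blueprint("database", "2.1.3", (network,))
app = Blueprint("app-tier", "0.9.1", (network, database))

print(deploy_order(app))
# ['network@1.4.0', 'database@2.1.3', 'app-tier@0.9.1']
```

Because every dependency is declared rather than implied by pipeline ordering, a change to one brick can be validated against exactly the units it touches, which is the property ad-hoc IaC scripts tend to lose at scale.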
In this model, automation doesn’t mean giving up control. It means enabling teams to move faster within guardrails that are defined by architecture. The result is a system that scales, even across regulatory zones, business units, and tech stacks. This kind of shift doesn’t eliminate complexity, but it makes it manageable, legible, and strategic.
And we’re already seeing the rise of blueprint-based and modular infrastructure strategies across the industry—from Cisco’s UCS X-Series, which decouples hardware lifecycles for flexible upgrades, to Microsoft Azure Local’s unified control plane for hybrid deployments, and the growing adoption of platform engineering as a discipline for building reusable, scalable systems. It’s an evolution that just makes sense.
The Strategic Advantage of Managing Well
Distributed cloud isn't something organizations opt into. It's the state they're already operating in. The real differentiator now is how well it's managed. Scaling infrastructure has always been achievable; distributing infrastructure, by contrast, demands a different kind of investment: in architecture, workflows, and operational discipline.
Without a system built specifically for elasticity, decoupling, and visibility across environments, complexity quietly erodes both speed and trust. Infrastructure becomes harder to change, risks accumulate invisibly, and innovation slows to a crawl.
The right foundation turns that story around. Distributed infrastructure, when managed deliberately, doesn’t have to become a barrier. It becomes a catalyst.
Elastic systems allow teams to localize deployments without fragmenting control. Decoupled architectures enable parallel innovation across business units, cost centers, and regions without introducing instability. Unified visibility makes governance a continuous function, not an afterthought. Managing complexity isn’t about eliminating it; it’s about structuring it so that distributed systems can scale sustainably, adapt safely, and operate predictably.
In that context, infrastructure becomes a lever for scale instead of a source of drag. Managing it well isn’t just an operational need. It’s a strategic advantage.
We list the best cloud databases.
This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro