Complex problems often demand simple answers. When we overcomplicate things, whether in life or business, we almost always end up worse off. Later, we look back and think: if only I’d kept it simple.
Cybersecurity is no different, though the source of that simplicity may lie in unexpected places.
With the National Cyber Security Centre (NCSC) now sounding the alarm on quantum-era threats and AI-powered malware, it’s clear the risks are evolving fast. These threats adapt, mutate and inject themselves into systems at alarming speed. It’s no wonder business leaders are extremely concerned about the risk of existing cyber strategies and deployed solutions being overwhelmed.
Outspending the problem isn't working
A recent McKinsey report reveals that cybersecurity spending surged to $200 billion in 2024—up from $140 billion in 2020—yet breaches keep rising.
To confront these rising risks, organizations are doubling down on complex cybersecurity stacks, layering tools in the belief that more technology equals more protection.
But what if that logic is flawed? What if, instead of boosting your system's resilience, complexity actually increases and conceals your vulnerabilities? In truth, we're stuck in a complexity trap.
Organizations are drowning in software solutions that promise the world but deliver confusion. Each new tool might address a specific threat vector, but the resulting patchwork of platforms often leads to fragmented visibility and hidden blind spots.
In short, we risk opening more doors that attackers can walk through.
By trying to guard against every threat, we become entangled in complexity and exposed to its consequences—creating a false sense of security in the process.
Simplicity solves complexity
When you strip back your cybersecurity layers and concentrate on a back-to-basics approach that's founded on clarity, control and isolation, you achieve better protection than any complex software stack.
Now, this isn't about throwing out digital defenses. It's about recognizing their limits and rethinking where real resilience comes from.
Software alone, no matter how smart, is still vulnerable to manipulation. And with AI supercharging attacks in real time—learning from failed breach attempts, mimicking user behavior and exploiting every crack in the system at an accelerating pace—this has never been truer.
That's why physical isolation has stepped back into the conversation. It's not just a legacy idea from a pre-cloud era; it's the critical missing piece in modern cyber strategy.
The case for physical network isolation
Highly motivated threat actors and AI-powered malware can now think and spread without human input, targeting high-value assets with devastating precision and adapting mid-attack.
This calls for a defense that is unhackable by nature.
Hardware-based network isolation is exactly that. When systems are physically segmented, truly separated from the internet, remote infection becomes impossible. The key to modern deployment of this traditional airgap method lies in being able to control it, at will, on demand.
If malware can't make contact, it can't compromise. It’s that simple.
Even if a system is somehow breached, physical segmentation allows businesses to readily contain the threat. When you isolate systems from one another with hardware, not just firewalls or virtual LANs, you prevent lateral movement, stop data exfiltration and drastically reduce the blast radius of any attack.
This is especially critical for operational technology, critical infrastructure and sensitive research environments, where uptime is essential and downtime is catastrophic.
An overdue shift in thinking
The complexity trap is reflected in how we spend. According to industry research, 65% of cyber budgets now go to third-party tools and services, outpacing investment in in-house capability.
But security is not just a tech problem; it’s a strategic design challenge. Businesses today react to new threats by accumulating more tools. What’s needed instead is a clear, layered security plan that’s built with purpose, not patched together.
That begins with rethinking how much of your infrastructure truly needs to be online. In a hyperconnected world, we’ve defaulted to keeping everything on all the time.
But always-on equals always-vulnerable. If certain data or systems don’t require constant internet access, why expose them?
By selectively disconnecting key assets, at the right time, you can regain control of your business.
The future starts with hardware
Let's be clear: this isn't a step backward. It's a step toward resilience. Software-based security remains essential. But as threats evolve, our defenses must too.
Layered protection that starts with hardware-based control is the only viable way forward. It combines the speed and scale of software with the unbreachable foundations of physical isolation.
Think of it like a bank vault. The digital defenses are the alarms, cameras and motion detectors. But the vault? That’s your hardware-based barrier. Even the smartest thief can’t crack it from a distance.
Protecting your systems isn’t just about keeping up with the latest threats. It’s about doing what works, what’s reliable and proven.
Because just like in life, the clearest answers are often the strongest ones.
And in cybersecurity, simplicity is the ultimate advantage.
Social media professionals have grown increasingly dependent on artificial intelligence, with new research finding more than half saying they now can’t imagine performing their roles without it.
A survey from Hootsuite claims this growing reliance is not matched by results despite heavy investments in AI technologies.
The firm's research reveals 88% of senior marketing leaders are encouraging their teams to use AI tools, yet 81% admit budgets are being wasted on tools not fit for purpose.
Manual work persists despite automation promises
Delving deeper, Hootsuite found that many marketers are trapped in a time-consuming cycle of manual labor and subpar outcomes, revealing a deep disconnect between expectations and the actual utility of generative AI tools in marketing.
A significant proportion of social media managers still spend up to three full working days each week verifying AI-generated content and manually gathering insights from online platforms.
This lag not only drains staff time but also affects campaign performance.
As trends shift rapidly, marketers often find their content outdated by the time it is published, which may explain why over half of senior marketers feel their campaigns consistently underperform.
The financial implications are just as troubling. Budgets for AI tools continue to rise, yet for some, the wasted investment exceeds 20% of their entire marketing budget.
"This should be a wake up call to all marketers: traditional AI isn’t as sophisticated as you think it is," noted Irina Novoselsky, CEO at Hootsuite.
"With five billion people spending up to five hours a day online, social is one of the richest sources of real-time data sources available and yet, traditional AI tools still can’t harness it, leaving the insights marketers truly need hiding in plain sight."
With rising pressure from executive leadership to justify every expense, marketers are finding it increasingly difficult to defend investments in AI tools that fail to deliver tangible returns.
A critical weakness in current generative AI systems lies in their reliance on outdated datasets.
These tools often fail to capture the dynamic nature of real-time audience behavior, meaning that their insights may be out of sync with the present moment.
While 64% of senior leaders believe their AI tools offer real-time insights, only 39% of social media managers agree, a clear signal that confidence in AI’s real-world performance is uneven across organizational levels.
In response to these challenges, Hootsuite has introduced OwlyGPT, a generative AI assistant trained on live social data.
The company says this tool delivers up-to-the-minute insights tailored to brand voice and cultural context.
Considering the known issues with AI tools trained on static data, this move appears promising, but it's wise to approach it with some skepticism. After all, businesses have been led to believe in AI's transformative power before, only to confront disappointing results.
From my vantage point, I see the legacy mainframe landscape as both a testament to decades of reliable operation and a critical juncture demanding strategic evolution. The global economy's reliance on these systems is undeniable – they are the silent workhorses powering a significant majority of business transactions.
However, the accelerating pace of technological advancement, coupled with the realities of hardware lifecycles and a shifting talent pool, calls for a proactive and thoughtful approach to their eventual end-of-life. The question is no longer if we modernize, but how we navigate this complexity without disrupting the very core of operations.
Cost and complexity
A primary hurdle is the significant cost and inherent complexity of these transformations. Mainframe modernization isn't a simple tech refresh; it demands substantial investment, time, and meticulous planning. Decades of accumulated technical debt, often manifested as undocumented code and intricate dependencies, require a phased and strategic approach.
Carving the application portfolio into thin, business-aligned slices is an effective way to deliver tangible value in shorter cycles. Prioritizing initiatives with clear and early ROI, such as migrating non-critical workloads, builds momentum and stakeholder confidence.
The shrinking pool of mainframe-skilled professionals presents another critical challenge. The reality is that the workforce with deep expertise in these legacy systems is nearing retirement, creating a potential knowledge vacuum. To mitigate this, we advise codifying tribal knowledge.
This involves leveraging tools to harvest specifications from production logs and source code analytics, while also pairing retiring experts with cross-skilled engineers. Investing in upskilling programs that bridge COBOL literacy with modern cloud-native and observability skills is paramount to building a future-ready workforce capable of managing both legacy and target environments during the transition.
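As a purely illustrative example of what "harvesting specifications from production logs" can look like in practice, the short Python sketch below tallies how often each legacy transaction appears in a log extract. The log format and transaction names are hypothetical assumptions, not the output of any particular tool.

# Hypothetical sketch: mine a legacy job log to see which transactions are
# actually exercised in production, as a first step toward codifying tribal
# knowledge. The log line format and transaction names are invented examples.
import re
from collections import Counter

LOG_LINES = [
    "2024-03-01 02:10:05 JOB=NIGHTBAT TXN=ACCT-INQUIRY RC=0",
    "2024-03-01 02:10:07 JOB=NIGHTBAT TXN=PAYMENT-POST RC=0",
    "2024-03-01 02:10:09 JOB=NIGHTBAT TXN=ACCT-INQUIRY RC=4",
]

TXN_PATTERN = re.compile(r"TXN=([A-Z0-9-]+)")

def transaction_frequencies(lines):
    """Count how often each transaction code appears in the log extract."""
    counts = Counter()
    for line in lines:
        match = TXN_PATTERN.search(line)
        if match:
            counts[match.group(1)] += 1
    return counts

print(transaction_frequencies(LOG_LINES))
# e.g. Counter({'ACCT-INQUIRY': 2, 'PAYMENT-POST': 1})

Even a simple frequency count like this helps prioritize which transactions to document and migrate first, before the people who understand them retire.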
Data migration
Migrating petabytes of critical, often poorly documented, business data and its embedded logic to modern platforms is a high-stakes challenge, with severe risks of data loss or corruption. A recommended strategy involves inverting data gravity: implement an API façade over shared datasets and incrementally replicate data to the target platform using event streaming, thereby minimizing disruption.
Employing anti-corruption layers ensures a clean decoupling from legacy systems, aligning migration with modern architectures while safeguarding core business processes. Another technique we support is behavior equivalence, together with leveraging data seams to integrate with the origin system while the architecture goes through its evolution.
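To make the anti-corruption layer and incremental replication idea more concrete, here is a minimal Python sketch. The record layout, field names and the event-stream interface are assumptions made for the example, not a description of any specific platform or the exact approach used on a given program.

# Illustrative anti-corruption layer: translate a legacy-shaped record into a
# clean domain model, then publish the change to an event stream so the target
# platform can be kept in sync incrementally. All names here are hypothetical.
from dataclasses import dataclass, asdict

@dataclass
class LegacyCustomerRecord:
    cust_no: str       # zero-padded legacy key
    name_raw: str      # upper-case fixed-width name field
    status_cd: str     # single-character status code, "A" = active

@dataclass
class Customer:
    customer_id: str
    name: str
    is_active: bool

def to_domain(record: LegacyCustomerRecord) -> Customer:
    """The anti-corruption layer: keep legacy quirks out of the new model."""
    return Customer(
        customer_id=record.cust_no.lstrip("0"),
        name=record.name_raw.strip().title(),
        is_active=(record.status_cd == "A"),
    )

def replicate_change(record: LegacyCustomerRecord, stream) -> None:
    """Push the translated record onto an event stream (interface assumed)."""
    stream.publish(topic="customer-changes", payload=asdict(to_domain(record)))

A façade built this way lets new consumers read the clean model while the legacy source remains the system of record until cutover.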
Beyond the technical aspects, organizational misalignment is a frequent stumbling block. Modernization is a business transformation, not just an IT project, and one of the most reliable indicators of success is a courageous, well-supported leader who can steer the program through the inevitable stumbles and issues that arise. A clear business vision, tied to measurable outcomes like improved customer experience or reduced operational risk, is essential.
Culture of change
Fostering a culture of change through transparent communication, targeted training, and deliberate capability-building is crucial: the destination team must be fully trained and capable of operating a platform of this criticality and complexity, which goes far beyond a typical N-Tier architecture. Such preparation helps overcome internal resistance and ensures everyone understands, and can realize, long-term benefits.
Finally, integration and observability gaps can derail even the most well-intentioned modernization efforts. Legacy systems are often deeply embedded within the broader IT ecosystem, so updating core components can surface unforeseen integration challenges.
To counter this, we advocate for enhancing observability from day one, including baselining the performance of existing mainframe jobs and screen transactions; these metrics establish a benchmark to keep the modernized environment aligned with current service levels.
This early telemetry is paired with modern monitoring solutions and real-time dashboards that provide comprehensive insights into system behavior. Prioritizing API-first integration ensures seamless communication between legacy and new architectures, while automated testing at integration points minimizes disruption risk during the transition.
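As a rough, hypothetical illustration of baselining existing performance before migration, the sketch below computes median and 95th-percentile latencies per transaction from sampled timings; the sample figures and transaction names are invented for the example.

# Illustrative baselining sketch: summarize current latencies for legacy
# transactions so the modernized environment can later be compared against the
# same service levels. Sample data and names are made up for the example.
import statistics

def baseline(samples_ms):
    """Return per-transaction p50/p95 latency in milliseconds."""
    summary = {}
    for txn, samples in samples_ms.items():
        ordered = sorted(samples)
        p95_index = min(len(ordered) - 1, int(round(0.95 * (len(ordered) - 1))))
        summary[txn] = {
            "p50_ms": statistics.median(ordered),
            "p95_ms": ordered[p95_index],
        }
    return summary

print(baseline({
    "ACCT-INQUIRY": [120.0, 135.0, 150.0, 142.0, 128.0],
    "PAYMENT-POST": [310.0, 290.0, 305.0, 330.0, 298.0],
}))

Capturing numbers like these before any cutover gives the program an objective benchmark for "no worse than today" when the modernized platform goes live.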
The advent of AI
Generative AI is accelerating mainframe modernization by offering powerful opportunities to analyze legacy systems and streamline transformations, delivering greater agility and resilience. This shift is mirrored by modernization spending moving from defensive capital expenditure to growth-focused operational expenditure.
Success in this evolving landscape hinges on disciplined execution, continuous measurement, and transparent communication, rather than merely relying on tools. Furthermore, enhanced cloud platforms now provide flexible and secure migration paths, while heightened regulatory scrutiny of operational resilience has only increased the strategic importance of these efforts.
Here’s a modernization manifesto to bear in mind:
In conclusion, mainframe end-of-life is not an event to be feared, but a strategic imperative to be navigated with diligence and foresight. It’s a long-term commitment to ensuring our critical value streams remain adaptable and resilient.
OpenAI will be holding onto all of your conversations with ChatGPT and possibly sharing them with a lot of lawyers, even the ones you thought you deleted. That's the upshot of an order from the federal judge overseeing a lawsuit brought against OpenAI by The New York Times over copyright infringement. Judge Ona Wang upheld her earlier order to preserve all ChatGPT conversations for evidence after rejecting a motion by ChatGPT user Aidan Hunt, one of several from ChatGPT users asking her to rescind the order over privacy and other concerns.
Judge Wang told OpenAI to “indefinitely” preserve ChatGPT’s outputs since the Times pointed out that would be a way to tell if the chatbot has illegally recreated articles without paying the original publishers. But finding those examples means hanging onto every intimate, awkward, or just private communication anyone's had with the chatbot. Though what users write isn't part of the order, it's not hard to imagine working out who was conversing with ChatGPT about what personal topic based on what the AI wrote. In fact, the more personal the discussion, the easier it would probably be to identify the user.
Hunt pointed out that he had no warning that this might happen until he saw a report about the order in an online forum, and is now concerned that his conversations with ChatGPT might be disseminated, including “highly sensitive personal and commercial information.” He asked the judge to vacate the order or modify it to leave out especially private content, such as conversations conducted in private mode, or those in which medical or legal matters are discussed.
According to Hunt, the judge was overstepping her bounds with the order because “this case involves important, novel constitutional questions about the privacy rights incident to artificial intelligence usage – a rapidly developing area of law – and the ability of a magistrate [judge] to institute a nationwide mass surveillance program by means of a discovery order in a civil case.”
Judge Wang rejected his request because they aren't related to the copyright issue at hand. She emphasized that it's about preservation, not disclosure, and that it's hardly unique or uncommon for the courts to tell a private company to hold onto certain records for litigation. That’s technically correct, but, understandably, an everyday person using ChatGPT might not feel that way.
She also seemed to particularly dislike the mass surveillance accusation, quoting that section of Hunt's petition and slamming it with the legal language equivalent of a diss track. Judge Wang added a "[sic]" to the quote from Hunt's filing and a footnote pointing out that the petition "does not explain how a court’s document retention order that directs the preservation, segregation, and retention of certain privately held data by a private company for the limited purposes of litigation is, or could be, a “nationwide mass surveillance program.” It is not. The judiciary is not a law enforcement agency."
That 'sic burn' aside, there's still a chance the order will be rescinded or modified after OpenAI goes to court this week to push back against it as part of the larger paperwork battle around the lawsuit.
Deleted but not gone
Hunt's other concern is that, regardless of how this case goes, OpenAI will now have the ability to retain chats that users believed were deleted and could use them in the future. There are concerns over whether OpenAI will lean into protecting user privacy over legal expedience. OpenAI has so far argued in favor of that privacy and has asked the court for oral arguments to challenge the retention order that will take place this week. The company has said it wants to push back hard on behalf of its users. But in the meantime, your chat logs are in limbo.
Many may have felt that writing into ChatGPT is like talking to a friend who can keep a secret. Perhaps more will now understand that it still acts like a computer program, and the equivalent of your browser history and Google search terms are still in there. At the very least, hopefully, there will be more transparency. Even if it's the courts demanding that AI companies retain sensitive data, users should be notified by the companies. We shouldn't discover it by chance on a web forum.
And if OpenAI really wants to protect its users, it could start offering more granular controls: clear toggles for anonymous mode, stronger deletion guarantees, and alerts when conversations are being preserved for legal reasons. Until then, it might be wise to treat ChatGPT a bit less like a therapist and a bit more like a coworker who might be wearing a wire.
At the recent Display Week 2025 event, Chinese firm BOE showed off the first-ever 31.5-inch 8K monitor capable of running at 120Hz.
The CR3000 offers a contrast ratio of 8000:1, a color gamut of 99% DCI-P3, and also supports 240Hz in 4K mode.
BOE, which is the largest panel maker in the world and was also a sponsor of the show, told 8K Association it expects to begin mass production later in 2025, although details on pricing and final product integration are still unknown.
Other 8K panels on show
Display Week often serves as a glimpse into where display tech may be headed rather than where it currently is. That pattern continued this year with a number of other 8K panels on show.
TCL/CSOT brought an inkjet-printed OLED 8K TV panel, a project built partly from its acquisition of JOLED, and SEL surprised attendees with an 8.3-inch 8K LCD panel that offered over 1,000ppi, making it the sharpest full-color LCD shown to date.
As well as its 8K 120Hz beast, BOE had a number of other products on show. These included the latest version of its miniLED UB Cell 4.0 ADS Pro TVs, which aim to challenge OLED with deeper contrast and better efficiency, and an 85-inch 4K panel with an RGB backlight system running in a filterless mode that could one day reduce power usage and complexity, especially in 8K applications.
It also had a 3D display prototype with eye-tracking based on a 16K development. Although still early-stage, the image quality and parallax control impressed those who got to see it in action.
Still ahead of its time
BOE's CR3000 panel arrives at a time when the broader market is still catching up to high refresh 4K gaming, let alone 8K.
While PC gamers have begun to see mainstream GPUs offer stable 4K60 gameplay, pushing four times that resolution at double the refresh rate raises some difficult questions. Upscaling and frame generation may be more of a necessity than a feature if such a panel is to be usable for gaming or creative work.
While I can't help but be impressed by BOE's 8K 120Hz monitor, it feels like it’s ahead of its time. The hardware to drive it effectively doesn’t exist at scale, and most buyers likely aren’t ready for what would surely be a high-cost niche product.
8K monitors were expected to hit the mainstream a few years ago, but that didn’t happen. This latest panel might be technically impressive, but I for one am not convinced the world is ready for it.
Whether for aesthetic reasons or to cut down on screen time, having a TV in the bedroom isn't for everyone. I didn't factor a TV in when I designed my bedroom, as it wasn't worth sacrificing the space when I've already got one in my living room, but after a while, I found I missed having the option to curl up in bed and binge-watch my comfort shows on Netflix.
Having not always had the luxury of separate living spaces, I’d put a lot of work into curating my bedroom into a calming and visually pleasing environment, so the idea of sticking a big black rectangle in the middle wasn’t going to do my zen any favors. Therefore, I knew I had to think of an alternative solution that could cure my content cravings without taking up valuable space.
The concept of using a projector to watch shows in bed wasn’t new to me, as I’d racked up plenty of hours watching movies on the Anker Nebula Cosmos 4K SE. Sadly, though, as impressive as that projector is, it proved impractical for bedroom use as it was a bit big and loud for the shelf above my headboard and, as I’m yet to find a tripod that can handle its weight, it just wasn’t the bedfellow I was looking for.
Thankfully, I found the perfect alternative in the Anker Nebula Capsule 3 1080p Mini Google TV Projector, which has a list price of $529.99 / £499.99 / AU$1,599, so it doesn’t cost any more than a decent budget TV. The Nebula Capsule 3 uses the same Google TV operating system that I found so effortless to use with the Cosmos 4K SE, but this time in a conveniently compact package.
Below, you’ll find the reasons why I believe the Anker Nebula Capsule 3 1080p Mini Google TV Projector makes for an amazing alternative – and one reason why opting for a projector over a TV may not be the brightest move.
Highlights
Perfect placement isn't paramount
Finding space for a TV set can be tricky, but choices are far from limited when it comes to finding a home for the Anker Nebula Capsule 3.
Its dinky diameter of just 3.1 inches / 78mm makes it conveniently compact and easy to fit on shelves or tabletops, and it has a super convenient tripod mount thread on the base, so it’s easy to find a place for it even if surface space is limited.
And for those times when it isn’t possible to get the angle of the projection spot-on, the Nebula Capsule 3 will automatically adapt its settings to ensure it projects a well-focused image within the space provided, adjusting the keystone positions and avoiding any obstacles along the way.
I can go big and go home
The beauty of a projector like the Nebula Capsule 3 is that I can change the screen size to suit what I'm watching, and I can do so in a matter of moments.
This means it's super easy for me to go from watching TV on a 49-inch projection on the wall at the side of my bed to a projection of around 80 inches on my free-standing projector screen when I'm in the mood for some big-screen entertainment. All it takes is rotating the Capsule 3 by 90 degrees and waiting for the settings to auto-adapt to the new position.
Pleasantly portable projection
Whether you're staying at a friend's or going camping, the compact dimensions and light weight of 1.9lb / 850g combined with a built-in rechargeable battery make the Capsule 3 satisfyingly easy to pack up and take away.
These features can prove useful even if you don’t plan to take it away from home, especially if your bedroom is anything like mine, with its awkwardly located power outlets. While the 15,000 mAh battery only allows for about two and a half hours of screen time, the USB-C charging cable does mean that one of the best power banks could be used to stretch this duration a bit further.
Despite the space-saving and versatility on offer from the Anker Nebula Capsule 3 1080p Mini Google TV Projector, you’re going to be sacrificing deep blacks and the details in darker scenes if you opt for one over one of the best TVs.
This issue isn’t uncommon, even amongst some of the best projectors on the market, but it’s something to keep in mind if you want your shows to look picture-perfect when watching in the daytime without efficient blackout blinds.
With that being said, this hasn’t been a deal breaker for me personally, because as comfy as my bed is, it’s not my primary place for watching shows in the daytime. And during the times I’ve needed to curl up when it’s still light outside, I’ve learned that I can tolerate the picture looking a little washed out when the compromise is that I essentially have a pocket-sized 50-inch TV.
As AI continues to reshape how we work and live, the promise of regaining time is attracting growing interest.
New research from Lloyds Bank has claimed emerging technologies could help people reclaim up to 110 minutes of free time per day.
A focus on automating daily routines, such as chores, shopping, and travel, could help free up time, but the benefits appear skewed toward high earners. AI tools, including AI assistants, autonomous drones, and driverless vehicles, are framed as part of this shift toward a more efficient daily life, but these are not cheap.
AI tools free up time, but at a cost
The bank found that in the UK, 86% of adults say having more time is important, rising to 99% among those earning over £100,000.
While 60% of the wider population is open to using new technologies to save time, this jumps sharply among affluent individuals, with nearly all saying they are willing to adopt such tools.
“We know life is hectic, with work, family, and personal commitments all vying for attention,” said Adam Rainey, Director of Mass Affluent at Lloyds.
“But our research shows people are becoming more comfortable with using technology to handle daily tasks.”
The most time-consuming responsibilities, according to the study, are cleaning, cooking, and managing finances.
Almost half (47%) of respondents identified household chores as their primary time drain, while 31% pointed to financial admin.
AI is being promoted as the solution through smart home devices or personal AI agents. These tools promise to handle repetitive work.
Yet many of the best AI tools come with steep costs or require a level of digital skill that remains out of reach for some.
Banking apps continue to lead among accessible time-saving tech, with 48% of adults relying on them. However, the gap widens when it comes to advanced tools; 49% of high earners are now using AI assistants, and 92% agree that wealth enables more free time.
It's a compelling idea that could benefit everyone, but also one that raises the question: who has the means to work smarter?
As with the story of the Mexican fisherman, it’s worth asking whether we’re overengineering the pursuit of a simpler life some may already have, just without the premium subscription.