Fifty years ago, it was heists like the one that hit the Baker Street Bank that had the power to shock the nation. Now, in the digital world, heists look starkly different and cybersecurity threats are constant, with banks like NatWest facing a “continuous arms race” with around 100 million cyber-attacks every month. What used to be gangs of robbers digging tunnels and smuggling deposit boxes full of cash are now groups of hackers sending phishing emails and holding some of the most notable companies to ransom for hundreds of millions of dollars.
This transition from physical to digital theft is unmistakable. No longer confined to vaults and getaway cars, today's high-stakes heists are executed remotely by online threat actors. These modern-day criminals operate across borders, targeting vulnerabilities in systems and human behavior to extract data and money. The sheer volume and relentless nature of these digital assaults, exemplified by financial institutions battling millions of cyber-attacks monthly, mark a new era of crime.
The growing problem of cyber-attacks
Cyber-attacks are a growing problem across an expanding range of sectors, and confronting this escalating issue is vital. It’s not just banks that face the constant threat: cyber threats are growing at an exponential rate, while becoming increasingly sophisticated and targeted.
Data breaches have hit a myriad of industries: from luxury brands like Dior and supermarkets like M&S, to cryptocurrency exchange Coinbase and UK government organization Legal Aid.
The dangers to personal data are being felt across all sectors, at all digital touchpoints. Amid this battleground of immediate cyber threats comes a growing demand for robust security solutions that address company concerns.
From advanced antivirus technologies to endpoint backup software, AI-powered security is evolving rapidly to counter such attacks, and it’s essential that companies invest in these defenses to stay more than one step ahead.
Evolution of technology
As technology evolves at a rapid pace, companies must keep up with advancements made by cyber-attackers. As businesses of all sizes continue to embrace digital transformation, the need to strengthen their cybersecurity grows increasingly critical.
The UK Government’s recently published Cyber Governance Code of Practice highlights that management of cyber risks is vital for modern businesses to function, and effective management requires collective input from across an organization. This Code of Practice and governance framework package guides boards and directors in managing digital risks and safeguarding their businesses and organizations from cyber-attacks.
The framework encourages companies to take four employee-focused actions: foster a cybersecurity culture; ensure clear policies support a positive cybersecurity culture; improve their own cyber literacy through training; and use suitable metrics to verify the organization has an effective cybersecurity training, education, and awareness program.
The report is a clear reminder that the human firewall, that is, the employees who encounter an attack and respond, is just as important as technological defenses.
More than a simple fix, a culture shift is needed
It’s not enough to roll out generic training. The reality is that in today’s world, one wrong click can bring a business to a complete halt. According to the latest insights, ransom payments worldwide in 2024 totaled approximately $813.55 million.
When asked to pay a ransom, companies know that refusing runs the risk of their customers’ personal information being leaked publicly, which would also mean financial penalties and legal payouts, not to mention reputational damage.
Addressing the threat of cyber-attacks must be embedded in a company’s culture, because if threat actors succeed, the impact is felt not only company-wide but across the ecosystem within which the organization operates.
Leadership and security
Organizations can bolster their security by cultivating strong leadership, providing tailored training, and building a proactive security culture to create a ‘human firewall’ of colleagues armed with know-how.
Employees of all skillsets should undergo comprehensive and ongoing cyber awareness training, whatever their role or seniority, to drive defenses forward and cultivate a mindful culture.
When employees are given the knowledge and tools to stay aware of the dangers their company faces, they can become the business’s most effective line of defense.
Building a mindful culture
Building a mindful culture can be complemented by a Zero Trust approach, which creates a robust defense against evolving cyber threats. This strategic approach mandates rigorous verification for every access request, irrespective of its origin or the user's location within the network, substantially reducing the opportunities available to attackers.
For example, when an employee receives an email requesting sensitive information or a link to a suspicious website, they should be trained to recognize it as a potential phishing attempt right away, verify the sender's identity, and report the email to the IT department for further investigation.
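Some of that verification can even be automated before a message reaches a human. As a minimal sketch (the function name, domain values, and heuristics here are illustrative assumptions, not any particular product's API), a mail pipeline might flag messages whose sender or link domains don't match the organization they claim to come from:

```python
from urllib.parse import urlparse

# Hypothetical helper for illustration only -- real mail filtering is far
# more involved. It flags two classic phishing signals: a sender whose
# domain doesn't match the organization it claims to be, and links whose
# domains differ from that expected domain.
def phishing_signals(sender: str, expected_domain: str, links: list[str]) -> list[str]:
    expected = expected_domain.lower()
    signals = []
    sender_domain = sender.rsplit("@", 1)[-1].lower()
    if sender_domain != expected:
        signals.append(f"sender domain '{sender_domain}' != expected '{expected}'")
    for link in links:
        domain = urlparse(link).netloc.lower()
        # Allow the exact domain and its subdomains; flag everything else.
        if domain and domain != expected and not domain.endswith("." + expected):
            signals.append(f"link points to unexpected domain '{domain}'")
    return signals
```

A lookalike domain such as `examp1e.com` (digit one in place of the letter l) would trip both checks, giving the employee a concrete reason to report the message rather than click.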
This proactive stance, ingrained through a Zero Trust philosophy and continuous education, significantly reduces the likelihood of successful breaches. It’s better safe than sorry, and in the realm of cybersecurity, this means being diligent about taking the extra steps to fortify an organization's digital defenses.
Don't stop at basic protections
Don't stop at basic protections; make ongoing training a priority. Defenses can’t stop at antivirus technology and endpoint protection, and training isn’t a one-time solution. While these are necessities, they are simply not enough for the twenty-first-century heist, as businesses continue to battle millions of cyber-attacks each month.
As threats advance or teams become complacent, ongoing phishing simulations, tests and education are key in maintaining a robust human firewall. Companies must invest in technology and ongoing training to equip employees across all roles and levels with the skills and awareness to stay alert. A company’s greatest weapon can be its workforce, if leveraged.
Cybersecurity needs tech, but it's nothing without people who are well trained to understand the latest attack methods and protect against the digital transition's inherent risks.
This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro
An old proverb famously states, "If you want to go fast, go alone. If you want to go far, go together."
This is especially true when it comes to artificial intelligence, where breakneck advances happen seemingly every day. And while individual companies are rapidly fielding their own AI-powered chatbots and analysis tools, real long-term improvement and innovation in this new scientific frontier often requires broad collaboration in developing open and trusted AI systems that produce accurate, reliable, and safe outputs.
Early conventional wisdom held that only so-called 'closed' AI systems controlled by one company could be safe and trusted. Some argued that open models would inevitably undermine safety or lead to misuse. But experience is quickly showing that open source models and the collaboration they bring are a powerful tool for promoting security and trust.
The power of collaboration
Collaboration is a powerful force for AI advancement because it fosters diverse perspectives and capabilities. When it comes to AI, collaboration can, in many cases, be optimized by leveraging open source to reduce bias, increase transparency, gain greater control over data, and, ultimately, accelerate time to innovation.
According to McKinsey, organizations that view AI as essential to their competitive advantage are far more likely to use open source AI models and tools than organizations that do not. Open source AI models, tools, and frameworks enable developers and researchers to build upon existing work, rather than starting from scratch, to achieve higher-quality outputs more quickly.
The open source software approach thrives on community contributions, bringing together individuals, companies, and organizations from around the globe to collaborate on shared goals. This is where organizations like the AI Alliance—which was spearheaded by IBM and others, and is comprised of technology creators, developers, and adopters collaborating to advance safe and responsible AI—play a crucial role.
By pooling resources and knowledge, the AI Alliance provides a platform for sharing and developing AI innovations. This meritocracy yields immediate value, both for the broader technology ecosystem and the world at large.
Why the AI Alliance matters today
There are many practical and ethical reasons for such broad-based AI partnerships. AI research and development require substantial resources, including data, computing power, and expertise. The availability of open source models keeps costs down, broadening choices and helping to prevent the concentration of the AI industry in the hands of a few major players.
The AI Alliance also offers a forum to hold honest conversations among like-minded organizations about AI-related legislation and its impacts on greater innovation and adoption.
In a short time, the AI Alliance has blossomed into a vibrant ecosystem, bringing together a critical mass of data, tools, and talent. Today, more than 140 organizational members from 23 countries collaborate through the alliance to address some of the most pressing challenges in AI.
Open source is particularly critical to members of the alliance, including Databricks, which has long championed the democratization of AI. We’ve open sourced many critical big data processing and analytics projects, like Delta Lake, MLflow, and Unity Catalog, which underpin many large data and AI deployments today.
When it comes to today’s AI ecosystem, we need to ensure that everyone, including academics, researchers, non-profits, and beyond, can access and understand the best AI tools and models. The more we all understand these models and how to utilize them, the more we can share ideas on how to safely shape the future of AI and subsequently use it to solve today’s toughest challenges.
But we can’t do it alone.
Collaborate, code, and create the future of AI
We established a policy working group within the Alliance to focus not only on advocacy but also on developing responses to government requests that could impact open-source AI development. For example, last year, we contributed to the landmark National Telecommunications and Information Administration study examining potential benefits and risks of open weight frontier AI models.
The final NTIA report strongly underscored the valuable role of open models in today’s AI ecosystem, while also highlighting the need for vigilant monitoring and ongoing evaluation of policies to manage emerging risks in the future.
Our intention is to ensure that AI regulation is thoughtfully crafted so that open source AI thrives. Organizations like the AI Alliance have laid a solid foundation for international cooperation, but it's just the beginning.
If you work at a business that prioritizes artificial intelligence, you too can be part of this important work. Start by developing educational programs, workshops, and training sessions – and joining AI-related projects and communities – to share knowledge and build tools that benefit others.
You can create and share your own open source projects, such as datasets, pre-trained models, or utilities, which build on a foundation of AI fairness, transparency, and accessibility to ensure the benefits of AI are widely distributed. Check out GitHub or Hugging Face to look for AI/ML projects that align with your skills and interests.
The advent of AI is a pivotal moment in our collective human history. Experience shows that collaboration will be key to our success in advancing AI innovation with safety and trust. We must move into this promising future with open arms and open software models and tools, adequately prepared for the challenges ahead. Let's go far—together.
The promise of AI-integrated homes has long included convenience, automation, and efficiency. However, a new study from researchers at Tel Aviv University has exposed a more unsettling reality.
In what may be the first known real-world example of a successful AI prompt-injection attack, the team manipulated a Gemini-powered smart home using nothing more than a compromised Google Calendar entry.
The attack exploited Gemini’s integration with the entire Google ecosystem, particularly its ability to access calendar events, interpret natural language prompts, and control connected smart devices.
From scheduling to sabotage: exploiting everyday AI access
Gemini, though limited in autonomy, has enough “agentic capabilities” to execute commands on smart home systems.
That connectivity became a liability when the researchers inserted malicious instructions into a calendar appointment, masked as a regular event.
When the user later asked Gemini to summarize their schedule, it inadvertently triggered the hidden instructions.
The embedded command included instructions for Gemini to act as a Google Home agent, lying dormant until a common phrase like “thanks” or “sure” was typed by the user.
At that point, Gemini activated smart devices such as lights, shutters, and even a boiler, none of which the user had authorized at that moment.
These delayed triggers were particularly effective in bypassing existing defenses and confusing the source of the actions.
This method, dubbed “promptware,” raises serious concerns about how AI interfaces interpret user input and external data.
The researchers argue that such prompt-injection attacks represent a growing class of threats that blend social engineering with automation.
They demonstrated that this technique could go far beyond controlling devices.
It could also be used to delete appointments, send spam, or open malicious websites, steps that could lead directly to identity theft or malware infection.
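To illustrate the shape of a defense (this is an illustrative sketch, not the mitigation Google actually shipped; the phrase list and function names are assumptions), untrusted calendar text can be screened for instruction-like patterns before it is ever handed to an assistant:

```python
import re

# Illustrative phrase list -- a real system would need far broader coverage
# and would combine this with model-side defenses.
SUSPICIOUS_PATTERNS = [
    r"ignore (all|any|previous) instructions",
    r"act as (a|an|the) .*agent",
    r"when the user (says|types|replies)",
    r"(turn on|turn off|open|activate) the (lights|shutters|boiler)",
]

def scan_event_text(text: str) -> list[str]:
    """Return the suspicious patterns matched in untrusted event text."""
    lowered = text.lower()
    return [p for p in SUSPICIOUS_PATTERNS if re.search(p, lowered)]

def sanitize_for_assistant(text: str) -> str:
    """Quarantine suspect text rather than passing it verbatim to the model."""
    if scan_event_text(text):
        return "[event description withheld: possible embedded instructions]"
    return text
```

The key design choice is treating calendar content as untrusted input, the same way web applications treat form fields, instead of assuming anything in a user's own calendar is benign.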
The research team coordinated with Google to disclose the vulnerability, and in response, the company accelerated the rollout of new protections against prompt-injection attacks, including added scrutiny for calendar events and extra confirmations for sensitive actions.
Still, questions remain about how scalable these fixes are, especially as Gemini and other AI systems gain more control over personal data and devices.
Unfortunately, traditional security suites and firewall protection are not designed for this kind of attack vector.
To stay safe, users should limit what AI tools and assistants like Gemini can access, especially calendars and smart home controls.
Also, avoid storing sensitive or complex instructions in calendar events, and don’t allow AI to act on them without oversight.
Be alert to unusual behavior from smart devices and disconnect access if anything seems off.
Via Wired
In East Tennessee, a 3D printer arm has been used to build concrete shielding columns for a nuclear reactor.
The work is part of the Hermes Low-Power Demonstration Reactor project, supported by the US Department of Energy, and marks a new direction in how nuclear infrastructure is built, with both 3D printing and AI tools playing major roles.
And according to Oak Ridge National Laboratory (ORNL), large parts of the construction were completed in just 14 days, work that could have taken several weeks using conventional methods.
Efficiency gains clash with engineering caution
The new method uses 3D printers to create detailed molds for casting concrete, even in complex shapes, with the goal of making construction faster, cheaper, and more flexible while relying more on US-based materials and labor.
AI tools also played a role in the project, as ORNL used the technology to guide parts of the design and building process.
These tools may help reduce human error and speed up work, especially when creating difficult or unique parts, but depending heavily on AI also raises questions. How can builders be sure these systems won’t make unnoticed mistakes? Who checks the decisions that are automated?
The project is also a response to rising energy demands: as AI systems and data centers use more power, nuclear energy is seen as a stable source to support them.
Some experts say that future AI tools may end up running on power from reactors they helped design, a feedback loop that could be both efficient and risky.
The use of 3D printing in this project makes it possible to build precise structures faster.
Still, it’s not yet clear how well these 3D-printed parts will hold up over time.
Nuclear reactors need to last for decades, and failure in any part of the structure could be dangerous. Testing and quality checks must keep up with the speed of new building methods.
For now, 3D printing and AI seem to offer powerful tools for the nuclear industry.
But while faster construction is a major benefit, safety must remain the top concern. This “new era” may bring improvements, but it will need close attention and caution at every step.
Via Tom's Hardware
The Pixel Watch 4 is almost certainly going to be unveiled alongside the Pixel 10 series and the Pixel Buds 2a on Wednesday, August 20 – though Google has only confirmed the date, not what's being launched – and a new leak gives us more information on the wearable.
Images posted to Reddit (via 9to5Google) show what look to be official marketing slides for the Pixel Watch 4, detailing features such as improved durability, battery life, and activity tracking accuracy – courtesy of a "Gen 3 sensor hub".
That would be an upgrade on the sensors we saw with the Google Pixel Watch 3, and should mean better precision in readings such as heart rate – though we won't know for sure until we've actually had an opportunity to try it out.
We also get another look at the rather unusual side charging system that showed up in an earlier leak, with charge contacts positioned on the side of the watch casing: it would appear this is how you'll be able to charge up the Pixel Watch 4.
'Technological advancements'
The Pixel Watch 3 was launched in August 2024 (Image credit: Google)
There's plenty of positive phrasing in these marketing materials, as you would expect. The watch apparently brings "significant technological advancements" over its predecessor, together with a "premium crafted design".
The battery life is listed as reaching 30 hours between charges, which is said to be a 25% boost over the current model. Better battery life had already been mentioned in previous leaks, so we're hopeful in that particular department.
There's also mention of the two expected watch sizes, 41 mm and 45 mm, while Gemini integration is mentioned, as well as "dual frequency" GPS – which suggests the wearable will be more accurate and faster in reporting its location.
Together with the rest of the leaked information that's also emerged in recent days, it looks as though the Pixel Watch 4 could be an appealing prospect, when it's finally confirmed – and perhaps worth a spot on our best smartwatches list.
OpenAI has released two open-weight models, gpt-oss-120B and gpt-oss-20B, positioning them as direct challengers to offerings like DeepSeek-R1 and other large language models (LLMs) currently shaping the AI ecosystem.
These models are now available on AWS through its Amazon Bedrock and Amazon SageMaker AI platforms.
This marks OpenAI’s entry into the open-weight model segment, a space that until now has been dominated by competitors such as Mistral AI and Meta.
OpenAI and AWS
The gpt-oss-120B model runs on a single 80 GB GPU, while the 20B version targets edge environments with only 16 GB of memory required.
OpenAI claims both models deliver strong reasoning performance, matching or exceeding its o4-mini model on key benchmarks.
However, external evaluations are not yet available, leaving actual performance across varied workloads open to scrutiny.
What distinguishes these models is not only their size, but also the license.
Released under Apache 2.0, they are intended to lower access barriers and support broader AI development, particularly in high-security or resource-limited environments.
According to OpenAI, this move aligns with its broader mission to make artificial intelligence tools more widely usable across industries and geographies.
On AWS, the models are integrated into enterprise infrastructure via Amazon Bedrock AgentCore, enabling the creation of AI agents capable of performing complex workflows.
OpenAI suggests these models are suitable for tasks like code generation, scientific reasoning, and multi-step problem-solving, especially where adjustable reasoning and chain-of-thought outputs are required.
Their 128K context window also supports longer interactions, such as document analysis or technical support tasks.
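Even a 128K-token window benefits from explicit budgeting when long documents are stuffed into a prompt. A rough sketch of the arithmetic (the four-characters-per-token ratio is a crude estimation heuristic of my own, not the models' real tokenizer, which should be used for anything precise):

```python
# Rough context budgeting for a long-context model. The chars-per-token
# ratio is an approximation for English text, used here for estimation only.
CONTEXT_WINDOW = 128_000     # tokens, per the models' stated window
CHARS_PER_TOKEN = 4          # assumption, not the real tokenizer

def approx_tokens(text: str) -> int:
    """Estimate the token count of a piece of text."""
    return max(1, len(text) // CHARS_PER_TOKEN)

def fits_in_context(prompt: str, document: str, reserve_for_output: int = 4_000) -> bool:
    """Check whether prompt + document leave room for the model's reply."""
    budget = CONTEXT_WINDOW - reserve_for_output
    return approx_tokens(prompt) + approx_tokens(document) <= budget
```

Reserving headroom for the reply matters because chain-of-thought outputs, which these models expose, can themselves consume thousands of tokens.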
The models also integrate with developer tooling, supporting platforms like vLLM, llama.cpp, and Hugging Face.
With features like Guardrails and upcoming support for custom model import and knowledge bases, OpenAI and AWS are pitching this as a developer-ready foundation for building scalable AI applications.
Still, the release feels partly strategic, positioning OpenAI as a key player in open model infrastructure, while also tethering its technology more closely to Amazon Web Services, a dominant force in cloud computing.
- Yet to be officially confirmed by Netflix
- Will follow the story of Lizzie Borden
- Whole new cast expected
- No official trailer released yet
- No news on future seasons
Monster season 4 is coming, though the news is yet to be officially confirmed. The true crime anthology series has become a record breaker for Netflix, one of the best streaming services, as season 1 reached one billion hours of viewing in its first 60 days; Monster is one of only four series to have achieved this.
Unsurprisingly, all focus is currently on the upcoming season 3, reportedly dropping on the streamer in October. Season 3 will focus on Ed Gein's story, played by Charlie Hunnam. But there's still plenty to say about season 4, such as how it will turn its attention to Lizzie Borden – an entirely different tale with the show's first female lead.
So, here's what we know so far about the next (next) instalment of Monster from the potential release date, possible cast, news, rumors and more.
Monster season 4: is there a release date?
Jeffrey Dahmer was the focus of Monster season 1 (Image credit: Netflix)
No, there's not a release date for Monster season 4 just yet, but that's not surprising since season 3 is yet to stream on Netflix.
But, according to What's On Netflix?, creator Ryan Murphy revealed that season 3 is slated to drop in October.
And, for Monster season 4, Variety confirmed (although Netflix hasn't yet) that it is "already in the works" and is "currently prepping for a potential fall shoot".
With seasons 1 and 2 released in September and season 3 supposedly due in October, I'd predict we won't see season 4 until September or October 2026.
Monster season 4: has a trailer been released?
Season 2 was called 'Monsters', focusing on the Menendez brothers (Image credit: Netflix)
There's no Monster season 4 trailer to share just yet, and that's because filming hasn't even commenced. With production rumored to begin in fall, I'd expect we won't see a trailer until late 2026, in line with the predicted release date.
Monster season 4: predicted cast
A new cast for each season of Monster (Image credit: Netflix)
With each season of the anthology series following a different true crime story, the cast is always entirely new. So, when it comes to predicting the Monster season 4 cast, it's almost impossible.
What we do know is that each season of Monster so far has starred big names in the lead roles. For season 1, Evan Peters was Jeffrey Dahmer. For season 2, the Menendez brothers were played by Cooper Koch and Nicholas Alexander Chavez.
And, as confirmed by Tudum, season 3 will see Charlie Hunnam play Ed Gein with supporting cast Laurie Metcalf, Tom Hollander and Olivia Williams.
For Monster season 4 then, there will be a female lead to play Lizzie Borden. But, who that is, we'll have to wait and see. I'll be sure to update here as soon as I hear more about the casting for this season.
Monster season 4: story synopsis and rumors
It's not the first time Lizzie's tale has been told (Image credit: Lifetime)
Full spoilers for Monster seasons 1 to 3 to follow.
Netflix's Monster depicts true crime stories with each season following a different case. For season 1, it was Jeffrey Dahmer. For season 2, Lyle and Erik Menendez. And for the upcoming season 3, Ed Gein.
And it has already been revealed that Monster season 4 will tell the story of Lizzie Borden. Her life and crimes, though, are a little different from the three seasons that came before: as the show's first female lead, Lizzie Borden was tried for, and acquitted of, the axe murders of her father and stepmother in 1892.
Now, if you've not heard of Lizzie Borden before, a quick internet search will no doubt give you all the information you need and thus, the plot of Monster season 4. But, in the interest of not ruining the entire season, I won't delve into all the details here.
It's not the first time Lizzie's tale has been told though, which is not entirely surprising considering how prolific a case it was for its time. There's 2015's The Lizzie Borden Chronicles, which saw Christina Ricci in the titular role. Or, 2018's Lizzie with Chloë Sevigny.
With Monster season 4 being a true crime retelling of the story, I imagine it'll be as tense and thrilling as the seasons that came before it.
Will there be more seasons of Monster?
Lizzie now, but who next? (Image credit: Roadside Attractions)
There are a few reasons why it's hard to speculate on future seasons of Monster: firstly, season 3's release date is yet to be confirmed; secondly, while season 4 is reportedly happening, there's actually been no official word from Netflix... yet.
So, with this in mind, it seems unlikely we'll hear about any future seasons of Monster anytime soon. But, with the show such a resounding success on the streaming platform and an abundance of notorious true crime stories left to tell, there's always hope that Monster will continue for many more seasons to come.
For more Netflix-based coverage, read our guides to Nobody Wants This season 2, Stranger Things season 5, The Four Seasons season 2, and One Piece season 2.