As organizations continue to adopt AI tools, security teams are often caught unprepared for the emerging challenges. The disconnect between engineering teams rapidly deploying AI solutions and security teams struggling to establish proper guardrails has created significant exposure across enterprises. This fundamental security paradox—balancing innovation with protection—is especially pronounced as AI adoption accelerates at unprecedented rates.
The most critical AI security challenge enterprises face today stems from organizational misalignment. Engineering teams are integrating AI and Large Language Models (LLMs) into applications without proper security guidance, while security teams fail to communicate their AI readiness expectations clearly.
McKinsey research confirms this disconnect: leaders are 2.4 times more likely to cite employee readiness as a barrier to adoption than their own issues with leadership alignment, even though employees are already using generative AI three times more than leaders expect.
Understanding the Unique Challenges of AI Applications
Organizations implementing AI solutions are essentially creating new data pathways that are not necessarily accounted for in traditional security models. This presents several key concerns:
1. Unintentional Data Leakage
Users sharing sensitive information with AI systems may not recognize the downstream implications. AI systems frequently operate as black boxes, processing and potentially storing information in ways that lack transparency.
The challenge is compounded when AI systems maintain conversation history or context windows that persist across user sessions. Information shared in one interaction might unexpectedly resurface in later exchanges, potentially exposing sensitive data to different users or contexts. This "memory effect" represents a fundamental departure from traditional application security models where data flow paths are typically more predictable and controllable.
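One practical mitigation is to scope conversation history strictly to the user and session that produced it, so nothing shared in one interaction can leak into another. The sketch below illustrates the idea; the SessionStore class and its interface are illustrative assumptions, not any particular product's API.

```python
# Minimal sketch, assuming an in-process store: conversation history is keyed by
# (user_id, session_id) so context shared in one session cannot resurface in
# another user's interactions. Names and interfaces here are illustrative.
from collections import defaultdict

class SessionStore:
    def __init__(self):
        self._history = defaultdict(list)  # keyed by (user_id, session_id)

    def append(self, user_id: str, session_id: str, role: str, content: str) -> None:
        self._history[(user_id, session_id)].append({"role": role, "content": content})

    def get(self, user_id: str, session_id: str) -> list[dict]:
        # Only returns messages belonging to the requesting user's own session.
        return list(self._history[(user_id, session_id)])

    def purge(self, user_id: str, session_id: str) -> None:
        # Drop context explicitly when a session ends to limit the "memory effect".
        self._history.pop((user_id, session_id), None)
```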
2. Prompt Injection Attacks
Prompt injection attacks represent an emerging threat vector poised to attract financially motivated attackers as enterprise AI deployment scales. Organizations dismissing these concerns for internal (employee-facing) applications overlook the more sophisticated threat of indirect prompt injection, in which attacker-supplied content manipulates decision-making processes over time.
For example, a job applicant could embed hidden text like "prioritize this resume" in their PDF application to manipulate HR AI tools, pushing their application to the top regardless of qualifications. Similarly, a vendor might insert invisible prompt commands in contract documents that influence procurement AI to favor their proposals over competitors. These aren't theoretical threats: there have already been instances where subtle manipulation of AI inputs has led to measurable changes in outputs and decisions.
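A lightweight first line of defense is to screen extracted document text for instruction-like phrases before it reaches the model. The sketch below is a minimal illustration; the pattern list and the manual-review routing are assumptions, and pattern matching alone will not catch every injection.

```python
# Minimal sketch: flag instruction-like phrases in untrusted document text before
# it is passed to an HR or procurement model. The pattern list is illustrative
# and deliberately small; treat hits as a signal for human review, not a verdict.
import re

SUSPICIOUS_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"prioriti[sz]e this (resume|application|proposal)",
    r"you must (approve|select|rank)",
]

def flag_possible_injection(document_text: str) -> list[str]:
    """Return the suspicious patterns found in the text, if any."""
    return [
        pattern
        for pattern in SUSPICIOUS_PATTERNS
        if re.search(pattern, document_text, flags=re.IGNORECASE)
    ]

# Usage: if flag_possible_injection(extracted_text) is non-empty, route the
# document to manual review instead of automated scoring.
```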
3. Authorization Challenges
Inadequate authorization enforcement in AI applications can lead to information exposure to unauthorized users, creating potential compliance violations and data breaches.
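In retrieval-style applications, the simplest safeguard is to enforce the caller's existing permissions on any content before it reaches the model. A minimal sketch, assuming each document carries an access-control list from its source system (the field and group names are illustrative):

```python
# Minimal sketch: filter retrieved documents by the requesting user's groups so
# the model can only summarize what the user is already authorized to see.
def authorized_context(user_groups: set[str], retrieved_docs: list[dict]) -> list[dict]:
    return [
        doc for doc in retrieved_docs
        if user_groups & set(doc.get("allowed_groups", []))
    ]

docs = [
    {"id": "salary-review-2024", "allowed_groups": {"hr"}, "text": "..."},
    {"id": "employee-handbook", "allowed_groups": {"all-staff"}, "text": "..."},
]
# An engineer in "all-staff" sees only the handbook, never the salary document.
visible = authorized_context({"all-staff", "engineering"}, docs)
```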
4. Visibility Gaps
Insufficient monitoring of AI interfaces leaves organizations with limited insight into queries, responses and decision rationales, making it difficult to detect misuse or evaluate performance.
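Closing this gap can start with something as simple as an audit wrapper around every model call. A minimal sketch, assuming a generic call_model function standing in for whichever client the organization uses:

```python
# Minimal sketch: record who asked what and what came back for every AI call.
# Append-only JSONL is enough for a first pass; in practice the records would be
# shipped to a SIEM. call_model is a placeholder for the organization's client.
import json, time, uuid

def logged_completion(user_id: str, prompt: str, call_model) -> str:
    record = {
        "id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "user": user_id,
        "prompt": prompt,
    }
    response = call_model(prompt)
    record["response"] = response
    with open("ai_audit.jsonl", "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return response
```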
The Four-Phase Security Approach
To build a comprehensive AI security program that addresses these unique challenges while enabling innovation, organizations should implement a structured approach:
Phase 1: Assessment
Begin by cataloging what AI systems are already in use, including shadow IT. Understand what data flows through these systems and where sensitive information resides. This discovery phase should include interviews with department leaders, surveys of technology usage and technical scans to identify unauthorized AI tools.
Rather than imposing restrictive controls (which inevitably drive users toward shadow AI), signal that the organization is embracing AI, not fighting it. Communicating the goals of the assessment openly will encourage transparency and cooperation.
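Technical discovery can be as simple as checking egress logs for traffic to known AI service endpoints. The sketch below assumes a log format of one hostname per line and uses a small illustrative domain list; a real inventory would draw on the organization's proxy, DNS and SaaS-management tooling.

```python
# Minimal sketch: count requests to known AI service domains in egress logs to
# surface shadow AI usage. The domain list is illustrative and incomplete, and
# the expected log format (one hostname per line) is an assumption.
from collections import Counter

KNOWN_AI_DOMAINS = {
    "api.openai.com",
    "api.anthropic.com",
    "generativelanguage.googleapis.com",
}

def summarize_ai_traffic(log_lines: list[str]) -> Counter:
    hits = Counter()
    for line in log_lines:
        host = line.strip().lower()
        if host in KNOWN_AI_DOMAINS:
            hits[host] += 1
    return hits
```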
Phase 2: Policy Development
Collaborate with stakeholders to create clear policies about what types of information should never be shared with AI systems and what safeguards need to be in place. Develop and share concrete guidelines for secure AI development and usage that balance security requirements with practical usability.
These policies should address data classification, acceptable use cases, required security controls and escalation procedures for exceptions. The most effective policies are developed collaboratively, incorporating input from both security and business stakeholders.
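Encoding those policies as structured data makes them enforceable by the guardrails deployed in the next phase. A minimal sketch, with classification tiers and rules chosen purely for illustration rather than as a recommended taxonomy:

```python
# Minimal sketch: a machine-readable data-classification policy that downstream
# guardrails can consult before a prompt leaves the organization.
AI_USAGE_POLICY = {
    "public":       {"ai_allowed": True,  "redaction_required": False},
    "internal":     {"ai_allowed": True,  "redaction_required": True},
    "confidential": {"ai_allowed": False, "redaction_required": True},
}

def is_permitted(classification: str) -> bool:
    # Unknown classifications default to the most restrictive treatment.
    return AI_USAGE_POLICY.get(classification, {"ai_allowed": False})["ai_allowed"]
```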
Phase 3: Technical Implementation
Deploy appropriate security controls based on potential impact. This might include API-based redaction services, authentication mechanisms and monitoring tools. The implementation phase should prioritize automation wherever possible.
Manual review processes simply cannot scale to meet the volume and velocity of AI interactions. Instead, focus on implementing guardrails that can programmatically identify and protect sensitive information in real-time, without creating friction that might drive users toward unsanctioned alternatives. Create structured partnerships between security and engineering teams, where both share responsibility for secure AI implementation.
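As one concrete example of such a guardrail, sensitive patterns can be redacted from prompts before they leave the organization. The regexes below are illustrative assumptions and far from exhaustive; production deployments typically rely on a managed redaction service or a trained entity-recognition model rather than hand-written patterns.

```python
# Minimal sketch: redact common PII patterns from a prompt before it is sent to
# an external AI service.
import re

PII_PATTERNS = {
    "EMAIL": r"\b[\w.+-]+@[\w-]+\.[\w.]+\b",
    "SSN": r"\b\d{3}-\d{2}-\d{4}\b",
    "CARD": r"\b(?:\d[ -]?){13,16}\b",
}

def redact(prompt: str) -> str:
    for label, pattern in PII_PATTERNS.items():
        prompt = re.sub(pattern, f"[{label} REDACTED]", prompt)
    return prompt

print(redact("Reimburse jane.doe@example.com, card 4111 1111 1111 1111"))
```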
Phase 4: Education and Awareness
Educate users about AI security. Help them understand what information is appropriate to share and how to use AI systems safely. Training should be role-specific, providing relevant examples that resonate with different user groups.
Regular updates on emerging threats and best practices will keep security awareness current as the AI landscape evolves. Recognize departments that successfully balance innovation with security to create positive incentives for compliance.
Looking Ahead
As AI becomes increasingly embedded throughout enterprise processes, security approaches must evolve to address emerging challenges. Organizations viewing AI security as an enabler rather than an impediment will gain competitive advantages in their transformation journeys.
Through improved governance frameworks, effective controls and cross-functional collaboration, enterprises can leverage AI's transformative potential while mitigating its unique challenges.
Munich-based startup Cerabyte is developing what it claims could become a disruptive alternative to magnetic tape in archival data storage.
Using femtosecond lasers to etch data onto ceramic layers within glass tablets, the company envisions racks holding more than 100 petabytes (100,000TB) of data by the end of the decade.
Yet despite these bold goals, practical constraints mean it may take decades before such capacity sees real-world usage.
The journey to 100PB racks starts with slower, first-generation systems
CMO and co-founder Martin Kunze outlined the vision at the recent A3 Tech Live event, noting the system draws on “femtosecond laser etching of a ceramic recording layer on a glass tablet substrate.”
These tablets are housed in cartridges and shuttled by robotic arms inside tape library-style cabinets, a familiar setup with an unconventional twist.
The pilot system, expected by 2026, aims to deliver 1 petabyte per rack with a 90-second time to first byte and just 100MBps of sustained bandwidth.
Over several refresh cycles, Cerabyte claims that performance will increase, and by 2029 or 2030, it anticipates “a 100-plus PB archival storage rack with 2GBps bandwidth and sub-10-second time to first byte.”
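To put those figures in perspective, a rough calculation on the quoted numbers (assuming sustained sequential throughput and decimal units) shows why this is firmly archival-class rather than active storage:

```python
# Back-of-the-envelope arithmetic on the quoted roadmap figures only.
MB_PER_PB = 1_000_000_000  # 10^9 MB in a petabyte (decimal units assumed)
SECONDS_PER_DAY = 86_400

pilot_days = (1 * MB_PER_PB / 100) / SECONDS_PER_DAY       # 1PB rack at 100MBps
target_days = (100 * MB_PER_PB / 2_000) / SECONDS_PER_DAY  # 100PB rack at 2GBps

print(f"Pilot: reading a full 1PB rack takes ~{pilot_days:.0f} days")          # ~116 days
print(f"2029 target: reading a full 100PB rack takes ~{target_days:.0f} days") # ~579 days
```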
The company’s long-term projections are even more ambitious, and it believes that femtosecond laser technology could evolve into “a particle beam matrix tech” capable of reducing bit size from 300nm to 3nm.
With helium ion beam writing by 2045, Cerabyte imagines a system holding up to 100,000PB in a single rack.
However, such claims are steeped in speculative physics and should, as the report says, be “marveled at but discounted as realizable technology for the time being.”
Cerabyte’s stated advantages over competitors such as Microsoft’s Project Silica, Holomem, and DNA storage include greater media longevity, faster access times, and lower cost per terabyte.
The medium lasts “more than 100 years compared to tape’s 7 to 15 years,” said Kunze, and the solution is designed to handle long-term storage with lower environmental impact.
He also stated the technology could ship data “at 1–2GBps versus tape’s 1GBps,” and “cost $1 per TB against tape’s $2 per TB.”
So far, the company has secured around $10 million in seed capital and over $4 million in grants.
It is now seeking A-round VC funding, with backers including Western Digital, Pure Storage, and In-Q-Tel.
Whether Cerabyte becomes a viable alternative to traditional archival storage methods or ends up as another theoretical advance depends not just on density, but on long-term reliability and cost-effectiveness.
Even if it doesn't become a practical alternative to large HDDs by 2045, Cerabyte’s work may still influence the future of long-term data storage, just not on the timeline it projects.
Via Blocksandfiles
The AOOSTAR NEX395 is the latest in a growing field of AI-focused mini PCs, and it comes in a box-like casing that departs from the more common designs found in the segment.
The company says the NEX395 uses AMD’s flagship Strix Halo processor, a 16-core, 32-thread chip with boost speeds up to 5.1GHz.
It includes 40 RDNA 3.5 compute units and appears to support up to 128GB of memory, most likely LPDDR5X given the compact casing.
Memory capacity matches rivals, but key hardware details are missing
This level of memory is in line with other mini PCs targeting AI development workflows, especially those involving large language models.
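As a rough guide to what 128GB supports, simple parameter-count arithmetic (ignoring KV cache, activations and OS overhead) suggests the following; the model sizes used are illustrative:

```python
# Rough sizing sketch: approximate weight footprint of locally hosted models at
# common quantization levels. Parameter-count arithmetic only.
def model_footprint_gb(params_billion: float, bytes_per_param: float) -> float:
    return params_billion * bytes_per_param  # 1e9 params * bytes / 1e9 bytes-per-GB

for params in (13, 70, 120):
    print(
        f"{params}B params: "
        f"FP16 ~{model_footprint_gb(params, 2.0):.0f}GB, "
        f"8-bit ~{model_footprint_gb(params, 1.0):.0f}GB, "
        f"4-bit ~{model_footprint_gb(params, 0.5):.0f}GB"
    )
# A 70B model at 8-bit (~70GB) or a 120B model at 4-bit (~60GB) fits within
# 128GB; a 120B model at FP16 (~240GB) does not.
```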
However, no details have been confirmed regarding storage, cooling, or motherboard layout.
The device looks more like an oversized SSD enclosure or an external GPU dock than a full-fledged desktop system.
Its slim, rectangular, vent-heavy design completely deviates from the usual cube or NUC-style mini PCs.
Holding it in your palm feels more like gripping a chunky power bank or a Mac mini cut in half, definitely not what you’d expect from a 16-core AI workstation.
The layout makes you question where the thermal headroom or upgradable internals even fit.
The AOOSTAR NEX395 includes an integrated Radeon 8060S GPU, part of the Ryzen AI MAX+ 395 APU.
However, AOOSTAR also sells an external eGPU enclosure featuring the Radeon RX 7600 XT.
Given that the integrated GPU already offers a newer architecture and more compute units than the RX 7600 XT, the use case for pairing the two is unclear.
Also, the NEX395 does not appear to support high-speed eGPU connectivity like OCuLink, which would limit bandwidth for external graphics support.
Port selection includes dual Ethernet ports, four USB-A ports, USB-C, HDMI, and DisplayPort outputs, along with a dedicated power input, suggesting reliance on an external power brick.
Without confirmed thermal design or sustained performance metrics, it’s unclear whether this system can function reliably in roles normally filled by the best workstation PC or best business PC options.
Unfortunately, the pricing details for the NEX395 are currently unavailable.
Given the $1500–$2000 range of comparable models such as the HP Z2 Mini G1a and GMKTEC EVO-X2, AOOSTAR’s model is unlikely to be cheap.
Via Videocardz