As businesses realized the potential of artificial intelligence (AI), the race began to incorporate machine learning operations (MLOps) into their commercial strategies. But integrating machine learning (ML) into the real world proved challenging, and the vast gap between development and deployment was made clear. In fact, research from Gartner tells us 85% of AI and ML projects fail to reach production.
In this piece, we’ll discuss the importance of blending DevOps best practices with MLOps, bridging the gap between traditional software development and ML to enhance an enterprise’s competitive edge and improve decision-making with data-driven insights. We’ll expose the challenges of separate DevOps and MLOps pipelines and outline a case for integration.
Challenges of Separate Pipelines

Traditionally, DevOps and MLOps teams operate with separate workflows, tools, and objectives. Unfortunately, this trend of maintaining distinct DevOps and MLOps pipelines leads to numerous inefficiencies and redundancies that negatively impact software delivery.
1. Inefficiencies in Workflow Integration

DevOps pipelines are designed to optimize the software development lifecycle (SDLC), focusing on continuous integration, continuous delivery (CI/CD), and operational reliability.
While there are certainly overlaps between the traditional SDLC and that of model development, MLOps pipelines involve unique stages like data preprocessing, model training, experimentation, and deployment, which require specialized tools and workflows. This distinct separation creates bottlenecks when integrating ML models into traditional software applications.
For example, data scientists may work on Jupyter notebooks, while software engineers use CI/CD tools like Jenkins or GitLab CI. Integrating ML models into the overall application often requires a manual and error-prone process, as models need to be converted, validated, and deployed in a manner that fits within the existing DevOps framework.
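A unified pipeline can turn that manual handoff into an automated gate. The sketch below is illustrative Python, not a real tool's API: the sidecar-metadata convention (`model.pkl` plus `model.json`), the required fields, and the accuracy threshold are all assumptions standing in for whatever contract a team agrees on. It shows the kind of check a CI job could run before promoting a model artifact:

```python
import json
import pickle
from pathlib import Path

def validate_model_artifact(model_path: str, min_accuracy: float = 0.9) -> dict:
    """Illustrative CI gate: load a serialized model and its metadata sidecar,
    and fail the pipeline if required fields or quality bars are missing."""
    model_file = Path(model_path)
    # Assumed convention: model.pkl ships with a model.json metadata sidecar.
    meta_file = model_file.with_suffix(".json")

    if not model_file.exists() or not meta_file.exists():
        raise FileNotFoundError("model artifact or metadata sidecar missing")

    metadata = json.loads(meta_file.read_text())
    for field in ("version", "training_data_hash", "accuracy"):
        if field not in metadata:
            raise ValueError(f"metadata missing required field: {field}")

    if metadata["accuracy"] < min_accuracy:
        raise ValueError(f"accuracy {metadata['accuracy']} below gate {min_accuracy}")

    # Smoke test: the artifact must at least deserialize.
    with model_file.open("rb") as fh:
        pickle.load(fh)

    return metadata
```

Run as a pipeline step, a check like this fails the build the same way a broken unit test would, so a model that can't be loaded or doesn't meet its quality bar never reaches the deployment stage.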
2. Redundancies in Tooling and Resources

DevOps and MLOps have similar automation, versioning, and deployment goals, but they rely on separate tools and processes. DevOps commonly leverages tools such as Docker, Kubernetes, and Terraform, while MLOps may use ML-specific tools like MLflow, Kubeflow, and TensorFlow Serving.
This lack of unified tooling means teams often duplicate efforts to achieve the same outcomes.
For instance, versioning in DevOps is typically done using source control systems like Git, while MLOps may use additional versioning for datasets and models. This redundancy leads to unnecessary overhead in terms of infrastructure, management, and cost, as both teams need to maintain different systems for essentially similar purposes—version control, reproducibility, and tracking.
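To make the overlap concrete, here is a minimal sketch (plain Python, standard library only; the record layout is an assumption, not any particular tool's format) of the shared core both systems are reinventing: content-addressing artifacts and tying them to a Git commit in one record, whether the file is a binary, a dataset, or a model:

```python
import hashlib
import json
from pathlib import Path

def fingerprint_artifacts(paths: list[str], git_commit: str) -> str:
    """Record a SHA-256 fingerprint for each file alongside the Git commit,
    so one record covers code, data, and model versions together."""
    record = {"git_commit": git_commit, "artifacts": {}}
    for p in paths:
        # Content-address every artifact the same way, regardless of type.
        record["artifacts"][p] = hashlib.sha256(Path(p).read_bytes()).hexdigest()
    return json.dumps(record, indent=2, sort_keys=True)
```

Dedicated dataset- and model-versioning tools add important features on top of this (storage, diffing, lineage UIs), but the underlying mechanism is the same hashing-plus-metadata pattern Git already applies to code, which is why running the two systems side by side duplicates so much effort.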
3. Lack of Synergy Between Teams

The lack of integration between DevOps and MLOps pipelines also creates silos between engineering, data science, and operations teams. These silos result in poor communication, misaligned objectives, and delayed deployments. Data scientists may struggle to get their models production-ready due to the absence of consistent collaboration with software engineers and DevOps teams.
Moreover, because the ML models are not treated as standard software artifacts, they may bypass crucial steps of testing, security scanning, and quality assurance that are typical in a DevOps pipeline. This absence of consistency can lead to quality issues, unexpected model behavior in production, and a lack of trust between teams.
4. Deployment Challenges and Slower Iteration Cycles

The disjointed state of DevOps and MLOps also affects deployment speed and flexibility. In a traditional DevOps setting, CI/CD ensures frequent and reliable software updates. However, with ML, model deployment requires retraining, validation, and sometimes even re-architecting the integration. This mismatch results in slower iteration cycles, as each pipeline operates independently, with distinct sets of validation checks and approvals.
For instance, an engineering team might be ready to release a new feature, but if an updated ML model is needed, it might delay the release due to the separate MLOps workflow, which involves retraining and extensive testing. This leads to slower time-to-market for features that rely on machine learning components. Our State of the Union Report found organizations using our platform brought over 7 million new packages into their software supply chains in 2024, highlighting the scale and speed of development.
5. Difficulty in Maintaining Consistency and Traceability

Having separate DevOps and MLOps configurations makes it difficult to maintain a consistent approach to versioning, auditing, and traceability across the entire software system. In a typical DevOps pipeline, code changes are tracked and easily audited. In contrast, ML models have additional complexities like training data, hyperparameters, and experimentation, which often reside in separate systems with different logging mechanisms.
This lack of end-to-end traceability makes troubleshooting issues in production more complicated. For example, if a model behaves unexpectedly, tracking down whether the issue lies in the training data, model version, or a specific part of the codebase can become cumbersome without a unified pipeline.
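A unified pipeline makes that troubleshooting a lookup rather than an investigation. The sketch below is a toy illustration (the log entries, model names, and field names are invented for the example): a single lineage log, written by the same pipeline that ships the code, maps a misbehaving model version back to the commit, data, and hyperparameters that produced it:

```python
# Hypothetical unified lineage log: one entry per deployed model version,
# written by the same pipeline that ships the application code.
LINEAGE_LOG = [
    {"model_version": "fraud-v12", "code_commit": "9f3ab21",
     "training_data_sha256": "6d0f9e...", "hyperparameters": {"lr": 0.01}},
    {"model_version": "fraud-v13", "code_commit": "c44e7d0",
     "training_data_sha256": "a91c3b...", "hyperparameters": {"lr": 0.005}},
]

def trace(model_version: str) -> dict:
    """Given the model version observed misbehaving in production, return
    the code commit, data fingerprint, and hyperparameters behind it."""
    for entry in LINEAGE_LOG:
        if entry["model_version"] == model_version:
            return entry
    raise KeyError(f"no lineage recorded for {model_version}")
```

With separate pipelines, the same answer is scattered across a Git history, an experiment tracker, and a data store with no shared key, which is exactly the cumbersome search described above.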
The Case for Integration: Why Merge DevOps and MLOps?

As you can see, maintaining siloed DevOps and MLOps pipelines results in inefficiencies, redundancies, and a lack of collaboration between teams, leading to slower releases and inconsistent practices. Integrating these pipelines into a single, cohesive Software Supply Chain would help address these challenges by bringing consistency, reducing redundant work, and fostering better cross-team collaboration.
Shared End Goals of DevOps and MLOps

DevOps and MLOps share the same overarching goals: rapid delivery, automation, and reliability. Although their areas of focus differ—DevOps concentrates on traditional software development while MLOps focuses on machine learning workflows—their core objectives align in the following ways:
1. Rapid Delivery
2. Automation
3. Reliability
In traditional DevOps, the concept of treating all software components (binaries, libraries, and configuration files) as artifacts is well-established. These artifacts are versioned, tested, and promoted through different environments (e.g., staging, production) as part of a cohesive software supply chain. Applying the same approach to ML models can significantly streamline workflows and improve cross-functional collaboration. Here are four key benefits of treating ML models as artifacts:
1. Creates a Unified View of All Artifacts

Treating ML models as artifacts means integrating them into the same systems used for other software components, such as artifact repositories and CI/CD pipelines. This approach allows models to be versioned, tracked, and managed in the same way as code, binaries, and configurations. A unified view of all artifacts creates consistency, enhances traceability, and makes it easier to maintain control over the entire software supply chain.
For instance, versioning models alongside code means that when a new feature is released, the corresponding model version used for the feature is well-documented and reproducible. This reduces confusion, eliminates miscommunication, and allows teams to identify which versions of models and code work together seamlessly.
2. Streamlines Workflow Automation

Integrating ML models into the larger software supply chain ensures that the automation benefits seen in DevOps extend to MLOps as well. By automating the processes of training, validating, and deploying models, ML artifacts can move through a series of automated steps—from data preprocessing to final deployment—similar to the CI/CD pipelines used in traditional software delivery.
This integration means that when software engineers push a code change that affects the ML model, the same CI/CD system can trigger retraining, validation, and deployment of the model. By leveraging the existing automation infrastructure, organizations can achieve end-to-end delivery that includes all components—software and models—without adding unnecessary manual steps.
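One way to picture this trigger logic is a simple stage planner, shown below as an illustrative sketch (the repository layout and stage names are assumptions; a real setup would express the same rule in its CI system's configuration): code-only changes run the usual stages, while changes touching model code or data add retraining and validation to the same run:

```python
def plan_pipeline(changed_files: list[str]) -> list[str]:
    """Sketch of a unified CI planner: decide which stages a change needs
    based on the files it touches, keeping code and model delivery in one run."""
    stages = ["build", "unit-test"]
    # Assumed repo layout: model code and data live under these prefixes.
    model_paths = ("models/", "features/", "data/")
    if any(f.startswith(model_paths) for f in changed_files):
        stages += ["retrain-model", "validate-model"]
    stages.append("deploy")
    return stages
```

The point is that both kinds of change flow through one pipeline with one set of approvals, rather than a code release waiting on a separately gated MLOps workflow.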
3. Enhances Collaboration Between Teams

A major challenge of maintaining separate DevOps and MLOps pipelines is the lack of cohesion between data science, engineering, and DevOps teams. Treating ML models as artifacts within the larger software supply chain fosters greater collaboration by standardizing processes and using shared tooling. When everyone uses the same infrastructure, communication improves, as there is a common understanding of how components move through development, testing, and deployment.
For example, data scientists can focus on developing high-quality models without worrying about the nuances of deployment, as the integrated pipeline will automatically take care of packaging and releasing the model artifact. Engineers, on the other hand, can treat the model as a component of the broader application, version-controlled and tested just like other parts of the software. This shared perspective enables more efficient handoffs, reduces friction between teams, and ensures alignment on project goals.
4. Improves Compliance, Security, and Governance

When models are treated as standard artifacts in the software supply chain, they can undergo the same security checks, compliance reviews, and governance protocols as other software components. DevSecOps principles—embedding security into every part of the software lifecycle—can now be extended to ML models, ensuring that they are verified, tested, and deployed in compliance with organizational security policies.
This is particularly important as models become increasingly integral to business operations. By ensuring that models are scanned for vulnerabilities, validated for quality, and governed for compliance, organizations can mitigate risks associated with deploying AI/ML in production environments.
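As one concrete example of scanning a model artifact the way binaries are scanned: serialized Python models are often pickles, and a pickle can reference arbitrary callables that execute on load. The sketch below (illustrative only; the allowlist is an assumption, and real scanners such as those built into model registries are far more thorough) walks the opcode stream without executing it and flags global references outside an allowlist:

```python
import pickletools

# Illustrative allowlist of (module, name) pairs a trusted model may reference.
ALLOWED_GLOBALS = {("builtins", "dict"), ("builtins", "list")}

def scan_pickle(payload: bytes) -> list[tuple[str, str]]:
    """Walk the pickle opcode stream without executing it and report any
    global (module, name) references outside the allowlist."""
    suspicious = []
    recent_strings = []  # STACK_GLOBAL pulls module/name from prior string args
    for op, arg, _pos in pickletools.genops(payload):
        if isinstance(arg, str):
            recent_strings.append(arg)
        if op.name == "GLOBAL":
            module, name = arg.split(" ", 1)
        elif op.name == "STACK_GLOBAL":
            module, name = recent_strings[-2], recent_strings[-1]
        else:
            continue
        if (module, name) not in ALLOWED_GLOBALS:
            suspicious.append((module, name))
    return suspicious
```

A check like this can sit in the same pipeline stage that runs dependency and container scans, so a model artifact gets a security verdict before promotion just as any other build output would.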
Conclusion

Treating ML models as artifacts within the larger software supply chain transforms the traditional approach of separating DevOps and MLOps into a unified, cohesive process. This integration streamlines workflows by leveraging existing CI/CD pipelines for all artifacts, enhances collaboration by standardizing processes and infrastructure, and ensures that both code and models meet the same standards for quality, reliability, and security. As organizations race to deploy more software and models, we need holistic governance.
Currently, only 60% of companies have full visibility into software provenance in production. By combining DevOps and MLOps into a single Software Supply Chain, organizations can better achieve their shared goals of rapid delivery, automation, and reliability, creating an efficient and secure environment for building, testing, and deploying the entire spectrum of software, from application code to machine learning models.
This article was produced as part of TechRadarPro's Expert Insights channel, where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing, find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro
The Last of Us season 2 episode 7 is out now – and, with it, the incredibly popular show's latest installment has come to an end.
Like its predecessor, season 2 of HBO's TV adaptation has been appointment viewing for all of us over the past seven weeks. And, as the dust settles on its near-50-minute finale, I imagine you've got some big questions about what happened and the show's future.
So, how does The Last of Us season 2 end? Are there any end credits scenes? And when do we think season 3 will arrive worldwide? I'll aim to answer those questions below, but bear in mind that full spoilers immediately follow for The Last of Us' season 2 finale. Make sure you've watched it before you proceed.
Who dies in The Last of Us season 2 episode 7?

RIP, Jesse (Image credit: HBO)

The Last of Us TV show's latest episode contains three big character deaths.
The most unexpected of those, and arguably the most shocking one since Joel's demise in season 2 episode 2, is Jesse's. The close friend of Ellie and Dina's ex-boyfriend (and father of Dina's unborn child) is killed by Abby when she single-handedly storms the Seattle theater that's been Ellie and Dina's base of operations since this season's fourth episode.
Jesse's death probably won't shock those who have played The Last of Us Part II, aka the Naughty Dog video game season 2 is based on. And if you'd been paying attention to the foreshadowing throughout season 2's final episode, such as Jesse constantly expressing his wish to get out of Seattle in one piece, I doubt you would've been stunned by his passing, either.
Mel and Owen are two of three big casualties in The Last of Us season 2 finale (Image credit: HBO)

But why does Abby kill him? The reason is simple: Ellie accidentally killed Owen and Mel, two members of Abby's party who helped her track down and murder Joel in episode 2. Abby, then, wants revenge for the deaths of two of her closest friends.
Having learned of Abby's location from Nora in episode 5 – that being, Seattle's aquarium not too far from the city's unmissable Ferris wheel – Ellie infiltrates the building and encounters Owen and Mel while searching for Abby.
Still traumatized from how much she tortured Nora two episodes ago, Ellie claims she won't shoot Owen and Mel if they tell her where Abby is now. Owen initially refuses, but to buy himself and Mel some time, he eventually agrees to show Ellie where she can find Abby on a map.
However, as Owen approaches the map on a table, he makes a move to grab a handgun to shoot Ellie first. Unfortunately for Owen, Ellie's survival instincts kick in and she shoots him first.
Three down, two to go, eh Ellie? (Image credit: HBO)

The bullet passes through Owen's neck, killing him instantly. After exiting the back of Owen's throat, it hits Mel, who's standing behind him. The bullet slices her neck, nicking an artery in the process, which results in Mel collapsing and bleeding out.
Ordinarily, this would be a tragic accident in its own right – after all, Mel was unarmed and made no attempt to harm Ellie. However, Mel makes things even worse for Ellie (and, by proxy, us as viewers) before she dies by revealing she's heavily pregnant.
If Ellie felt incredible guilt and shame over what she'd done to Nora, she feels 50 times worse over not only taking Mel's life, but also that of her innocent unborn child. It's a moment that hits home even harder when you consider how much danger Ellie has put a pregnant Dina in since the pair left Jackson, Wyoming, too.
Abby tracks down Ellie and company to get revenge for Mel and Owen's deaths (Image credit: HBO)

Jesse, Owen, and Mel aren't the only casualties of season 2 episode 7 – well, that's what The Last of Us wants you to think. One of the finale's last shots shows Abby pointing her sidearm at an unarmed Ellie, who shouts "no no no!" before the screen cuts to black as a shot is fired.
There's no way that the hit Max show just bumped off another of its main characters in Ellie, right? In short: no, she doesn't die. Ellie is the protagonist of this TV series and The Last of Us Part II. Spoilers notwithstanding, her story is far from over in HBO's live-action adaptation.
So, who fired the shot that we hear? I'm not going to ruin that now. You'll just have to wait for season 3 (more on this later) to arrive. Or, you know, you could watch a playthrough of The Last of Us 2 on YouTube if you want an answer ASAP.
Is there a mid-credits scene in The Last of Us season 2 episode 7?

As of season 2 episode 7, Dina is still alive (Image credit: Liane Hentscher/HBO)

There's no mid-credits scene to stick around for.
This season's final scene doesn't count as one, either. Sure, it drops a big hint about how season 3 will begin (more on this shortly), but it's a brief scene that takes place before the end credits start to roll. So, it can't be classed as a traditional mid-credits stinger.
Does The Last of Us season 2's final episode have a post-credits scene?

Expect to see more of Isaac in The Last of Us' third season (Image credit: Liane Hentscher/HBO)

Nope. The Last of Us season 2 doesn't have a post-credits scene, either. Based on how the show's latest episode ends, it doesn't need one.
When will The Last of Us season 3 be released?

Trying to get word on when season 3 will make its worldwide debut like... (Image credit: Liane Hentscher/HBO)

We don't know. HBO only confirmed that The Last of Us season 2 wouldn't be the hit series' final chapter in April, so it'll be a few years before one of the best Max shows' third season is released.
It's likely that work has been going on behind the scenes on season 3 for some time. Indeed, I'd be surprised if the show's chief creative team hasn't been penning its scripts, location scouting, and conducting other pre-production elements for months at this point.
Nevertheless, with filming yet to begin on The Last of Us season 3, I suspect it'll be mid-2027 at the earliest before it launches worldwide.
What does The Last of Us' season 2 finale tell us about the plot of season 3?

Season 3's first few episodes will jump back in time to depict events from Abby's viewpoint (Image credit: HBO)

Season 2 episode 7's final scene suggests that next season will give us an entirely different perspective on the events that play out during Ellie and Dina's first 72 hours in Seattle.
After the screen cuts to black in this season's finale, many viewers might have expected the credits to roll, thereby leaving us on a cliffhanger.
Instead, a new scene begins seconds later, reuniting us with Abby as she's woken up by Manny. He tells her that "they" won't be happy if she keeps them waiting, to which Abby replies she'll be there in five minutes.
Once she's fully come to, Abby steps out onto a balcony overlooking a football stadium that's been repurposed as a headquarters for the Isaac-led antagonistic faction known as the Washington Liberation Front (WLF). After she surveys the scene, Abby heads back inside as the words 'Seattle, Day One' appear in the bottom left-hand corner of the screen.
We'll witness Ellie's first 72 hours in Seattle from Abby's perspective next season (Image credit: HBO)

This is the same location and time stamp that appeared in season 2 episode 4 when Ellie and Dina first arrive in Seattle. So, The Last of Us season 3's first few episodes, if not the entirety of next season, will travel back in time and cover the same three-day period in the US Pacific Northwest city through Abby's eyes.
That won't be a surprise to those who have played The Last of Us Part II. As the deuteragonist of the aforementioned video game, Abby was a playable character for half of the story depicted in the second entry of Naughty Dog's acclaimed and multi-award-winning game franchise. That means her side of the Seattle-based story, which runs concurrently to Ellie's, will be brought to life in season 3 of HBO's TV adaptation.
There's a lot of ground to cover in the Abby-centric part of the story, too. What were Owen and Mel planning to do before Ellie interrupted them? Who's the father of Mel's baby? How did Abby know where to find Ellie and co. in Seattle? What convinced Isaac to choose Abby as the WLF's new leader? Why does Isaac believe the WLF's current leadership is set to perish during the assault on the Seraphites' main headquarters? And does Manny meet the same fate as Owen, Mel, and Nora at Ellie's or someone else's hands, or is he still alive somewhere?
These questions will need answering in season 3 and beyond if The Last of Us officially ends with its rumored four-season plan. I could provide more details now, but again, I don't want to spoil anything significant about Ellie and Abby's journeys from this point on in the story. So, unless you scour the internet for answers now, you'll have to wait until season 3 arrives for them.
At Computex 2025, Maxsun unveiled a striking new entry in the AI hardware space: the Intel Arc Pro B60 Dual GPU, a graphics card pairing two 24GB B60 chips for a combined 48GB of memory.
ServeTheHome claims Maxsun envisions these cards powering dense workstation builds with up to four per system, yielding as much as 192GB of GPU memory in a desktop-class machine.
This development appears to have Intel's implicit approval, suggesting the company is looking to gain traction in the AI GPU market.
A dual-GPU card built for AI memory demands

The Arc Pro B60 Dual GPU is not designed for gaming. Instead, it focuses on AI, graphics, and virtualization tasks, offering a power-efficient profile.
Each card draws between 240W and 300W, keeping power and thermal demands within reach for standard workstation setups.
Unlike some alternatives, this card uses a blower-style cooler rather than a passive solution, helping it remain compatible with conventional workstation designs. That matters for users who want high-end performance without building custom cases or cooling systems.
Still, the architecture has trade-offs. The card relies on x8 PCIe lanes per GPU, bifurcated from a x16 connector. This simplifies design and installation but limits bandwidth compared to full x16 cards.
Each GPU also includes just one DisplayPort and one HDMI output. That design choice keeps multi-GPU setups manageable and avoids hitting OS-level limits; older Windows versions, for example, may have trouble handling more than 32 active display outputs in a single system.
The card’s most intriguing feature may be its pricing. With single-GPU B60 cards reportedly starting around $375 MSRP, the dual-GPU version could land near $1,000.
If that estimate holds, Maxsun’s card would represent a major shift in value. For comparison, Nvidia’s RTX 6000 Ada, with the same 48GB of VRAM, sells for over $5,500. Two of those cards can push costs north of $18,000.
Even so, Intel’s performance in professional applications remains an open question. Many creative professionals still favor Nvidia for its mature drivers and better software optimization.
After many months of speculation, Google finally showed off its still-early-days Android XR smart glasses prototype. It was an impressive live demo, with a live translation portion that went off well, though not without hitches. Still, it got the crowd at Google I/O going, and right after that opening keynote wrapped, I strolled around the Shoreline Amphitheater to find a pair to try.
Much like my time with Project Moohan, the prototype Android XR headset that Google and Samsung are working on, I only spent about five minutes with these prototype glasses. And no, it wasn’t a sleek frame made by Warby Parker or a wild one from Gentle Monster – instead, it was the pair Google demoed on-stage, the prototype Android XR glasses made by Samsung.
As you can see above, much like Meta Ray-Bans and unlike Snapchat Spectacles (the first gen), these prototypes look like standard black frames. They're a bit thicker at the stems, but they’re also loaded with tech – though not in a way that screams it from the outside.
It was a short, pretty rushed demo, but certainly a compelling one.
(Image credit: Jacob Krol/Future)

The tech here is mostly hidden – there's a screen baked into the lens which, when the glasses are worn, appears as a little box when it's showing something larger. Otherwise, when I first turned the glasses on, I saw the time and the weather hovering at the top of my field of vision.
When I pressed the button on the right stem to capture a photo, the shot flashed up briefly, semi-transparent and larger, in my field of vision. Neat, and a more present way of capturing than on the screen-less Meta Ray-Bans.
These are both cool, and during the keynote, Google also shared that the screens could be used for messaging, calls, and translating as well, but I didn’t get to try that. While I couldn’t ask for directions myself, a Google rep in my demo was able to toss up what navigation would look like, and this feature has me most excited about smart glasses with a built-in screen.
Why? Well, the experience of navigating doesn’t get in the way of my field of view – I can still look straight ahead and see at the top that in 500 feet or 50 feet I need to make a right onto a specific avenue. I don’t need to look down at my phone or glance at my wrist; it’s all housed in just one device.
If I need more details or want to see my route, I could glance down to see a mini version of the map, which moved as I moved my head. If I wore these in NYC, I could walk normally and glance at the top to see directions, but when safely stopped and not in the way of others, I could look down to see my full route. That’s pretty neat to me.
(Image credit: Jacob Krol/Future)

The projected screen itself had good-enough quality, though I’m not sure how it performs in direct sunlight, as I tested these in a little room that Google had constructed. It’s important to remember that this is still a prototype – Google has several brands onboard to produce these, but there isn’t an exact timeframe. Developers will be able to start developing and testing by the end of the year, though.
This year, the Project Moohan headset, which also runs Android XR, will arrive. Samsung will ship the headset in a to-be-revealed final version, which could build support from third parties and let Google get more feedback on the platform.
Gemini, Google’s very wise AI assistant, blew me away on Project Moohan and was equally compelling on the Android XR glasses. I asked it for the weather, and got it to give me an audio report of the next few days, had it analyze a replica of a painting, and even look at a book, tell me the reviews, and where I could purchase it.
That power of having Gemini in my frame has me really excited for the future of the category – it’s the audio responses, the connection to the Google ecosystem, and how it plays with the onboard screen. It remains to be seen how Samsung’s final design might look, but it will likely sit alongside several other Android XR-powered smart glasses from the likes of Warby Parker, Xreal, and Gentle Monster, among others.
I’ve long worn Meta Ray-Bans and enjoy those for snapping unique shots or recording POVs like walking my dog Rosie or riding an attraction at a Disney Park. Similarly, I really enjoyed the original version of the Snapchat Spectacles, but the appeal wore off. Those both did only a short – or in the case of the Spectacles, very short – list of functions, but Android XR as a platform feels a heck of a lot more powerful, even from a short five-minute window.
While the design didn’t sell me on Samsung’s prototype, I have high hopes for the Warby Parker ones. Seeing how Gemini’s smarts can fit into such a small frame and how a screen can be genuinely useful but not overly distracting really has me excited. I have a feeling not all of the Android XR glasses will appeal to everyone, but with enough entries, I’m sure one of them will pair form with function in a correct balance.
Gemini in glasses feels less like the future and more like the present, and considering this new entry, my eyes are set on what Meta does next and what Apple's much-rumored entry into the world of smart glasses will look like.