Marvel has delayed the release of Avengers: Doomsday and its sequel.
In a move that won't come as a surprise to many, the comic titan has pushed back the launch dates for Doomsday and its follow-up Avengers: Secret Wars.
The pair had been slated to land in theaters on May 1, 2026 and May 7, 2027. Now, you can expect to see Doomsday release in theaters worldwide seven months later than planned, with Avengers 5 now set to arrive on December 18, 2026 and Secret Wars' launch pushed to December 17, 2027.
The next two Avengers movies are set to be the biggest undertakings in Marvel Studios' history. Per Deadline, sources close to the production of both films say they're among the most ambitious projects that parent company Disney has ever produced, too. To quote Thanos, then, it was inevitable that Marvel would need more time to make both flicks.
Why Avengers 5 and 6's release-date delays are so significant
Marvel hasn't said what impact Doomsday's delayed release will have on its other projects (Image credit: Marvel Studios)
Make no mistake, Disney and Marvel have made the right call to delay the release of Doomsday and Secret Wars. The overall response to Marvel Cinematic Universe (MCU) projects since 2019's Avengers: Endgame has been mixed. While some films and Disney+ shows have been critical and commercial successes, others haven't been greeted as enthusiastically or made as much money as Marvel would have hoped.
Disney and Marvel can't afford to fumble the proverbial bag with Doomsday and Secret Wars, especially given the amount of money it'll collectively cost to make them. Add in the talent behind and in front of the camera – Avengers: Doomsday's initial cast alone is 27-deep – and the pressure to deliver two more top-tier Avengers movies is most certainly on.
The release of Spider-Man's next MCU adventure could be pushed back, too (Image credit: Sony Pictures/Marvel Entertainment)
Their release date postponements also raise other potential issues.
For starters, Doomsday and Secret Wars' delay could have a significant impact on Spider-Man: Brand New Day. The webslinger's next big-screen adventure was set to arrive between the pair, with its initial launch date penciled in for July 24, 2026. Spider-Man 4 suffered its own release setback in February, but its launch was only delayed by a week to July 31, 2026.
The big question now is whether Brand New Day will swing into cinemas on that revised date. Depending on which online rumors you believe, Spider-Man 4 will either be a multiverse-style movie like Spider-Man: No Way Home was, or a more grounded, street-level flick.
If it's the former, and if Brand New Day's plot is dependent on events that occur in, or run parallel to, Avengers: Doomsday, the next Spider-Man movie's launch date will likely have to be pushed back again.
Should Brand New Day be moved into 2027, we could see a repeat of 2024, when only one MCU film – Deadpool & Wolverine – landed in theaters, with 2026's sole Marvel movie being Doomsday. That's assuming Avengers 5, aka the second Marvel Phase 6 film, doesn't suffer another release-date setback.
Will Marvel decide to move some of its 2025 Disney+ offerings into early 2026? (Image credit: Marvel Television/Disney Plus)
These delays could have a huge knock-on effect for Marvel's small-screen offerings, too.
If Brand New Day keeps its mid-2026 launch date, a whole year will have passed between the final MCU film of 2025 – The Fantastic Four: First Steps, which arrives on July 25 – and Tom Holland's next outing as Peter Parker's superhero alias. That's not necessarily a bad thing, but it means MCU devotees will look to Disney+, aka one of the world's best streaming services, for their Marvel fix.
Fortunately, Marvel has plenty of TV-based MCU content in the pipeline. From Ironheart's release in late June to Daredevil: Born Again season 2's launch next March, there are currently five live-action and animated series set to debut on Disney's primary streamer.
In light of Doomsday's delay, though, will Marvel tweak its Disney+ lineup and further spread out its small-screen content to fill the void?
Right now, Born Again's second season is the only series confirmed to arrive in 2026. There are other shows in the works that are expected to debut next year, but they aren't likely to be ready until mid- to late 2026. To offset a potentially months-long barren spell in the MCU that Doomsday's delayed release has caused, Marvel might opt to push animated series Eyes of Wakanda or Wonder Man, the final live-action MCU TV show of 2025, into early 2026.
I guess we'll find out more about any further release-schedule changes when Marvel takes to the Hall H stage for its now-annual presentation at San Diego Comic-Con, the 2025 edition of which runs from July 24-27.
Today, artificial intelligence is revolutionizing virtually every industry, but its rapid adoption also comes with a significant challenge: energy consumption.
Data centers are racing to accommodate the surge in AI-driven demand and are consuming significant amounts of electricity to support High-Performance Computing, cloud computing services, and the many digital products and services we rely on every day.
Why are we seeing such a spike in energy use? One reason is heavy reliance on graphics processing unit (GPU) chips, which are much faster and more efficient than CPUs at the parallel processing tasks AI depends on. More than just an advantage, this efficiency has made GPUs the new standard for training and running AI models and workloads.
Yet it also comes at a high cost: soaring energy consumption. Each GPU can require up to four times more electricity than a standard CPU, a dramatic increase that is quickly reshaping energy demands in the data center.
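To put that "up to four times" figure in rough perspective, here is a back-of-envelope sketch in Python. The wattages are illustrative assumptions, not measured figures, and real server power draw varies widely by model and workload:

```python
# Back-of-envelope energy comparison using the article's ~4x multiplier.
CPU_WATTS = 175              # illustrative server CPU draw (assumption)
GPU_WATTS = CPU_WATTS * 4    # the article's "up to four times" figure

HOURS_PER_YEAR = 24 * 365

def annual_kwh(watts, count=1):
    """Annual energy use in kilowatt-hours for `count` devices running 24/7."""
    return watts * count * HOURS_PER_YEAR / 1000

print(annual_kwh(CPU_WATTS))              # one CPU, roughly 1.5 MWh/year
print(annual_kwh(GPU_WATTS, count=1000))  # a 1,000-GPU cluster, ~6.1 GWh/year
```

Even with these toy numbers, a modest GPU cluster lands in the gigawatt-hour range per year, which is why the grid-capacity questions below are being asked.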
For example, consider these recent findings:
The New York Times recently described how OpenAI hopes to build five new data centers that would consume more electricity than the three million households in Massachusetts.
According to the Center on Global Energy Policy, GPUs and their servers could make up as much as 27 percent of the planned new generation capacity for 2027 and 14 percent of total commercial energy needs that year.
A Forbes article predicted that Nvidia’s Blackwell chipset will push power consumption even further – a 300% increase across a single generation of GPUs, with AI systems driving consumption up at an even higher rate.
These findings raise important power-related questions: Is AI growth outpacing the ability of utilities to supply the required energy? Are there other energy options data centers should consider? And perhaps most importantly, what will data centers’ energy use look like in both the short and long term?
Navigating Power Supply and Demand in the AI Era
Despite growing concerns, AI has not yet surpassed the grid’s capabilities. In fact, some advancements suggest that AI energy consumption could even decrease. Many AI companies expended vast amounts of processing power to train their initial models, but newer players like DeepSeek now claim that their systems operate far more efficiently, requiring less computing power and energy.
However, AI’s sudden rise is only one factor in a perfect storm of energy demands. The larger electrification movement, which has introduced millions of electric vehicles to the grid, and the reshoring of manufacturing to the U.S. are also straining resources. AI adds another layer to this complex equation, raising urgent questions about whether existing utilities can keep pace with demand.
Data centers, as commercial real estate, are also subject to the age-old adage, “location, location, location.” Many power generation sites – especially those harnessing solar and wind – are located in rural parts of the United States, but transmission bottlenecks make it difficult to move that power to urban centers where demand is highest. Thus far, geodiversity and urban demand have not driven data centers to these remote areas.
This could soon change. Hyperscalers have already demonstrated their willingness and agility in building data centers in the Arctic Circle to take advantage of natural cooling to reduce energy use and costs. A similar shift may take hold in the U.S., with data center operators eyeing locations in New Mexico, rural Texas, Wyoming, and other rural markets to capitalize on similar benefits.
Exploring Alternative Energy Solutions
As strain on the grid intensifies, alternative energy solutions are gaining traction as a means of ensuring a stable and sustainable power supply.
One promising development is the evolution of battery technology. Aluminum-ion batteries, for example, offer several advantages over lithium-based alternatives. Aluminum is more abundant, sourced from conflict-free regions, and free from the geopolitical challenges associated with lithium and cobalt mining. These batteries also boast a solid-state design, reducing flammability risks, and their higher energy density enables more efficient storage, which helps smooth out fluctuations in energy supply and demand – often visualized as the daily “duck curve.”
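To make the duck-curve smoothing idea concrete, here is a toy sketch. All numbers are invented, and the battery model deliberately ignores round-trip losses and state of charge; it only shows how storage clips the evening peak that solar leaves behind:

```python
# Toy illustration of the "duck curve": net load = demand minus solar,
# and how a battery can flatten the swings. All figures are made up.
demand = [30, 28, 35, 40, 38, 45, 60, 55]   # GW across the day
solar  = [ 0,  0,  5, 15, 18, 10,  1,  0]   # GW of solar generation

net_load = [d - s for d, s in zip(demand, solar)]

def smooth_with_battery(load, capacity_gw=8):
    """Charge when load is below its average, discharge when above,
    capped by the battery's power capacity (a deliberate simplification)."""
    avg = sum(load) / len(load)
    smoothed = []
    for x in load:
        shift = max(-capacity_gw, min(capacity_gw, x - avg))
        smoothed.append(x - shift)
    return smoothed

print(net_load)                     # note the steep evening ramp
print(smooth_with_battery(net_load))  # the peak is clipped toward the average
```

The evening spike in net load (solar drops off while demand peaks) is exactly the fluctuation that higher-density storage helps absorb.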
Nuclear energy is also re-emerging as a viable solution for long-term, reliable power generation. Advanced small modular reactors (SMRs) offer a scalable, low-carbon alternative that can provide consistent energy without the intermittency of renewables.
However, while test sites are under development, SMRs have yet to begin generating power and may still be five or more years away from large-scale deployment. Public perception remains a key challenge, as strict regulations often require plants to be situated far from populated areas, and the long-term management of nuclear waste continues to be a concern.
Additionally, virtual power plants (VPPs) are revolutionizing the energy landscape by connecting and coordinating thousands of decentralized batteries to function as a unified power source. By optimizing the generation, storage, and distribution of renewable energy, VPPs enhance grid stability and efficiency. Unlike traditional power plants, VPPs do not rely on a single energy source or location, making them inherently more flexible and resilient.
Securing a Sustainable Power Future for AI and Data Centers
While it’s hard to predict what lies ahead for AI and how much more demand we’ll see, the pressure is on to secure reliable, sustainable power, now and into the future.
As the adoption of AI tools accelerates, data centers must proactively seek sustainable and resilient energy solutions. Embracing alternative power sources, modernizing grid infrastructure, and leveraging cutting-edge innovations will be critical in ensuring that the power needs of AI-driven industries can be met – now and in the years to come.
This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro
For SaaS businesses eyeing a successful exit, particularly when engaging with sophisticated Private Equity (PE) and tech investors, the era of simply showcasing impressive top-line growth is over.
Today, data reigns supreme. It's the bedrock upon which compelling value stories are built, the lens through which operational efficiency and scalability are scrutinized, and ultimately, the key to unlocking those coveted higher valuation multiples.
A robust data strategy, coupled with the ability to extract meaningful insights, is no longer a ‘nice-to-have’ but a fundamental requirement for securing a lucrative exit in today’s competitive landscape.
What investors are looking for
So, what exactly are these discerning investors looking for in the data of a prospective SaaS acquisition? The foundation, without a doubt, remains the ARR bridge, or what can be referred to as the ‘revenue snowball’. This isn't just about presenting a static ARR figure; it’s about demonstrating how that recurring revenue has evolved over time. Investors will dissect this data from every angle – group-wide, segmented by product, customer cohort, and geography.
They want to see the trajectory, understand the drivers of growth and churn, and identify any potential vulnerabilities. Therefore, your ARR bridge needs to be more than just a spreadsheet; it needs to be a dynamic, drillable, and rigorously stress-tested tool that can withstand the intense scrutiny of due diligence.
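As an illustration of the kind of drillable ARR bridge described above, here is a minimal sketch in Python. The movement categories and every figure are hypothetical; a real bridge would also segment each movement by product, cohort, and geography:

```python
from dataclasses import dataclass

@dataclass
class ArrBridge:
    """One period of an ARR bridge: opening ARR plus the movements
    investors will want to drill into (all figures hypothetical)."""
    opening: float
    new_business: float   # ARR from new customers won this period
    expansion: float      # upsell/cross-sell to existing customers
    contraction: float    # downgrades by existing customers
    churn: float          # ARR lost to cancelled customers

    @property
    def closing(self) -> float:
        return (self.opening + self.new_business + self.expansion
                - self.contraction - self.churn)

    @property
    def net_revenue_retention(self) -> float:
        # NRR looks only at the existing base, ignoring new logos.
        return (self.opening + self.expansion
                - self.contraction - self.churn) / self.opening

q = ArrBridge(opening=10_000_000, new_business=1_500_000,
              expansion=800_000, contraction=200_000, churn=400_000)
print(q.closing)                # 11,700,000 closing ARR
print(q.net_revenue_retention)  # 1.02, i.e. 102% NRR
```

Chaining periods like this is what turns a static ARR number into the "snowball" investors can interrogate: each quarter's closing figure becomes the next quarter's opening.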
Beyond the ARR bridge, several other key insights are paramount. Sales pipeline reporting provides a crucial forward-looking perspective. Investors want to see a healthy, well-managed pipeline with clearly defined stages, realistic conversion rates, and accurate forecasting. This demonstrates the predictability and sustainability of future revenue growth. Similarly, classic FP&A reports remain essential, offering a historical view of financial performance, profitability trends, and cost management.
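A stage-weighted forecast is one common way to sanity-check the pipeline reporting described above. The stage names, win rates, and deal figures below are illustrative assumptions, not a prescribed methodology:

```python
# Weighted pipeline value: each deal's size times the historical
# win rate for its current stage (all values hypothetical).
STAGE_WIN_RATE = {"qualified": 0.10, "demo": 0.25,
                  "proposal": 0.50, "negotiation": 0.75}

deals = [("Acme Corp",  "proposal",    120_000),
         ("Globex",     "demo",         80_000),
         ("Initech",    "negotiation", 200_000)]

forecast = sum(value * STAGE_WIN_RATE[stage] for _, stage, value in deals)
print(forecast)  # 230000.0 of weighted pipeline
```

Investors will compare a forecast like this against what actually closed in prior periods; a persistent gap between the two is exactly the kind of vulnerability due diligence is designed to surface.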
However, some SaaS firms are now also looking to leverage product usage insights to a greater extent than ever before. Understanding how customers are interacting with the platform, identifying power users, and tracking feature adoption provides invaluable insights into customer stickiness, potential for upselling, and overall product value.
Looking ahead
The role of data in shaping SaaS valuations will only intensify. We anticipate that the level of scrutiny and the expectation for data maturity and insightful analysis will continue to rise. Gone are the days of presenting high-level metric summaries; investors will increasingly demand granular insights and a clear understanding of the ‘why’ behind the numbers. When it comes to performance and trends, simply saying profitability has grown by X% year on year is no longer enough – it needs to be evidenced by granular data and solid analytics.
Investors want to know what’s working now and how your company can scale post-acquisition. Providing the context behind the metrics makes it easier to showcase opportunities for further growth, with potential investors able to leverage these data “assets” to underpin their investment cases. With investor expectations rising, those who fail to provide that context risk undermining their valuation potential or, worse still, failing to secure the deal.
Furthermore, I believe that companies will need to start demonstrating how they are leveraging data to capitalize on the value that advanced analytics can bring. This could range from using AI-powered analytics to identify at-risk customers to employing machine learning to drive new business growth and customer expansion.
Even while there may be applications of AI tools in the SaaS space that aren’t necessarily tied to a firm’s data, most of these revenue-driving applications of advanced analytics and machine learning are only possible when the fundamentals are already firmly in place.
Building compelling value
So, how can SaaS firms proactively use data to build a compelling value story that resonates with potential acquirers? It boils down to not just making data a strategic priority but building the data policies, expertise and infrastructure you need into the fabric of your SaaS business.
Not everything has to be in place from day one; rather, you need a strategy that lets you ramp up to gathering all the critical data points you will need to answer every question an investor will ultimately ask. Doing this also lays the foundations to take advantage of the latest generative AI advances. As mentioned, AI applied to a shaky data foundation is unlikely to get you results, but applied to the right data foundations it can transform the value of your business.
Luckily, the data points that PE firms and other potential investors now really value are the same insights that will make a fundamental improvement to how effectively you make decisions as your SaaS startup scales. The important thing to remember with any data project is to start with the questions you want to answer. This means understanding modern investors. Ask yourself, what metrics, beyond simple revenue figures, will tell the story of your company’s success and potential?
Aside from the core metrics already mentioned, there may be further opportunities to demonstrate differentiation. It could be the diversity of your customer base – both geographically and by sector. Or it could be that the cost of serving an additional customer, and the automation of key processes, provide compelling evidence of scalability.
When you have a clear picture of where your real strength and USP exists, the next step is to develop the data collection, management and analysis systems and policies that will prove what you know to investors.
Further down the line
It’s likely that there will also be a strong business case for investment in upskilling and retraining staff across the board.
This should include everyone, including all senior teams. Even today, it still surprises me how few founders and business owners can understand and interpret their core business data, instead relying on a handful of experts. After all, it’s impossible to know what you don’t know – and a second-hand account of somebody else’s understanding, no matter how advanced it may be, could never substitute for your own personal analysis.
By building up your own expertise now, you and your senior team will be best positioned to demonstrate a compelling equity narrative that results in the highest possible valuation at the point of exit.
Enterprises find themselves at a pivotal moment in communication technology, facing a difficult decision: embrace modern technology or protect their investments in existing systems. This has created a divide between all-new cloud solutions and approaches that work with the infrastructure organizations already have in place. Avaya's new Infinity platform solves this dilemma by offering a way to do both.
Bridging Technological Divides
The enterprise communication technology landscape has fragmented into distinct camps. On one side stand cloud-native solutions promising flexibility and innovation but requiring complete system replacement. On the other, traditional vendors offer incremental improvements to on-premise systems without fundamentally reimagining their architecture.
Our approach with Avaya Infinity platform targets the substantial middle ground with a hybrid solution for enterprises seeking modernization without abandoning functional infrastructure investments. This hybrid model acknowledges a fundamental reality: most large organizations operate complex technology ecosystems built over decades, making complete replacements impractical regardless of the benefits.
Differentiated Architecture
What differentiates Avaya Infinity platform is its architectural approach. It’s a secure platform that ensures compliance, deployment flexibility, and top-tier performance — a single code base across on-prem, cloud, and hybrid environments. Rather than forcing customers into two distinct choices, Avaya Infinity platform offers:
This architecture addresses the realities enterprises face in the contact center. The vast majority of organizations simply cannot afford operational disruption during technological transformation, yet they’re also unable to ignore competitive pressure to implement AI-powered experiences.
The Strategic Benefits
Avaya Infinity platform offers a hybrid solution that enables organizations to:
For those managing customer experience strategies, this approach transforms the contact center into a connection center ─ connecting channels (voice and digital), connecting insights (data and behavior), connecting technologies (unifying AI, applications and disparate systems), and connecting workflows (delivering hyper personalized experiences). When customer interactions generate not just service outcomes but actionable intelligence, every conversation becomes a source of competitive advantage.
Balancing Innovation and Stability
The enterprise technology landscape has historically swung between innovation cycles and stability periods. Today's environment is unique in demanding both simultaneously—rapid innovation in customer experience alongside operational stability in core systems.
Avaya Infinity platform embraces this hybrid reality to offer a compelling vision: transformation without operational upheaval. Its architecture builds on existing investments while enabling future capabilities, reflecting the fact that for most enterprises, technology evolution occurs on a continuum rather than through discrete revolutions.
The Path Forward
Avaya Infinity platform supports sustainable transformation strategies using on-premise investments while systematically introducing AI-powered innovations. It delivers what enterprises need today and expect tomorrow.
Watch this video to learn more about Avaya Infinity platform and contact an Avaya expert to request a demo here.
Google's unveiling of a new line of AI-fueled smart glasses built on the Android XR platform was only one of dozens of announcements at Google I/O this year. Even so, one facet in particular caught my eye as more important than it might have seemed to a casual viewer.
While the idea of wearing AI-powered lenses that can whisper directions into your ears while projecting your to-do list onto a mountain vista is exciting, it was how you'll look while using them that grabbed my attention. Specifically, Google's partnership with Warby Parker and Gentle Monster to design its new smart glasses.
The spectre of Google Glass and the shadow cast by the so-called Glassholes wearing them went unmentioned, but it's not hard to see the partnerships as part of a deliberate strategy to avoid repeating the mistakes made a decade ago. Wearing Google Glass might have said, “I’m wearing the future,” but it also hinted, “I might be filming you without your consent.” No one will think that Google didn't consider the fashion aspect of smart glasses this time. Meta’s Ray-Ban collaboration is based on a similar impulse.
If you want people to wear computers on their faces, you have to make them look good. Warby Parker and Gentle Monster are known for creating glasses that appeal to millennials and Gen Z, both in look and price.
"Warby Parker is an incredible brand, and they've been really innovative not only with the designs that they have but also with their consumer retail experience. So we're thrilled to be partnered with them," said Sameer Samat, president of Google’s Android Ecosystem, in an interview with Bloomberg. "I think between Gentle Monster and Warby Parker, they're going to be great designs. First and foremost, people want to wear these and feel proud to wear them."
Smart fashion
Wearables are not mini smartphones, and treating them that way has proven to be a mistake. Just because you want to scroll through AR-enhanced dog videos doesn't mean you don't want to look good while doing it.
Plus, smart glasses may be the best way to integrate generative AI like Google Gemini into hardware. Compared to the struggles of the Humane AI Pin, the Rabbit R1, and the Plaud.ai NotePin, smart glasses feel like a much safer bet.
We already live in a world saturated with wearable tech. Smartwatches are ubiquitous, and wireless earbuds also pack microphones and biometric sensors. Glasses, though, occupy prime real estate on your face; they're far more a part of how people identify you than your watch is. Augmented reality devices sitting on your nose need to be appealing, no matter which side of the lenses you look at.
Combine that with what the smart glasses offer wearers, and you have a much stronger product. They don't have to do everything, just enough to justify wearing them. The better they look, the less justification you need for the tech features.
Teaming up with two companies that actually understand design shows Google has absorbed that lesson. Google isn’t pretending to be a fashion house; it’s outsourcing style to people who know what they're doing. Google seems to have learned that if smart glasses are going to work as a product, they need to blend in with other glasses, not proclaim to the world that someone is wearing them.
How much they cost will matter, as setting smart glasses prices to match high-end smartphones would slow adoption. But if Google leverages Warby Parker and Gentle Monster’s direct-to-consumer experience to keep prices reasonable, it might entice a lot more people, and possibly undercut its rivals. People are used to spending a few hundred dollars on prescription glasses; a reasonably sized extra charge for AI would be just another perk, like polarized prescription sunglasses.
Success here might also ripple out to smaller but fashionable eyewear brands. Your favorite boutique frame designer might eventually offer 'smart' as a category, as they do with transition lenses today. Google is betting that people will choose to wear technology if it looks like something they would choose to wear anyway, and a bet on people wanting to look good is about as safe a bet as I can imagine.
Anthropic has unveiled Claude 4, the latest generation of its AI models. The company boasts that the new Claude Opus 4 and Claude Sonnet 4 models are at the top of the game for AI assistants, with unmatched coding skills and the ability to function independently for long periods of time.
Claude Sonnet 4 is the smaller model, but it's still a major upgrade in power from the earlier Sonnet 3.7. Anthropic claims Sonnet 4 is much better at following instructions and coding. It's even been adopted by GitHub to power a new Copilot coding agent. It's likely to be much more widely used simply because it is the default model on the free tier for the Claude chatbot.
Claude Opus 4 is the flagship model for Anthropic and supposedly the best coding AI around. It can also handle sustained, multi-hour tasks, breaking them into thousands of steps to fulfill. Opus 4 also includes the "extended thinking" feature Anthropic tested on earlier models. Extended thinking allows the model to pause in the middle of responding to a prompt and use search engines and other tools until it has more data and can resume right where it left off.
That means a lot more than just longer answers. Developers can train Opus 4 to use all kinds of third-party tools. Opus 4 can even play video games pretty well, with Anthropic showing off how the AI performs during a game of Pokémon Red when given file access and permission to build its own navigation guide.
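The "pause, use a tool, resume" pattern described above can be sketched generically. To be clear, the functions below are toy stand-ins written for illustration, not Anthropic's actual API:

```python
# Generic sketch of an agentic tool-use loop: the model can request a
# tool, receive the result, and resume where it left off.
# `call_model` and `run_tool` are toy stand-ins, not Anthropic's API.

def run_tool(name, args):
    # Toy tool: pretend to run a web search and return a result string.
    return f"search results for {args['query']}"

def call_model(messages):
    # Toy model: ask for a search on the first turn, answer on the second.
    if not any(m["role"] == "tool" for m in messages):
        return {"type": "tool_request", "tool": "search",
                "args": {"query": messages[0]["content"]}}
    return {"type": "answer",
            "content": "final answer using " + messages[-1]["content"]}

def agent_loop(prompt, max_steps=10):
    messages = [{"role": "user", "content": prompt}]
    for _ in range(max_steps):
        reply = call_model(messages)
        if reply["type"] == "tool_request":
            result = run_tool(reply["tool"], reply["args"])
            messages.append({"role": "tool", "content": result})
            continue  # the model resumes with the new data in context
        return reply["content"]
    raise RuntimeError("step budget exhausted")

print(agent_loop("Pokémon Red map layout"))
```

The loop structure is the point: long multi-hour tasks are just many iterations of request-tool, observe-result, resume, with the conversation history acting as the model's working memory.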
(Image credit: Anthropic)
Claude 4 power
Both Claude 4 models boast enhanced features centered around tool use and memory. Opus 4 and Sonnet 4 can use tools in parallel and switch between reasoning and searching. And their memory system can save and extract key facts over time when provided access to external files. You won't have to re-explain what you want on every third prompt.
To make sure the AI is doing what you want without overwhelming you with every detail, Claude 4's models also offer what Anthropic calls “thinking summaries.” Instead of a wall of text detailing each of the potentially thousands of steps taken to complete a prompt, Claude employs a smaller, secondary AI model to condense the train of thought into something digestible.
A side benefit of the way the new models work is that they're less likely to cheat to save time and processing power. Anthropic said they’ve reduced shortcut-seeking behavior in tasks that tempt AIs to fake their way to a solution (or just make something up).
The bigger picture? Anthropic is clearly gunning for the lead in AI utility, particularly in coding and agentic, independent tasks. ChatGPT and Google Gemini have bigger user bases, but Anthropic has the means to entice at least some AI chatbot users away to Claude. With Sonnet 4 available to free users and Opus 4 bundled into Claude Pro, Max, Team, and Enterprise plans, Anthropic is trying to appeal to both the budget-friendly and premium AI fans.
Samsung already doubled down on its Art Mode and Art Store earlier in 2025 by expanding it to nearly its entire TV lineup, well beyond the Frame TV or Frame Pro. And if you’ve ever wished you could pick an iconic piece of art from the Star Wars universe – maybe an AT-AT on Hoth or an X-Wing – or something from the world of Disney like Snow White, Samsung’s answering the call.
Beyond the thousands of art pieces already available on the Art Store, Samsung has now dropped a collection of pieces in partnership with Disney. The collection goes beyond the iconic classic Disney animated films to include Star Wars, Pixar, and National Geographic. All of the pieces, be they animated or a wild shot of nature, are in 4K quality to ensure they’ll look their best on your Samsung TV.
Now, it’s not a free drop – you’ll need to be subscribed to Samsung’s Art Store and have an eligible TV. That membership is either $4.99 a month or $49.99 a year in the United States and lets you access all the pieces, including future drops.
While Art Mode and these works of art will look their best on a Samsung Frame TV or Frame Pro, thanks to the special reflection-blocking, matte finish, you’re not limited to that specific family of TVs.
(Image credit: Samsung)
Samsung’s expanded Art Mode support to QLED, Neo QLED 4K, and Neo QLED 8K TVs within the 2025 lineup means you don’t need to opt for a Frame TV or Frame Pro. And that also means you might be able to save a bit, as Samsung’s lifestyle TVs do cost a bit more in some cases.
This also isn’t the first time Disney, Star Wars, Pixar, and National Geographic pieces of art have been available on Samsung’s TVs. In 2023, timed for the Disney 100 anniversary, Samsung dropped the limited-edition The Frame-Disney100 Edition in 55-inch, 65-inch, and 75-inch sizes. It was a standard 4K QLED Frame TV with a special, platinum metal bezel, but the real appeal was that it came with 100 pieces of Disney art ready to go out of the box.
No extra subscription was needed: you could look through the collections, pick your favorites, and then set them in Art Mode.
It remains to be seen how many pieces are included in this Art Store collection and whether they’re the same ones from that earlier collaboration. We’ve reached out to Samsung to ask, but for fans of Pixar – Toy Story, anyone? – Star Wars, National Geographic, and Disney at large, it’s certainly a fun addition.
With this latest drop, Samsung’s Art Store offers over 3,500 pieces of art to pick from, and on TVs with Art Mode, you can set your favorites to be shown when the TV is off and even mat them for a more dramatic effect, if you like.
Intel may be preparing to launch an unusual graphics card featuring two Arc B580 GPU chips and 48GB of memory, reports have claimed.
While this isn’t an official Intel product, it appears to be a custom design developed by one of Intel’s board partners, who remains unnamed due to non-disclosure agreements.
What makes this card notable is the return of a dual-GPU layout using consumer-class chips, something the industry hasn’t seen in several years.
48GB of memory hints at AI potential
This particular model reportedly combines two B580 GPUs, each paired with 24GB of memory, for a total of 48GB on a single card.
The intent doesn't appear to be gaming, which raises questions about the target audience. Given the high memory and compute potential, one possibility is that it’s intended for AI development or other high-throughput workloads.
Although 48GB still falls short of the memory capacity in top-tier professional accelerators, using consumer-grade GPUs could offer a cost-effective alternative for some training scenarios.
Still, without performance benchmarks or detailed architectural information, it’s difficult to determine whether this configuration could compete with even midrange professional GPUs.
For users comparing it against the best GPUs currently available, skepticism is warranted. No other board partners have been linked to similar designs, and it remains unclear whether this is a one-off experiment or part of a broader strategy.
This development may also interest content creators. With such a high memory ceiling, it could appeal to users seeking the best laptops for video editing or for Photoshop, assuming future mobile variants emerge.
But until more technical data is released, this card is best regarded as a curiosity rather than a sure bet.
Via Videocardz
Google I/O events are an often frustrating glimpse of the near future, with a lot of shiny software toys scheduled to land sometime "in the coming months". That often means a long wait of up to a year, so for Google I/O 2025 we've rounded up every new announcement that you can actually try today.
Naturally, some of the features below come with restrictions – a few are only available to try now in the US, while some are restricted to subscribers of Google's AI Pro or AI Ultra tiers. But many have also rolled out worldwide, so there are new features to take for a spin even if you don't currently pay Google a cent.
What's missing from the list below and coming at a later date? Quite a bit actually, including some of the more futuristic ideas like Google Beam and Android XR, and it also isn't clear how long we'll have to wait for a worldwide rollout of AI Mode for Search, Veo 3, Flow, Virtual Try On in the Shopping app, and Google's top-tier AI Ultra plan.
Still, there are quite a few things from Google I/O 2025 to keep us amused in the meantime, so here's a list of the ones that are available to try today...
1. AI Mode in SearchGoogle completely upended its golden goose, Search, at I/O 2025 this week, announcing several new features to stave off the threat of ChatGPT – and the biggest was arguably the US rollout of AI Mode.
If you're in the US and aren't seeing the new tab in Search (or in the search bar of the Google app), it's likely because Google said it'd be a gradual roll-out "over the coming weeks".
We've been using it for a while, though, and have put together a guide on how to master the new AI mode. It shouldn't be your go-to for everything, but we've concluded that "if you’re researching, planning, comparing, or learning, AI Mode can be a real comfort". Google hasn't yet commented on when it'll get a worldwide launch, but we'd imagine it'll be sometime this year.
2. Veo 3Arguably the biggest breakthrough moment at Google I/O 2025, Veo 3 is the first AI video generator that can deliver synchronized audio (including speech) alongside its video creations. And it's available to try now for a lucky few – if you're in the US and on the new AI Ultra plan.
Granted, that is a pretty small group of people, but we had to include it in this list because it is actually available today for those lucky peeps, and US enterprise users on the Vertex AI platform.
The amount of processing power required for Veo 3 could mean a relatively slow rollout elsewhere, and Google has hinted as much by also releasing new features for Veo 2 like the ability to give it reference scenes.
3. Google FlowNot sure how to weave all of your AI videos together into a cohesive whole? Google also addressed that issue with a new AI video editor called Flow – and like Veo 3, it's out now for AI Pro and Ultra subscribers in the US.
It's a bit like a Premiere Pro that you can operate entirely with natural language, to avoid learning keyboard shortcuts or complex menus. To get an idea of how it works, check out Google's short tutorial.
Impressively, it goes as far as giving you menus of camera moves like 'dolly out' and 'pan right', so you don't even have to describe them. Google has also at least promised that it's "coming soon" to more countries, so we're hopeful of a wider rollout in 2025.
4. Gemini LiveThe big smartphone story of Google I/O 2025 was the full rollout of one of the best AI tools around on Android and iOS – Gemini Live.
Like ChatGPT's Advanced Voice Mode, Gemini Live is an AI assistant that you can chat to using your voice. The most useful part, though, is that you can also give it eyes using your phone's camera to get help with whatever's in front of you or on your screen.
To conjure the assistant, open the Gemini app on iOS or Android, tap the Gemini Live icon (on the far right of the text input box), and start chatting away.
5. Imagen 4Google didn't just level up its AI-generated video at I/O 2025 – we also got a new Imagen 4 model for whipping up still images at higher resolution (now up to 2K) than before.
The latest Imagen (which is available now in the Gemini app, Whisk, Vertex AI and across Google Workspace) also shows that Google has been working hard on one of the model's main weaknesses – handling text.
This means that scenes involving typography should no longer be a jumbled mess of weird characters and look more realistic. While Imagen 4 is available to use for free, it does come with usage limits – you can expect 10-20 image generations on a free plan, while Gemini subscribers get a more generous 100-150 generations a day.
6. Gemini 2.5 FlashOkay, Gemini 2.5 Flash isn't brand new, but it was given a big upgrade at Google I/O 2025 – and it's now available to everyone to dabble with in the Gemini app.
In fact, Gemini 2.5 Flash is now the default model in Google's Gemini chatbot, because it's apparently the fastest and most cost-efficient one for daily use. Some of the specific improvements over its 2.0 Flash predecessor include a greater ability to understand images and text.
Wondering how it compares to ChatGPT 4o? We've already compared the two to help you see which might be the best for you. Spoiler: it's a close call, but Gemini 2.5 Flash is particularly appealing if you live in Google's world of apps and services.
7. JulesNeed a coding assistant to speed up your workflow? Google has just given Jules (first introduced as a Labs experiment last December) a wider public beta rollout, with no waiting lists.
Jules is a bit more than a coding copilot – it can autonomously beaver away on fixing bugs, writing tests and building new features without any input from you. It works 'asynchronously', which means you can set it on a task and come back later, rather than supervising it in real time.
Google says Jules isn't trained on your private code and that your data stays within its private environment. With autonomous agents on the rise, it certainly looks worth dabbling with if you could do with some coding assistance.
8. Virtual Try-OnGoogle Shopping has had a 'Try On' feature for clothes since 2023, but it got a big upgrade at Google I/O 2025. Rather than using virtual models to show you how your chosen clothes might fit, it now lets you upload a photo of yourself – and uses AI to help you avoid the hassle of changing rooms.
Once you've uploaded a full-length photo of yourself, you'll start to see little "try it on" buttons when you click on outfits that are served up in the Shopping tab's search results. We've taken it for a spin and, while it isn't flawless, it does give you a solid idea of what some clothes will look like on you. And anything that helps us avoid real-world shopping is fine by us.
9. Deep Research in GeminiGoogle brought its 'Deep Research' feature to Gemini Advanced subscribers (now Gemini Pro) in late 2024. And now the handy reports tool has been given a particularly useful upgrade – the ability to combine its research of public data from the web with any private PDFs or images that you upload.
Google provided the example of a market researcher uploading their own internal sales figures so they could cross-reference them with public trends. Unfortunately, you can't yet pull in docs or data from Google Drive and Gmail, but Google says this is coming "soon".
10. Gemini quizzesGoogle is particularly keen to get students using its Gemini app – not only did it extend its free access to Google AI Pro for school and university students to new countries including the UK, it also added a new quiz feature to help with revision.
To start a quiz, you can ask Gemini to "create a practice quiz" on your chosen subject. The most useful part is that it'll then make a follow-up quiz based on your weaknesses in the previous test. Not that you have to be studying to make use of this feature – it could also be a handy way to sharpen your pub quiz skills.
If you're a student in the US, Brazil, Indonesia, Japan or the UK, you can get your free year of Gemini AI Pro by signing up on Gemini's students page – the deadline is June 30, 2025, and you will need a valid student email address.
11. Google Meet speech translationWe're particularly looking forward to trying out Google Beam this year, with the glasses-free 3D video calls (formerly known as Project Starline) heading to businesses courtesy of HP's new hardware. But a new video calling feature you can try now is Google Meet's near real-time translations.
Available now for AI Pro and Ultra subscribers in beta, the feature will provide an audible translation of your speech (currently in English to Spanish, or vice versa) with a relatively short delay. It isn't seamless, but we imagine the delay will only reduce from here – and Google says more languages are coming "in the next few weeks".
12. Google AI Pro and AI Ultra plansGoogle switched up its AI subscription plans at Google I/O 2025, with 'Gemini Advanced' disappearing and being replaced by AI Pro and a new 'VIP' tier called AI Ultra.
The latter is currently US-only (more countries are "coming soon") and costs a staggering $250 a month. Still, that figure does give you "the best of Google AI", according to the tech giant, with AI Ultra including access to Veo 3 with native audio generation, Project Mariner, and the highest usage limits across its other AI products. You also get YouTube Premium and 30TB of storage thrown in.
The AI Pro tier ($20 a month) still gets you access to Gemini, Flow, Whisk, NotebookLM and Gemini in Chrome, but with lower usage limits and cloud storage of a mere 2TB.
If you're an AI power user and like the sound of AI Ultra, Google is currently offering it at 50% off for your first three months. Don't tempt us, Google...