PLUS: A helium shortage threatens AI chips and NVIDIA's plan to save the grid

Good morning

OpenAI just secured a staggering $122 billion in new funding, pushing its valuation past $850 billion. The capital is fueling a major strategic shift toward a single, unified "AI superapp" for enterprise and developers.

With an unparalleled war chest aimed at cornering compute resources and dominating the enterprise market, the move positions the company for a landmark IPO. But does this level of funding create an insurmountable lead in the AI race?

In today’s Next in AI:

  • OpenAI's $122B war chest and superapp plan

  • A helium shortage threatens the AI chip supply chain

  • NVIDIA’s plan to turn data centers into grid assets

  • Google's TimesFM forecasting model

OpenAI's $122B War Chest

Next in AI: OpenAI has closed a massive $122 billion funding round, rocketing to an $852 billion valuation to secure its compute needs and accelerate its push to build a unified "AI superapp."

Explained:

- The funding was anchored by strategic partners Amazon, NVIDIA, and SoftBank, with continued participation from Microsoft and a broad coalition of global institutions.

- For the first time, OpenAI opened the round to individual investors, raising over $3 billion through bank channels and securing a spot in several ARK Invest ETFs.

- This capital infusion supports a major strategic shift toward a unified "AI superapp," concentrating resources on developer and enterprise tools while discontinuing projects like the Sora video app.

Why It Matters: This historic funding gives OpenAI an unparalleled war chest to secure the vast compute resources needed to stay ahead in the AI race. The move also signals a clear focus on commercializing its technology through an enterprise-first platform, setting the stage for one of tech's most anticipated IPOs.

The Gas Powering AI

Next in AI: A conflict in Iran is choking off the global helium supply from Qatar, revealing a critical real-world vulnerability in the supply chain for advanced AI chips.

Explained:

- The disruption cuts off about one-third of the world’s helium, a gas with no substitute for the cooling stages of semiconductor manufacturing.

- The shortage has sent prices soaring, with some users seeing costs double while suppliers begin rationing shipments to customers, including major Asian chip makers.

- This puts key manufacturing hubs like South Korea and Taiwan at risk: liquid helium’s short shelf life of 35 to 48 days makes it nearly impossible to stockpile.

Why It Matters: This helium shock exposes a critical vulnerability in the physical infrastructure that underpins AI's growth. The incident is a stark reminder that the future of advanced computing remains deeply dependent on fragile geopolitical stability and real-world supply chains.

NVIDIA's Power Play

Next in AI: NVIDIA is spearheading a collaboration with major energy companies and Emerald AI to transform power-hungry data centers into intelligent, flexible grid assets. This allows AI factories to dynamically interact with and support the electrical grid, addressing AI's growing energy footprint.

Explained:

- The system combines NVIDIA’s Vera Rubin DSX reference design with Emerald AI’s Conductor platform, creating an architecture that can generate AI tokens while dynamically responding to grid conditions.

- A key focus is improving performance per watt, a metric NVIDIA says it has boosted more than a millionfold from its 2012 Kepler GPU to the latest Vera Rubin platform.

- The initiative brings together an entire ecosystem, with partners like GE Vernova and Schneider Electric developing digital twins and validated designs to integrate power, cooling, and compute systems from day one.
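For scale, the millionfold efficiency claim implies a steep compounding rate. A quick back-of-envelope check (the 2026 endpoint for Vera Rubin is an assumption for illustration, not a figure from NVIDIA):

```python
# Back-of-envelope: what annual improvement compounds to 1,000,000x
# over the ~14 years between Kepler (2012) and Vera Rubin (assumed 2026)?
years = 2026 - 2012          # assumed span; pick your own endpoint
total_gain = 1_000_000       # "more than a millionfold" per the claim
annual_rate = total_gain ** (1 / years)
print(f"~{annual_rate:.2f}x better performance per watt each year")
# → ~2.68x better performance per watt each year
```

In other words, the claim amounts to nearly tripling performance per watt every year for well over a decade.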

Why It Matters: This approach directly tackles one of the biggest bottlenecks for AI's continued growth: its enormous energy consumption. By treating data centers as grid partners instead of just static power drains, this model can accelerate AI infrastructure deployment without overburdening the energy system.

Google's Crystal Ball

Next in AI: Google Research has released TimesFM, a foundation model designed specifically to predict future trends from time-series data. This specialized model makes high-quality forecasting more accessible for developers.

Explained:

- The latest version, TimesFM 2.5, is both more efficient and more capable, shrinking to a 200M-parameter model while extending its context window to 16,000 data points.

- Developers can access the model via Hugging Face and integrate it directly into production workflows through Google's BigQuery, simplifying scalable forecasting.

- Its zero-shot performance holds up in rigorous testing, with a March 2026 academic benchmark confirming it is competitive with other leading time-series models.
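The task TimesFM automates can be pictured simply: given a context window of past observations, produce a horizon of future values. The toy baseline below (plain Python, a seasonal-naive forecaster; deliberately not the TimesFM API, and all names are illustrative) shows only that input/output shape:

```python
def seasonal_naive_forecast(context, horizon, season_length):
    """Repeat the last observed seasonal cycle forward -- the classic
    baseline that foundation models like TimesFM aim to beat."""
    if len(context) < season_length:
        raise ValueError("need at least one full season of context")
    last_cycle = context[-season_length:]
    return [last_cycle[i % season_length] for i in range(horizon)]

# A series with a period of 4: the forecast replays the last full cycle.
history = [10, 12, 14, 12, 11, 13, 15, 13]
print(seasonal_naive_forecast(history, horizon=6, season_length=4))
# → [11, 13, 15, 13, 11, 13]
```

A zero-shot foundation model replaces the naive "repeat the last cycle" rule with a pre-trained network, but the contract, past values in and future points out, stays the same.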

Why It Matters: TimesFM gives developers a powerful, pre-trained tool to tackle complex forecasting without building models from scratch. This model's availability signals a shift toward making specialized AI readily available for everyday business intelligence and data analytics.

AI Pulse

Anthropic signed a Memorandum of Understanding with the Australian government to cooperate on AI safety research, share economic data, and explore investments in data center infrastructure.

California signed an executive order requiring AI companies that contract with the state to adhere to stringent safety and privacy guardrails, a direct counter to the White House's call to block state-level AI laws.

Mr. Chatterbox debuted as a 340M-parameter language model trained entirely from scratch on a corpus of over 28,000 out-of-copyright Victorian-era books from the British Library.

An AI-generated pull request from Shopify CEO Tobi Lütke, which claimed a 53% speed increase for the Liquid engine, remains open with failing tests and has been highlighted for its poor code quality.

Keep Reading