PLUS: Google's new AI coding tool gets hacked, business AI adoption stalls, and the 'AI-Free' movement grows

Good morning

OpenAI's rapid expansion is running into a hard financial reality: a new analysis projects the company will face a $207 billion funding gap by 2030, with the immense cost of computing power the biggest obstacle to its long-term viability.

Even under projections of massive revenue growth, income won't be enough to cover the staggering infrastructure expenses required to power its models. The question now: how does OpenAI close that gap while sustaining its ambitious trajectory?

In today’s Next in AI:

  • OpenAI’s $207B funding shortfall

  • Google’s AI coder hacked in 24 hours

  • Business AI adoption surprisingly flatlines

  • The rise of the ‘AI-Free’ creative label

OpenAI's $207B Question

Next in AI: A new HSBC analysis projects OpenAI will face a stunning $207 billion funding shortfall by 2030. The report raises serious questions about the company's long-term financial path despite its rapid growth.

Decoded:

  • The primary driver is the immense cost of computing, with OpenAI facing a $620 billion data center rental bill and total compute commitments reaching $1.4 trillion by 2033.

  • Even under optimistic assumptions, with annual revenue growing past $213 billion and millions of users converting to paid subscriptions, income still won't cover the enormous infrastructure expenses.

  • To power its models, OpenAI plans to use 36 gigawatts of AI compute by 2030, an energy demand on the scale of a state larger than Florida, given that one gigawatt can power roughly 750,000 homes (quick math below).
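For a rough sense of that scale, using only the figures above: 36 GW × 750,000 homes per gigawatt comes to roughly 27 million homes' worth of electricity demand.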

Why It Matters: OpenAI's aggressive expansion is outrunning the immense capital required to sustain it. That pressure leaves the company two paths to viability: secure unprecedented levels of funding, or build new, highly profitable revenue streams.

Google's AI Ground Control

Next in AI: Google’s new Gemini-powered AI coding assistant, Antigravity, was compromised less than 24 hours after its public launch. A security researcher discovered a severe vulnerability that allows an attacker to install persistent malware on a user's computer.

Decoded:

  • The exploit creates a backdoor that survives reinstallation and reloads whenever a user starts a new project, making it exceptionally persistent on both Windows and Mac systems.

  • This discovery highlights a troubling trend of AI products shipping with major security flaws, with one researcher comparing the current environment to the wild west of hacking in the late 1990s.

  • The tool's "agentic" design, which lets it act autonomously, amplifies the risk. And it isn't the only flaw: Google's own bug tracker lists other known issues related to data access.

Why It Matters: The rapid pace of AI development is clearly outpacing security testing, placing early adopters at risk. Users should exercise caution with new AI coding tools, especially those requiring deep access to local systems.

The AI Hype Check

Next in AI: Despite soaring investor optimism, a recent analysis of U.S. Census Bureau data reveals that the adoption of AI by businesses has surprisingly flatlined.

Decoded:

  • The survey behind the data, released on November 20th, asks firms whether they used AI to produce goods or services over the past two weeks.

  • Adoption has fallen most sharply among the largest businesses (those with over 250 employees), bucking expectations of enterprise-led growth.

  • Overall, the employment-weighted share of Americans using AI at work recently dropped by a percentage point, to just 11% (see the sketch below for how that weighting works).
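For readers wondering what "employment-weighted" means in practice, here is a minimal sketch. The firm sizes and adoption flags below are invented for illustration; they are not Census figures.

```python
# Employment-weighted adoption: each firm counts in proportion to its
# headcount, so one large adopter outweighs many small ones.
# All figures below are hypothetical, not Census Bureau data.

firms = [
    {"employees": 40,   "uses_ai": True},   # small studio
    {"employees": 500,  "uses_ai": False},  # mid-size firm
    {"employees": 9000, "uses_ai": False},  # large enterprise
    {"employees": 1100, "uses_ai": True},
]

total_employment = sum(f["employees"] for f in firms)
ai_employment = sum(f["employees"] for f in firms if f["uses_ai"])

share = ai_employment / total_employment
print(f"Employment-weighted AI adoption: {share:.1%}")  # -> 10.7%
```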

Why It Matters: This data serves as a critical reality check, suggesting the path to widespread AI integration is more complex than the hype suggests. For now, the excitement in boardrooms and on Wall Street isn't fully translating to the factory floor or the office cubicle.

The 'AI-Free' Label

Next in AI: In a pushback against industry-wide AI adoption, a growing number of indie game developers are using an 'AI-Free' label as a marketing tool to highlight their commitment to human creativity.

Decoded:

  • Indie developers created a "No Gen AI" seal to push back against claims that all game companies now use AI, assuring players their work is entirely human-made.

  • This movement directly contrasts with major studios like Ubisoft, which is developing its Ghostwriter tool to generate in-game dialogue, and Krafton, which is reorganizing to be an "AI-first" company.

  • Developers argue that the problems generative AI purports to solve are actually rewarding creative challenges, with one studio writing in a passionate statement that doing the work by hand is "more fun that way."

Why It Matters: The 'AI-Free' tag is becoming a new value proposition, turning human craftsmanship into a premium feature for consumers. This positions authentic, human-made art as a key differentiator in an increasingly automated market.

AI Pulse

NVIDIA claimed its technology is "a generation ahead" of rivals after reports that Meta is exploring a multibillion-dollar deal to use Google's custom TPU chips in its data centers.

OpenAI responded to a lawsuit over a teen's suicide by arguing the death was caused by a "misuse" of its technology and a violation of its terms of service, which prohibit discussing self-harm.

MIT found, in a new labor-simulation study, that current AI systems could already replace 11.7% of the U.S. workforce, representing $1.2 trillion in wages.

FoloToy resumed sales of its AI-enabled teddy bear after pulling it from shelves when researchers discovered it was chatting with children about sexual fetishes and other inappropriate topics.
