PLUS: Disney sparks creator backlash, AI in the classroom, and the rise of 'vibe coding'
Good morning
A rare glimpse into OpenAI's finances has revealed the staggering price tag attached to running its frontier AI models. Leaked documents show a massive payout to Microsoft, highlighting the immense cloud computing costs required to power services like ChatGPT.
With these expenses accelerating, the numbers raise serious questions about the long-term profitability of foundational AI. Does this immense financial barrier mean the future of frontier development belongs exclusively to a handful of tech giants and their partners?
In today’s Next in AI:
OpenAI’s massive Microsoft payout
Disney's AI plan sparks creator backlash
The rise and debate of ‘vibe coding’
OpenAI's Leaked Finances Reveal Staggering Costs

Next in AI: Leaked documents reveal OpenAI paid Microsoft nearly $866 million in revenue share in the first nine months of 2025, offering a rare glimpse into the massive operational costs behind its powerful AI models. This highlights the steep price of cloud computing required to run services like ChatGPT at scale.
Decoded:
Payments to Microsoft jumped from $493.8 million for all of 2024 to $865.8 million in just the first three quarters of 2025, signaling rapidly accelerating compute expenses.
The figures are based on a reported deal where OpenAI shares 20% of its revenue with Microsoft, its primary cloud and compute partner; a rough sketch of the revenue this rate implies follows this list.
These numbers imply OpenAI's inference costs—the expense of running models for users—could be outpacing its revenue, raising questions about the long-term profitability of foundational models.
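As a quick gut check, the reported 20% rate lets you back out the revenue these payments imply. The sketch below is a back-of-the-envelope estimate, not a figure from the leaked documents; it simply divides the disclosed payments by the assumed 20% share.

```python
# Back-of-the-envelope: revenue implied by the reported 20% revenue
# share. Assumes the leaked payment figures and the 20% rate quoted
# above are both accurate.

REVENUE_SHARE_RATE = 0.20  # Microsoft's reported cut of OpenAI revenue

payments_usd = {
    "full-year 2024": 493.8e6,  # $493.8M paid over 12 months
    "Q1-Q3 2025": 865.8e6,      # $865.8M paid over 9 months
}

for period, paid in payments_usd.items():
    implied_revenue = paid / REVENUE_SHARE_RATE
    print(f"{period}: implied revenue ~${implied_revenue / 1e9:.2f}B")

# full-year 2024: implied revenue ~$2.47B
# Q1-Q3 2025: implied revenue ~$4.33B
```

Read that way, the payments imply roughly $2.5 billion of revenue in all of 2024 and about $4.3 billion in just the first nine months of 2025, consistent with the rapid acceleration noted above.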
Why It Matters: The eye-watering costs reveal the immense financial barrier to competing at the frontier of AI. This reality check underscores the strategic importance of partnerships with cloud giants and fuels the debate over the AI industry's path to sustainable profitability.
Disney's AI Content Play
Next in AI: Disney CEO Bob Iger announced plans to let Disney+ subscribers create their own AI-generated content, sparking an immediate and intense backlash from artists and creators concerned about the future of their industry.
Decoded:
The Owl House creator Dana Terrace led the charge against the proposal, telling her followers to unsubscribe from Disney+ and even urging fans to "Pirate Owl House" in protest.
Iger framed the potential feature as a way to boost viewer engagement, giving users new tools to interact with the platform and its content library.
The move taps into widespread anxiety within creative fields, as a recent report blamed the rise of AI for over 10,000 job cuts in the past year.
Why It Matters: This move highlights the growing tension between media giants looking to leverage AI for user engagement and the creative community fearing their craft will be devalued. The outcome of this debate could set a significant precedent for how major streaming platforms handle generative AI in the future.
The 'Vibe Coding' Debate

Next in AI: Collins Dictionary crowned "vibe coding" its 2025 word of the year, cementing the rise of AI-assisted programming. The trend is sparking a fierce debate over the future of software development, pitting intuitive, conversational methods against more rigid, structured approaches.
Decoded:
Vibe coding, a term coined by OpenAI co-founder Andrej Karpathy, describes using natural language to create software, allowing a developer to "forget that the code even exists." This approach prioritizes iterative conversation and rapid experimentation with an AI agent.
The counter-movement is Spec-Driven Development (SDD), where AI agents first generate detailed documentation and plans before writing any code, using frameworks like GitHub's Spec-Kit. Critics argue this is a return to the inflexible and bureaucratic Waterfall model.
This debate presents two visions for AI's role: one where AI is a collaborative partner in an agile process, and another where it's a pure executor following a rigid plan. Proponents of the former are championing an approach they call Natural Language Development. (A toy sketch contrasting the two loops follows this list.)
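To make the contrast concrete, here is a toy sketch of the two loops. The llm() helper is purely hypothetical, a stand-in for any coding agent; it is not Spec-Kit's or any vendor's actual API.

```python
# Toy contrast between the two workflows. llm() is a hypothetical
# stand-in for a coding agent, not a real API.

def llm(prompt: str) -> str:
    """Placeholder for a call to an AI coding agent."""
    return f"<model output for: {prompt[:40]}...>"

def vibe_code(idea: str, feedback_rounds: list[str]) -> str:
    """Vibe coding: conversational iteration; the chat shapes the code."""
    code = llm(f"Write code for: {idea}")
    for feedback in feedback_rounds:
        # Each round is a casual instruction, not a formal change request.
        code = llm(f"Revise this code:\n{code}\nInstruction: {feedback}")
    return code

def spec_driven(idea: str) -> str:
    """Spec-driven development: spec and plan exist before any code."""
    spec = llm(f"Write a detailed specification for: {idea}")
    plan = llm(f"Turn this spec into a step-by-step plan:\n{spec}")
    return llm(f"Implement exactly this plan, no deviations:\n{plan}")

print(vibe_code("a CLI todo app", ["make it colorful", "add due dates"]))
print(spec_driven("a CLI todo app"))
```

The difference is where authority lives: in vibe coding the conversation is the source of truth, while in SDD the generated spec and plan are, which is why critics hear echoes of Waterfall.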
Why It Matters: This isn't just a debate over workflow; it's about defining the fundamental relationship between human creativity and machine execution. The outcome will shape how the next generation of software is built and who gets to build it.
AI in the Classroom

Next in AI: A report from Anthropic shows how university educators are using its AI model, Claude, to develop course materials, build custom interactive tools, and manage administrative work, highlighting a trend of AI shifting from a student tool to a professor's assistant.
Decoded:
Educators are moving beyond chatbots and building their own tools, using features like Claude Artifacts to create interactive chemistry simulations, automated grading rubrics, and data visualization dashboards.
A clear pattern is emerging: professors use AI as a collaborator for creative tasks like lesson planning and grant writing, while delegating more routine administrative work, such as financial management, to full automation.
Automated grading remains a point of tension, with data showing nearly half of grading tasks are heavily automated, even as many educators express ethical concerns and find AI least effective for assessment.
Why It Matters: This signals a shift where AI acts less like an assistant and more like a creative partner in higher education. As a result, educators are being pushed to redesign their courses to focus on skills that AI cannot replicate, like critical evaluation and conceptual understanding.
AI Pulse
PwC highlighted a report from Anthropic detailing the first known AI-orchestrated cyber espionage campaign, where a Chinese state-sponsored group used an AI toolchain to autonomously execute 80-90% of a multi-stage attack.
OpenAI updated ChatGPT to better follow custom instructions, specifically allowing users to finally prevent its notorious overuse of the em dash—a change CEO Sam Altman called a "small-but-happy win."
Meta claimed in response to a $359M copyright lawsuit that nearly 2,400 adult films torrented on its IP addresses were for employees' "private personal use," not for training its AI models.
