PLUS: The $100B AI debt bubble, a new poetry jailbreak, and AI companion dolls
Good morning
A new brain-inspired AI from startup Sapient Intelligence is challenging the industry’s “bigger is better” mantra. The company's tiny model is already outperforming massive systems from OpenAI on complex reasoning benchmarks.
Sapient's success suggests a major shift away from brute-force scaling. Could the next wave of AI capabilities come from more efficient and architecturally distinct models instead of ever-larger ones?
In today’s Next in AI:
Sapient’s brain-inspired AI outperforms OpenAI
The growing $100B AI debt bubble
A universal poetry jailbreak for LLMs
South Korea deploys AI companion dolls
The Brain-Inspired AI

Next in AI: Two 22-year-old founders turned down a multi-million dollar offer from Elon Musk to build Sapient Intelligence, a company whose tiny, brain-inspired AI is already outperforming massive models from OpenAI on complex reasoning tasks.
Decoded:
Sapient’s Hierarchical Reasoning Model (HRM) is microscopic, with just 27 million parameters, challenging the industry's belief that larger models are always more capable.
Unlike transformers that predict text, HRM uses a dual-system architecture inspired by the human brain to reason logically through problems from the ground up.
The model proved its abilities by achieving high performance on the ARC-AGI benchmark, a test designed to measure abstract problem-solving skills beyond simple pattern matching.
Why It Matters: Sapient's success suggests that the brute-force approach of scaling up models may not be the only path forward. More efficient, architecturally distinct models could unlock the next wave of AI capabilities.
The AI Debt Bubble

Next in AI: The AI industry's staggering cash burn is now being fueled by massive debt. OpenAI's partners alone are taking on nearly $100 billion in loans to build data centers, raising serious concerns about a potential market correction and economic fallout.
Decoded:
The debt isn't just with startups; it's an industry-wide shift from cash-funded growth to leveraging loans. On top of the debt held by partners like Oracle and CoreWeave, the five largest hyperscalers (Amazon, Google, Meta, Microsoft, and Oracle) have also taken on $121 billion in new debt this year to fund their AI operations.
This massive spend is running far ahead of actual returns, creating a disconnect between valuation and value. An MIT study found that 95% of organizations have received zero measurable return from their generative AI projects so far, highlighting the gap between investment and profitability.
The risk extends far beyond Silicon Valley, with some analysts warning the entire U.S. economy is being propped up by the promise of future AI gains. A potential downturn could hit the stock market hard, impacting pensions and retirement funds and even risking a broader recession.
Why It Matters: The AI gold rush is increasingly built on a fragile foundation of borrowed money rather than just big tech's cash reserves. The industry must soon deliver on its promise of productivity and profitability, or this debt-fueled expansion could trigger a painful correction with widespread economic consequences.
The Poetry Jailbreak

Next in AI: A new research reveals a universal method for bypassing the safety guardrails of all major AI models: phrasing dangerous requests as poems. This surprisingly effective technique works on chatbots from OpenAI, Meta, and Anthropic, exposing a significant vulnerability in current AI safety systems.
Decoded:
The poetic method achieved a 62% success rate with hand-crafted poems and worked across 25 different chatbots from leading AI labs.
This technique is a type of 'adversarial suffix' attack, where stylistic changes confuse a model's safety systems that are separate from its core reasoning abilities.
Researchers theorize that poetry uses low-probability word sequences that a model’s safety classifier isn't trained to flag, revealing a critical gap between a model's interpretive power and the fragility of its guardrails.
Why It Matters: This finding highlights a fundamental vulnerability in safety approaches that rely on recognizing patterns rather than understanding user intent. As models advance, developers must build more robust protections that can't be outwitted by creative language.
The Robo-Grandma

Next in AI: South Korea is deploying thousands of AI-powered companion dolls to tackle a growing mental health crisis among its isolated elderly population. These AI companions from companies like Hyodol provide conversation, health monitoring, and crucial emotional support.
Decoded:
South Korea is addressing a severe crisis as a “super-aged” society, holding the highest elderly suicide rate among developed OECD nations due to widespread social isolation.
The dolls build an emotional bond through AI-powered chat and responsiveness, with one study showing that regular use reduced depression and improved cognitive scores in seniors after just six weeks.
While ethical questions about dependency remain, the approach is part of a growing trend, with the global eldercare robot market projected to hit $7.7 billion by 2030.
Why It Matters: AI companions are moving from novelty to necessity, offering a scalable way to deliver empathetic care in aging societies. This trend signals a major shift in how technology can address fundamental human needs like connection and mental well-being.
AI Pulse
Ilya Sutskever stated that progress from scaling is flattening out and models generalize "dramatically worse than people," signaling a potential end to the era of improving AI simply by adding more data and compute.
Consumer groups issued warnings on AI-powered smart toys after a report found the FoloToy Kumma teddy bear discussed sexually explicit topics like bondage and roleplay.
Anthropic developed a new agent harness that enables models to perform complex, long-running tasks across multiple sessions by using an "initializer" agent to set up the environment and a "coding" agent to make incremental, well-documented progress.
Researchers found a malicious, guardrail-free LLM called WormGPT 4 being sold on darknet forums for as little as $220, capable of generating functional ransomware and phishing emails on demand.
