PLUS: TSMC's massive AI profits, synthetic neurons for brain mapping, and Sal Khan's AI tutor reality check
Happy reading
A new omni-modal AI is aiming to read the room, analyzing not just what is said, but how it's said. Inter-1 can identify complex social signals like hesitation and confusion from video, audio, and text in real-time.
The technology promises to make AI interactions more effective in fields like sales and telehealth, where understanding subtext is crucial. But as AI learns to interpret unspoken human cues, what does this mean for the future of authentic communication and connection?
In today’s Next in AI:
Interhuman's AI that reads the room
TSMC’s massive AI-fueled profits
Google’s synthetic neurons for brain mapping
Sal Khan’s AI tutor reality check
AI that reads the room

Next in AI: Interhuman just released Inter-1, a new omni-modal AI that analyzes video, audio, and text in real-time. It goes beyond basic emotion detection to identify nuanced social signals like hesitation and confusion in human communication.
Explained:
Instead of the usual emotion wheel, Inter-1 identifies 12 distinct social signals, moving AI from just spotting “anger” to understanding complex cues like skepticism or engagement.
For every signal it detects, the model provides a rationale explaining which behavioral cues it observed, with 53% of its evidence coming from nonverbal inputs like posture and vocal tone.
Benchmarks show Inter-1 outperforming other frontier models on accuracy at near real-time speeds, and developers can already access it through the company's Signals API.
Why It Matters: This technology could significantly enhance AI's use in fields that depend on understanding human subtext, like sales, user research, and telehealth. It marks a clear step toward AI that can interpret the unspoken layers of conversation, making interactions more empathetic and effective.
The AI boom's bottom line

Next in AI: TSMC, the world's most critical chipmaker, just reported a massive 58% profit jump, smashing estimates and underscoring the insatiable global demand for the hardware powering the AI surge.
Explained:
The company's record-breaking first quarter generated $35 billion in revenue, fueled by major clients like Nvidia and Apple.
Advanced chips, including those for high-performance computing, now account for nearly 75% of total wafer revenue, demonstrating where the market's focus lies.
To meet relentless demand, TSMC is increasing its capital spending to the high end of $52-56 billion and adding a new advanced fabrication plant in Taiwan.
Why It Matters: TSMC's financial success serves as a clear barometer for the AI industry's exponential growth, reflecting the massive infrastructure investment underway. This continued expansion of the chip supply chain is critical for enabling the next wave of AI innovation and applications.
AI speeds up brain science

Next in AI: Google Research has developed MoGen, an AI model that generates realistic 3D neurons. Using this synthetic data to train other models is drastically cutting down the manual labor needed to map the brain.
Explained:
Mapping the brain is a colossal task; a complete mouse brain is a thousand times larger than the recently mapped fruit fly brain, making manual reconstruction nearly impossible.
By augmenting training data with synthetic neurons, Google's reconstruction AI reduced its error rate by 4.4%, saving an estimated 157 person-years of manual proofreading.
Google has open-sourced MoGen and plans to fine-tune it to create specific neuron geometries that are known to cause errors, further improving accuracy.
Why It Matters:
AI-generated data is proving to be a powerful solution for data bottlenecks in complex scientific fields. This approach will likely accelerate discoveries beyond neuroscience, particularly in any domain limited by the high cost of expert-labeled data.
The tutor isn't in
Next in AI: Khan Academy founder Sal Khan, one of AI’s biggest champions in education, is pumping the brakes on the AI tutor boom. He admits his chatbot, Khanmigo, was a "non-event" for many students who simply didn't use it or know how to ask for help.
Explained:
The core issue is an engagement gap—students often don’t seek out help or can't articulate their questions, a dose of reality for what some expert analysis called overblown hype from the start.
On-the-ground feedback from teachers shows students found the bot frustrating when it wouldn't give direct answers, with more using AI to find answers than to actually learn.
Khan Academy is now pivoting by integrating Khanmigo directly into practice exercises, acknowledging that the tool has limited impact without student initiative—a point raised in community discussions.
Why It Matters: This serves as a potent reality check for the AI hype cycle, demonstrating that user behavior and motivation are often the biggest hurdles to adoption. The key takeaway for builders is the need to shift from creating standalone AI tools to deeply integrating them into existing workflows where they can provide immediate, contextual value.
AI Pulse
Cloudflare unified its AI platform, providing a single inference layer for developers to access over 70 models from 12+ providers like OpenAI and Google with one API.
Myseum rebranded to Myseum.AI and announced a pivot to AI-managed personal media, sending its stock up over 150% in the latest example of a company chasing AI hype.
Laravel drew criticism from its open-source community for injecting promotional text for its commercial cloud platform directly into the core documentation of its official AI agent library.
Researchers highlighted a new "tool poisoning" attack vector, demonstrating how malicious instructions embedded in a tool's description can trick an AI agent into stealing sensitive files like SSH keys.