PLUS: NVIDIA’s open-source agent play and Anthropic’s standoff with the Trump administration
Happy reading
The world's first commercially available computer running on human brain cells is now on the market, and it has already been programmed to play Doom. An Australian biotech startup is selling these "wetware" machines, which use 800,000 lab-grown neurons.
This biocomputing breakthrough turns a futuristic concept into a tangible research tool. But does the accessibility of programming living neurons signal a new era for both drug discovery and our fundamental understanding of the brain?
In today’s Next in AI:
A computer made of brain cells plays Doom
NVIDIA’s open-source agent play
Anthropic’s standoff with the Trump administration
An AI rewrite challenges copyleft licensing
The Wetware Computer

Next in AI: Australian biotech startup Cortical Labs is now selling the world's first commercially available biological computer. The machine runs on 800,000 lab-grown human brain cells and has already learned to play Doom.
Explained:
Cortical Labs has priced its CL1 unit at $35,000 and is also offering cloud access for $300 per week, turning this biocomputing breakthrough into an accessible research tool.
The platform's accessibility was proven when an independent developer used Python to get the system playing Doom in about a week, demonstrating that programming it doesn't require a neuroscience PhD.
Unlike silicon-based AI, the CL1 uses neurons derived from stem cells that learn and adapt organically, making it a powerful tool for studying neural processes and testing new drugs on actual human tissue.
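The Doom demo reportedly works as a closed loop: read activity from the neurons, map it to a game action, and feed the outcome back as stimulation. The sketch below illustrates that pattern only. The class and function names are hypothetical stand-ins (with a random stub in place of live tissue), not Cortical Labs' actual Python API.

```python
import random

class StubNeuralDevice:
    """Stand-in for the biological computer; NOT Cortical Labs' real API."""

    def read_spike_counts(self, n_channels: int = 8) -> list[int]:
        # A real device would return electrode activity from live neurons.
        return [random.randint(0, 20) for _ in range(n_channels)]

    def stimulate(self, pattern: str) -> None:
        # A real device would deliver an electrical feedback pattern here.
        pass

def choose_action(spikes: list[int]) -> str:
    # Map whichever half of the electrode array is more active to a command.
    left, right = sum(spikes[:4]), sum(spikes[4:])
    return "turn_left" if left > right else "turn_right"

def game_step(action: str) -> bool:
    # Stub for the game engine: did the action help? (random here)
    return random.random() < 0.5

device = StubNeuralDevice()
for _ in range(10):
    action = choose_action(device.read_spike_counts())
    success = game_step(action)
    # Closed-loop feedback: predictable stimulation as "reward,"
    # unstructured noise as "penalty" (the DishBrain-style training signal).
    device.stimulate("predictable" if success else "noise")
```

The point is the loop shape, not the names: sense, act, reinforce. That structure is why a lone developer with Python could get a game running in a week.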
Why It Matters: This marks a major step in moving biocomputing from a theoretical concept to a practical research product you can actually buy and program. It gives scientists a new way to experiment with living neural networks, potentially accelerating breakthroughs in drug discovery and our understanding of the brain.
NVIDIA’s Open Agent Play

Next in AI: NVIDIA is jumping into the AI agent space with NemoClaw, an open-source platform aimed at enterprises. The initiative allows companies to build and securely deploy their own AI agents to automate workforce tasks.
Explained:
The move addresses growing enterprise interest in autonomous agents, or “claws,” while mitigating the security risks that have made companies hesitant to adopt them.
NemoClaw will be an open-source AI platform accessible to companies like Salesforce and Adobe, regardless of whether they use NVIDIA chips, and will include built-in security and privacy tools.
This is a strategic play for NVIDIA to expand beyond its proprietary CUDA platform, aiming to maintain its dominance in AI infrastructure as more labs develop their own custom chips.
Why It Matters: By providing a secure framework for building agents, NVIDIA is making it safer and easier for businesses to adopt powerful workflow automation. This positions the company as a core software and infrastructure provider for the next generation of enterprise AI.
The Anthropic Standoff

Next in AI: AI safety leader Anthropic is taking legal action against the Trump administration after the Pentagon designated it a security risk for refusing to allow its Claude model to be used for mass surveillance or autonomous weapons.
Explained:
The dispute began after Anthropic rejected the Pentagon's demand for "all lawful use" of its AI, holding to its ethical red lines against specific military applications.
In response, the administration labeled Anthropic a supply chain risk—a designation typically reserved for foreign adversaries—and ordered federal agencies to stop using Claude.
The conflict has divided the industry, with competitor OpenAI signing a Pentagon deal just hours later, while dozens of researchers from top labs filed a brief supporting Anthropic's stance.
Why It Matters: This high-stakes legal battle highlights the growing tension between AI developers' ethical commitments and government demands for powerful, unrestricted technology. The outcome will likely set a major precedent for how AI companies navigate national security contracts and define responsible deployment.
AI vs. Copyleft
Next in AI: The AI-assisted rewrite of a popular Python library has ignited a fierce debate over the future of open-source licensing. The new version is 48x faster but sheds its "share-alike" license, pitting legal permissions against long-held community ethics.
Explained:
The maintainer of chardet, a library used in millions of projects, used Anthropic's Claude to rewrite it from the ground up by feeding the AI only its public API and test suite.
This move has opened a complex legal and ethical discussion about whether an AI-generated reimplementation can be considered a new work, free from the obligations of the original copyleft license.
The case highlights broader implications for all software licensing, as experts warn that cheap AI reimplementation could render copyleft protections unenforceable and fundamentally alter the economics of software development.
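To make the "public API plus test suite" approach concrete, here is a toy analogy. This is not chardet's actual code; it only mimics the shape of its public entry point, `chardet.detect()`, which returns a dict with `"encoding"` and `"confidence"` keys. A rewrite is pinned only by that surface and the tests against it, and that decoupling from the original source is what the licensing argument turns on.

```python
# Toy stand-in for chardet's public API surface (grossly simplified,
# handling only a BOM, ASCII, and UTF-8). Not the real implementation.
def detect(data: bytes) -> dict:
    """Guess the encoding of a byte string, chardet-style."""
    if data.startswith(b"\xef\xbb\xbf"):
        return {"encoding": "UTF-8-SIG", "confidence": 1.0}
    try:
        data.decode("ascii")
        return {"encoding": "ascii", "confidence": 1.0}
    except UnicodeDecodeError:
        pass
    try:
        data.decode("utf-8")
        return {"encoding": "utf-8", "confidence": 0.9}
    except UnicodeDecodeError:
        return {"encoding": None, "confidence": 0.0}

# A test suite like this is the behavioral contract a rewrite must match:
assert detect(b"hello")["encoding"] == "ascii"
assert detect("héllo".encode("utf-8"))["encoding"] == "utf-8"
assert detect(b"\xef\xbb\xbfhi")["encoding"] == "UTF-8-SIG"
```

Any implementation, human- or AI-written, that passes the tests is interchangeable at the API boundary, which is precisely why experts worry the original code's copyleft terms may no longer attach.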
Why It Matters: This event sets a precedent that tests the resilience of open-source social contracts in the age of generative AI. As code generation becomes effortless, the industry must now grapple with how to protect collaborative work when technology can satisfy the letter of a license while sidestepping its spirit.
AI Pulse
OpenAI acquired Promptfoo, an AI security platform, to integrate its red-teaming and vulnerability testing tools directly into the OpenAI Frontier platform for enterprise agents.
Grammarly drew criticism for its "expert review" feature, which uses AI agents to generate writing advice inspired by real journalists and authors without their permission.
Simile raised $100M in Series A funding to scale its platform that creates "agentic twins"—AI agents trained on human data—for companies like CVS to use in polling and market research.
Researchers created the first high-precision chemical map of the Moon’s far side by training an AI model on spectral and geological data from China's Chang'e-6 mission.