PLUS: Eli Lilly's new AI drug lab, Google's Nano Banana 2, and OpenAI's London hub
Happy reading
AI safety company Anthropic has publicly rejected a demand from the Pentagon to remove key safeguards from its models. The company is taking a firm stand against using its technology for specific military applications like autonomous weapons.
With a $200 million contract on the line and competitors reportedly agreeing to broader terms, Anthropic is taking a significant risk. Will this principled stand establish a new precedent for ethical AI development, or will the company be sidelined in the lucrative national security sector?
In today’s Next in AI:
Anthropic draws a red line with the Pentagon
Eli Lilly’s new AI drug discovery lab
Google’s faster Nano Banana 2 image AI
OpenAI’s major London research hub expansion
Anthropic's Red Line

Next in AI: AI safety leader Anthropic is publicly refusing the Pentagon's demands to remove safeguards from its AI models. The company is drawing a line on using its technology for mass domestic surveillance and fully autonomous weapons, citing democratic values and current tech limitations.
Explained:
Anthropic stresses it is not anti-defense and was the first to deploy frontier AI models in the U.S. government's classified networks for critical national security applications.
The Pentagon has threatened to terminate the contract, valued at $200 million, and label Anthropic a supply chain risk—a designation typically reserved for foreign adversaries.
This move positions Anthropic as an outlier, as competitors like OpenAI and xAI have reportedly agreed to broader terms allowing for any lawful use of their models, making Anthropic's action a principled stand on AI safety.
Why It Matters: This standoff marks a pivotal moment in the debate over AI ethics and who controls the guardrails on powerful, dual-use technologies. The outcome will likely set a major precedent for how AI companies navigate safety commitments and government partnerships in the national security space.
AI's Newest Drug Lab

Next in AI: Pharmaceutical giant Eli Lilly has switched on 'LillyPod', the world's most powerful AI supercomputer wholly dedicated to drug discovery. This new system, powered by over 1,000 of NVIDIA's latest GPUs, is designed to drastically accelerate the creation of new medicines.
Explained:
LillyPod's immense power comes from a DGX SuperPOD system running 1,016 NVIDIA Blackwell Ultra GPUs, which deliver over 9,000 petaflops of AI performance.
The system creates a "computational dry lab," allowing scientists to digitally simulate and test billions of molecular ideas at a massive scale before committing to slower physical experiments.
This marks the pharmaceutical industry's first DGX B300 deployment, and Lilly will make some of its AI models available to biotech partners through its TuneLab platform using federated learning.
Why It Matters: This massive investment in in-house AI infrastructure signals a major shift in how pharmaceutical companies approach research and development. By creating a computational factory, Lilly can explore potential medicines at a scale and speed that was previously unimaginable, potentially shortening timelines for new treatments.
Google's Fast-Pro Imaging

Next in AI: Google is rolling out Nano Banana 2, a new AI image model that combines the high-quality output of its Pro models with the rapid speed of its Flash models, making advanced image generation faster and more accessible.
Explained:
The new model brings Pro-level features, like accurate text rendering and consistent characters across images, to a much faster generation speed for quicker creative iteration.
It's now the default image generator in the Gemini app and is also being integrated into Google Search, Ads, and developer platforms like the Gemini API and Vertex AI.
To help identify AI-generated content, Google is coupling its SynthID watermarking with interoperable C2PA Content Credentials, giving users a clearer picture of an image's origin.
Why It Matters: This move makes high-end AI image generation more accessible, allowing creators and businesses to produce quality visuals without sacrificing speed. By integrating this powerful tool directly into its core products, Google is significantly lowering the barrier to entry for professional-grade creative work.
OpenAI Bets on London

Next in AI: OpenAI announced it's making London its biggest research hub outside the US. This move signals a major investment in the UK's talent and growing technology ecosystem.
Explained:
OpenAI is tapping into the UK’s rich talent pool from leading universities like Oxford and Cambridge to accelerate its work in AI.
The London team will focus on frontier AI research and developing next-generation models, intensifying the local rivalry with Google DeepMind.
UK officials called the expansion a “huge vote of confidence” that supports Britain’s ambition to become a global AI superpower.
Why It Matters: This move highlights the intense global competition for top-tier AI researchers and engineers. It solidifies London's position as a critical center for AI innovation, rivaling traditional tech hubs in the United States.
AI Pulse
Block slashed its workforce by 40%, with CEO Jack Dorsey directly attributing the 4,000+ layoffs to the increased efficiency of "intelligence tools" and predicting most companies will follow.
Researchers demonstrated a "Reverse CAPTCHA" attack where invisible Unicode characters can embed secret instructions in text, finding that tool-enabled models like Claude Sonnet and GPT-5.2 could be tricked into compliance.
Metacritic pulled a 9/10 review for Resident Evil Requiem from the site Videogamer after it was discovered the article was attributed to a fake, AI-generated journalist.
Niantic unveiled Niantic Spatial, a new venture building a large-scale geospatial model designed for machines, aiming to provide a "living map" for robots and AI agents to navigate and act in the physical world.