PLUS: Google's Gemini 3 takes the lead, an open-source math genius, and ads on ChatGPT
Good morning.
A new research paper introduces a language model that performs inference with zero electrical power. The model, called Entropica, runs not on silicon chips but through a passive optical system.
The research shifts the focus from software efficiency to the physical hardware that underpins AI. Could this innovation be the key to unlocking AI for power-constrained environments like remote sensors and edge devices?
In today’s Next in AI:
Entropica’s zero-power language model
Google’s Gemini 3 takes the lead
An open-source math olympiad champion
OpenAI prepares ads for ChatGPT
The Zero-Power AI

Next in AI: Researchers have unveiled Entropica, billed as the first generative language model able to perform inference with zero electrical power.
Decoded:
The model's forward pass is physically realized as a passive optical interferometer rather than a silicon processor, which is what lets inference run without electricity (a toy numerical sketch of the idea follows at the end of this section).
It was trained on the TinyStories dataset in under 1.8 hours on a single laptop GPU, making it cheap to train as well as to run.
The project is fully open-source, and the paper details a practical build path using off-the-shelf components like a $30 laser pointer.
Why It Matters: This research shifts the conversation from purely computational efficiency to the physical hardware that powers AI. It opens the door for hyper-efficient, specialized models that can operate in power-constrained environments like edge devices and remote sensors.
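To make the "passive optics as a forward pass" idea concrete, here is a toy numerical sketch. It assumes the interferometer can be modeled as a fixed unitary matrix acting on complex field amplitudes, with photodetection (intensity readout) supplying the only nonlinearity; the names, shapes, and readout layer are hypothetical illustrations, not details taken from the paper.

```python
# Toy sketch of a "passive optical" forward pass (assumptions, not the paper's design):
# the interferometer is modeled as a fixed unitary matrix on complex field amplitudes,
# and photodetection (intensity readout) provides the only nonlinearity.
import numpy as np

rng = np.random.default_rng(0)
VOCAB, DIM = 64, 16  # toy sizes

def random_unitary(n):
    """Random unitary via QR decomposition, standing in for a trained interferometer mesh."""
    z = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))  # normalize column phases

# "Weights": an input encoder, one passive optical layer, and a detector readout.
embed = rng.normal(size=(VOCAB, DIM))      # token -> field amplitudes
mesh = random_unitary(DIM)                 # passive optical layer (unitary transform)
readout = rng.normal(size=(DIM, VOCAB))    # detector intensities -> next-token logits

def forward(token_id):
    field = embed[token_id].astype(complex)    # encode the token as optical amplitudes
    field = mesh @ field                       # light propagates through the mesh (no power draw)
    intensity = np.abs(field) ** 2             # photodetectors measure intensity (nonlinearity)
    return intensity @ readout                 # linear readout to logits

logits = forward(token_id=3)
print("predicted next token:", int(np.argmax(logits)))
```

In this framing, the only step that would consume electricity in a physical build is the readout, which is why a passive optical forward pass can be described as zero-power inference.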
Google's AI Comeback

Next in AI: Google is reclaiming momentum in the AI race with its new Gemini 3 model, which now tops leaderboards and is drawing praise from prominent industry leaders.
Decoded:
The model is earning high praise from industry leaders, with Salesforce CEO Marc Benioff stating he's not going back to ChatGPT after experiencing Gemini 3's speed and reasoning.
This success is fueling demand for Google's custom hardware, with companies like Anthropic significantly expanding their use of Google's TPU chips to train their own models.
While Google's specialized TPUs aren't a direct replacement for Nvidia's versatile GPUs, the growing interest in these ASICs signals a broader industry trend toward diversifying AI hardware.
Why It Matters: Google has successfully shifted the narrative from playing catch-up to leading the pack, proving the AI race is far from over. This resurgence introduces more competition into the ecosystem, not just in AI models but also in the foundational hardware that powers them.
The Open-Source Math Whiz

Next in AI: AI company DeepSeek has released an open-weight model that achieved a gold-medal standard at the International Mathematical Olympiad, making top-tier mathematical reasoning abilities accessible to all.
Decoded:
The model performs at a gold-medal standard on International Mathematical Olympiad problems, matching capabilities previously demonstrated only by closed systems from giants like Google and OpenAI.
Instead of just producing answers, its training process focuses on proof rigor, using a verifier-generator loop that trains the model to check and refine its own reasoning steps (a generic sketch of the pattern follows at the end of this section).
Crucially, the model is available on Hugging Face under a permissive license, allowing developers to audit, build upon, and deploy this powerful reasoning engine.
Why It Matters: This release democratizes access to elite AI mathematical reasoning previously held by only a few top labs. It signals a broader shift toward creating transparent and verifiable AI systems that can justify their conclusions.
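For readers unfamiliar with the pattern, here is a minimal sketch of a generator-verifier refinement loop. The generate_proof and verify_proof functions are hypothetical placeholders; this shows the general shape of such a loop, not DeepSeek's actual training pipeline.

```python
# Minimal sketch of a generic generator-verifier refinement loop.
# `generate_proof` and `verify_proof` are hypothetical placeholders, not DeepSeek's code.
from dataclasses import dataclass

@dataclass
class Verdict:
    ok: bool
    feedback: str  # e.g. the first step the verifier could not justify

def generate_proof(problem: str, feedback: str | None = None) -> str:
    """Placeholder for the generator model: drafts (or revises) a candidate proof."""
    suffix = f" [revised after: {feedback}]" if feedback else ""
    return f"proof sketch for {problem!r}{suffix}"

def verify_proof(proof: str) -> Verdict:
    """Placeholder for the verifier: checks each step for rigor and flags the first gap."""
    return Verdict(ok="revised" in proof, feedback="step 2 asserts a bound without justification")

def refine(problem: str, max_rounds: int = 3) -> tuple[str, bool]:
    """Generate, verify, and feed the verifier's critique back into the generator."""
    feedback = None
    for _ in range(max_rounds):
        proof = generate_proof(problem, feedback)
        verdict = verify_proof(proof)
        if verdict.ok:
            return proof, True        # accepted: usable as a high-rigor training example
        feedback = verdict.feedback   # rejected: the critique conditions the next attempt
    return proof, False

proof, accepted = refine("show that n^2 + n is even for all integers n")
print(accepted, "-", proof)
```

In a real training setup, accepted proofs would be kept as high-rigor training data and the verifier's critiques would drive further fine-tuning; the loop above only illustrates the control flow.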
ChatGPT's Ad Break

Next in AI: OpenAI appears to be preparing ads for ChatGPT: leaked code in the Android app beta revealed references to search ads and ad carousels. Rolling out ads would mark a significant shift in monetization strategy for the AI chatbot, which serves 800 million weekly users.
Decoded:
The leak uncovered specific ad-related strings, including "search ad" and "search ads carousel", embedded in the Android beta, suggesting ads could appear alongside search-like responses.
With 800 million weekly active users generating approximately 2.5 billion prompts daily, OpenAI has built a massive advertising opportunity that could rival traditional search engines.
ChatGPT's conversational context gives it a unique advantage for hyper-targeted advertising since it understands user intent and preferences through extended dialogue rather than simple keyword searches.
Why It Matters: OpenAI is positioning itself to compete directly with Google's advertising empire by monetizing its massive user base. Users on the free tier should expect an ad-supported experience, while premium subscribers will likely remain ad-free.
AI Pulse
The OCaml project rejected a massive 13,000-line AI-generated pull request, citing copyright concerns, the difficulty of reviewing AI-generated code, and a mismatch with the project's development practices.
Pangram Labs estimated that 21% of peer reviews for the major AI conference ICLR 2026 were fully AI-generated, and that over half showed at least some signs of AI use.
An MIT study found that students who used ChatGPT to write essays produced vaguer, less well-reasoned work and showed lower levels of brain activity, highlighting the potential cognitive costs of over-relying on AI in education.
