PLUS: Google's AI dev crackdown, cloning engineers from git history, and DeepMind's secret drug hunter

Happy reading

Today's lead story reframes the internet as a "dark forest": a place where autonomous AI agents hunt for vulnerabilities at machine speed, and where the safest infrastructure may be the kind nobody can find. The focus is shifting from building stronger walls to making critical systems invisible by default.

This challenges traditional security models and raises an uncomfortable question: if anything discoverable is an immediate target, how does that change the way we build and deploy technology online?

In today’s Next in AI:

  • The internet as a 'dark forest' for AI security

  • Google's crackdown on AI developer accounts

  • Cloning engineer personas from git history

  • DeepMind’s secret AI drug discovery model

The Dark Forest Internet

Next in AI: A new security paradigm frames the internet as a "dark forest" where autonomous AI agents hunt for vulnerabilities at machine speed, shifting the focus from building stronger walls to making infrastructure invisible.

Explained:

  • Tools are making this threat real, from open-source agents like PentAGI that automate penetration testing to AI models that find deep-seated vulnerabilities human experts miss.

  • This new reality challenges traditional security models, prompting a debate in the tech community over whether visibility itself, not just weak authentication, is the primary vulnerability.

  • The proposed solution is a shift to "Zero Visibility" architectures, a principle reinforced by groups like the Cloud Security Alliance, where infrastructure is made undiscoverable by default.
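To make "undiscoverable by default" concrete: the classic starting point is a deny-by-default firewall, where unsolicited probes are silently dropped rather than rejected, so a scanner gets no reply at all. A minimal sketch using the `nft` command-line tool (illustrative only; full zero-visibility architectures layer on techniques like single-packet authorization and keeping hosts out of public DNS):

```shell
# Create a filter table and an input chain whose default policy is "drop":
# packets that match no rule are silently discarded, with no RST or ICMP
# reply that would confirm the host exists.
nft add table inet filter
nft add chain inet filter input '{ type filter hook input priority 0; policy drop; }'

# Explicitly allow only loopback traffic and replies to connections
# this host itself initiated; everything else stays dark.
nft add rule inet filter input iif lo accept
nft add rule inet filter input ct state established,related accept
```

Note the choice of `drop` over `reject`: a reject response actively tells an automated scanner that something is listening, which is exactly the signal a zero-visibility posture tries to eliminate.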

Why It Matters:
This represents a fundamental change in cybersecurity, moving beyond reactive defense to proactive invisibility. For anyone building online, the default assumption must now be that anything visible is a target for automated exploitation.

Google's OpenClaw Crackdown

Next in AI: Google is abruptly suspending paid AI developer accounts for using the popular third-party tool OpenClaw, enforcing a strict policy that locks users out of Google services without warning.

Explained:

  • Google’s internal teams confirmed the suspensions are due to a Terms of Service violation, stating that using its Antigravity servers to power a non-Google product falls under a zero-tolerance policy, with no path to reinstatement.

  • The crackdown has locked paying subscribers out of their $249/month AI Ultra accounts and, in some cases, even affected connected services like Gmail, with little to no response from customer support.

  • This move is part of a wider trend to control access to proprietary models, following a similar ban by Anthropic just days earlier aimed at stopping third-party tools from bypassing its official API.

Why It Matters: This enforcement highlights the inherent risk for developers building on closed platforms, where sudden policy changes can disable their entire workflow overnight. Google's hardline stance could damage developer trust and push innovators toward more open or predictable competitors.

Cloning Devs from Git History

Next in AI: A new engineering practice is emerging where teams mine a project's git history to create AI agent “personas.” This allows agents to adopt the specific coding styles, patterns, and instincts of the senior developers who shaped the codebase.

Explained:

  • The process involves an ethnographic-style analysis of a top contributor’s commit logs, diffs, and messages to identify their unique development philosophy—what they fix, what they refactor, and how they communicate.

  • Instead of a rigid style guide, this analysis produces a persona file that describes the developer’s character, enabling an agent to make consistent decisions in novel situations because it knows “what kind of engineer it is.”

  • This trend is already appearing in tools like the Mysti VS Code extension, which features selectable developer personas that alter AI reasoning and problem-solving approaches for specialized tasks.
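The mining step described above can be sketched in a few lines: pull one contributor's `git log` with a recognizable subject prefix, then count the verbs they use and the paths they touch. A minimal illustration (the contributor, file paths, and log sample are hypothetical; real persona pipelines would feed the raw diffs and commit messages into an LLM to draft the persona file itself):

```python
from collections import Counter

# Subjects are prefixed via: git log --author="Ada" --name-only --pretty=format:'>>> %s'
# so they can be told apart from the file paths that --name-only emits.
SUBJECT_PREFIX = ">>> "

def summarize_contributor(log_text: str, top: int = 3) -> dict:
    """Turn one contributor's git log output into rough persona signals:
    the leading verb of each commit subject (what they tend to do) and
    the paths they touch most often (where they work)."""
    verbs, paths = Counter(), Counter()
    for line in log_text.splitlines():
        line = line.strip()
        if not line:
            continue
        if line.startswith(SUBJECT_PREFIX):
            subject = line[len(SUBJECT_PREFIX):]
            if subject:
                verbs[subject.split()[0].lower()] += 1
        else:
            paths[line] += 1
    return {"habits": verbs.most_common(top), "hotspots": paths.most_common(top)}

# Hypothetical sample of the git log output described above.
sample = """\
>>> Refactor session cache into its own module
src/session/cache.py

>>> Fix race in token refresh
src/session/cache.py
src/auth/tokens.py

>>> Refactor retry logic to use backoff
src/net/retry.py
"""

print(summarize_contributor(sample))
```

Even this crude tally surfaces a recognizable profile ("refactors more than she fixes; lives in `src/session/`"); the ethnographic analysis the practice describes goes further, reading the diffs themselves for taste and judgment.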

Why It Matters: This approach helps AI-powered teams generate code that feels human-written and coherent with the project's history. It also signals a future where an engineer's unique judgment and style can be modeled, scaled, and deployed long after they’ve moved on.

DeepMind's Secret Drug Hunter

Next in AI: Google DeepMind's spin-off, Isomorphic Labs, just unveiled a powerful drug-discovery AI in a technical report. Called IsoDDE, the proprietary model significantly outperforms its famous predecessor, AlphaFold 3, on key drug design tasks.

Explained:

  • The new model excels at predicting how potential drugs interact with proteins, especially for molecules that are very different from its training data—reportedly doubling the accuracy of AlphaFold 3 in these challenging cases.

  • IsoDDE can calculate a drug’s binding affinity—how strongly it will stick to a target protein—more effectively than both other AI models and slower, physics-based simulations.

  • The system also achieves state-of-the-art results in predicting the structure of antibodies, a critical component of modern medicine, and can even identify hidden binding sites on proteins from sequence data alone.

Why It Matters: This leap forward could dramatically accelerate the creation of new medicines, potentially unlocking treatments for previously 'undruggable' diseases. However, by keeping the model proprietary, Isomorphic Labs signals a major shift from the open-science approach that made the original AlphaFold a global success.

AI Pulse

Physicists uncovered new laws governing dusty plasma by applying a physics-tailored neural network, correcting longstanding theories about the forces between particles with over 99% accuracy.

Wikipedia’s co-founder Jimmy Wales dismissed Elon Musk's Grokipedia as a "cartoon imitation," arguing that human-vetted knowledge is essential to avoid the high hallucination rates that disqualify AI from writing encyclopedia articles.

Scout AI created a new class of AI agents designed to seek and destroy physical targets by operating exploding drones, moving agentic automation into defense applications.

Keep Reading