PLUS: Claude hunts for zero-day bugs and an AI blood test for dementia

Good morning

A major code leak from Anthropic has pulled back the curtain on its popular Claude Code tool. The leak revealed a confidential "undercover mode" designed to hide AI attribution, alongside a surprising lack of basic engineering practices for a billion-dollar product.

The incident gives us a rare glimpse into the inner workings of a top AI company, exposing both unreleased features and major development flaws. Does this peek behind the scenes raise new questions about the maturity and transparency of the tools we're increasingly relying on?

In today’s Next in AI:

  • Claude’s code leak exposes ‘undercover’ mode

  • Claude finds zero-day software bug

  • An AI blood test for dementia diagnosis

  • A plan to replace radiologists with AI

Claude's Code Exposed

Next in AI: Anthropic accidentally leaked the full source code for its popular Claude Code tool, revealing a treasure trove of unreleased features and surprising gaps in its engineering practices.

Explained:

- The code exposed a confidential "undercover mode" designed for internal Anthropic employees to strip all AI attribution and provenance data from commits, making AI-generated code appear human-written.

- Despite powering a tool that reached $1B in revenue, the codebase contained zero automated tests and a bug that silently wasted an estimated 250,000 API calls per day.

- The leak also provided a blueprint for KAIROS, an unreleased and powerful autonomous agent mode capable of running background tasks and managing its own memory.

Why It Matters:

This leak offers a rare, unfiltered look into the engineering culture behind a leading AI product, serving as a key signal for vendor maturity and transparency. It's a critical reminder that even the most advanced AI tools require robust human oversight and rigorous testing from the teams who use them.

Claude, The Bug Hunter

Next in AI: In a stunning display of capability, Anthropic's Claude AI found critical zero-day vulnerabilities in the Vim and Emacs code editors, kicking off a "Month of AI-Discovered Bugs."

Explained:

- Prompted by a researcher, Claude identified a remote code execution (RCE) bug in Vim that could compromise a machine simply by opening a file; Vim’s maintainers quickly released a patch.

- The AI then found a similar RCE vulnerability in Emacs, though its maintainers reportedly declined to fix it, attributing the flaw to a git integration; the researcher’s full advisory details the exploit.

- The discovery process was remarkably straightforward. The original prompt was as simple as, “Somebody told me there is an RCE 0-day when you open a file. Find it.”

Why It Matters: AI models now act as powerful automated tools for discovering complex software exploits. This new capability will accelerate the pace of vulnerability discovery, fundamentally altering the landscape for both security researchers and software developers.

The AI Blood Test

Next in AI: Researchers at Lund University have developed an AI model that can detect five different dementia-related diseases from a single blood sample. The breakthrough, detailed in a new paper, promises to make diagnosing complex neurodegenerative conditions much simpler and more accurate.

Explained:

- The model was trained using protein measurements from over 17,000 patients and control participants, making it one of the most comprehensive studies of its kind.

- It successfully identifies signatures for Alzheimer's, Parkinson's disease, ALS, frontotemporal dementia, and even previous strokes from blood biomarkers.

- The AI’s protein-based analysis was found to be more effective at predicting cognitive decline than traditional clinical diagnoses, pointing to a new era of biomarker testing.

Why It Matters: This approach could lead to earlier, less invasive, and more accessible diagnostics for millions of people at risk for neurodegenerative diseases. It also opens new doors for understanding the distinct biological pathways that drive these conditions.

Replacing The Radiologist

Next in AI: Mitchell Katz, CEO of America's largest public hospital system, stated at a recent forum that he is prepared to replace radiologists with AI for initial scans, pushing the job displacement conversation from theory to a concrete plan.

Explained:

- Katz envisions AI performing first-pass reads of mammograms and X-rays to generate major savings, with human experts only reviewing abnormal results.

- Another health system CEO supported the move, claiming its AI outperforms humans on low-risk screenings, missing cancer in only 3 out of 10,000 cases.

- Radiologists are pushing back hard, with some calling the idea dangerous and naive, warning that an AI-only approach would inevitably lead to patient harm.

Why It Matters: This marks a significant shift from AI as a diagnostic assistant to a potential replacement in a high-stakes medical field. The primary obstacles are no longer just technological, but now include regulatory approval, professional acceptance, and earning public trust.

AI Pulse

Meta released its BOxCrete AI model, an open-source tool for designing more sustainable and domestically sourced concrete mixes.

Researchers introduced a new framework connecting large language models with the Robot Operating System (ROS) to enable natural language control of robots.

Marc Andreessen argued that AI is a "silver bullet excuse" for layoffs, claiming most large companies were already overstaffed by 50-75% due to pandemic overhiring.

Perplexity CEO Aravind Srinivas suggested AI job displacement could be a positive, freeing people from jobs they dislike to pursue entrepreneurship.

Keep Reading