PLUS: The death of the SDLC, Jony Ive's OpenAI hardware, and an AI voice for ALS
Happy reading
An internal tool at Anthropic has grown into a massive business line. The company's Claude Code assistant is now on track to generate a stunning $2.5 billion in annualized revenue.
The tool's rapid adoption by major companies signals a huge demand for agentic AI that actively participates in workflows. Is this the moment where developer tools move from simple assistants to fully autonomous partners?
In today’s Next in AI:
Anthropic's side project hits $2.5B revenue
How AI is collapsing the SDLC
OpenAI's Jony Ive-led hardware push
An AI voice app for ALS patients
Anthropic's Billion-Dollar Side Project

Next in AI: What began as an internal side project at Anthropic has grown into a major business line, with its Claude Code assistant now hitting a staggering $2.5 billion annualized revenue run-rate.
Explained:
Claude Code has gone from internal experiment to market leader, hitting $1 billion in ARR within six months of its public launch and more than doubling since to a $2.5 billion annualized run-rate.
Unlike simple code suggestion tools, Claude Code is designed to work autonomously, with some users letting the AI handle tasks for over 45 minutes at a time before intervening.
The tool is gaining significant traction with large enterprises, now used by engineering teams at 8 of the top 10 Fortune 500 companies.
Why It Matters:
This rapid adoption signals a massive demand for AI tools that actively participate in the development workflow, not just assist it. Claude Code's success establishes agentic coding assistants as a highly valuable application, setting a new standard for developer productivity.
The SDLC is Dead

Next in AI: A provocative new take argues that AI agents aren’t just speeding up software development—they are completely collapsing the traditional software development lifecycle (SDLC) into a rapid, continuous loop of intent, building, and observation.
Explained:
The classic, sequential stages of development—requirements, design, coding, and testing—are merging into a single, simultaneous process. This is the beginning of the "AI dark factory" where a plain-language request can generate a fully deployed feature.
Human-centric rituals like pull request reviews are becoming bottlenecks, giving way to automated checks and agent-on-agent verification. With manual safeguards gone, observability becomes the most critical component, acting as the primary feedback mechanism for agents to correct their own errors.
The workflow is changing faster than the results: recent data shows 26.9% of production code is now AI-authored, yet overall engineering productivity gains hover around 10%, indicating the industry is still adapting.
Why It Matters: This fundamental shift redefines the role of an engineer from a hands-on coder to a "context engineer" who steers AI agents with high-level intent. The future competitive advantage won't come from optimizing obsolete stages, but from mastering this new, tighter loop of AI-driven creation and observation.
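The intent→build→observe loop described above can be sketched in a few lines. This is a hypothetical, illustrative stub, not any vendor's implementation: the "agent" simply cycles through candidate implementations, while in a real system an LLM would generate each attempt and the observed failures would feed back into its next prompt.

```python
# Minimal sketch of the intent -> build -> observe loop.
# The "agent" is a stub cycling through candidates; a real
# system would call an LLM and feed failures back as context.

def run_checks(impl):
    """Observation step: automated checks stand in for human PR review."""
    failures = []
    if impl(2, 3) != 5:
        failures.append("add(2, 3) should be 5")
    if impl(-1, 1) != 0:
        failures.append("add(-1, 1) should be 0")
    return failures

# Candidate implementations the stub agent tries in order.
candidates = [
    lambda a, b: a - b,   # first attempt: wrong
    lambda a, b: a * b,   # second attempt: still wrong
    lambda a, b: a + b,   # third attempt: passes all checks
]

def agent_loop(candidates, max_iters=5):
    """Build step: propose, observe failures, retry until checks pass."""
    for attempt, impl in enumerate(candidates[:max_iters], start=1):
        failures = run_checks(impl)
        if not failures:
            return impl, attempt
        # Observability is the feedback mechanism driving the next attempt.
        print(f"attempt {attempt} failed: {failures}")
    raise RuntimeError("agent could not satisfy the intent")

impl, attempts = agent_loop(candidates)
print(f"converged after {attempts} attempts")  # converged after 3 attempts
```

The point of the sketch: no human gate appears anywhere in the loop; the automated checks are the only arbiter, which is why the quality of observability determines whether the loop converges or spins.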
OpenAI's Hardware Future

Next in AI: OpenAI is jumping from software into hardware, reportedly developing a line of AI-powered devices designed to bring its models into the physical world. The first device is expected to be a camera-equipped smart speaker slated for a 2027 release.
Explained:
The flagship smart speaker will use its on-board camera to understand its environment, allowing it to identify objects, follow conversations, and even use facial recognition to authenticate purchases.
The push is led by former Apple design chief Jony Ive, whose design firm OpenAI acquired for $6.5 billion. A dedicated team of over 200 employees is driving the hardware initiative forward.
Beyond the speaker, OpenAI is developing smart glasses for 2028 and has prototyped a smart lamp, signaling a long-term vision to create an ecosystem of devices that offer proactive suggestions, such as recommending an earlier bedtime before a meeting.
Why It Matters: OpenAI is building a physical body for its digital brain, moving to control the end-to-end user experience instead of relying on other platforms. This move positions the company to directly challenge Amazon and Google for control of the ambient, AI-powered home.
AI Gives a Voice Back

Next in AI: A Pittsburgh man diagnosed with ALS created an AI voice app called 'Talk To Me, Goose' that clones a user's voice, allowing them to continue speaking naturally even after losing their physical ability to do so.
Explained:
The app uses voice-cloning technology from ElevenLabs and can create a realistic voice clone from just a few 15-second audio clips.
It's designed to eliminate the "awkward pause" in conversations by predicting intent and tone, and is available on Apple, Android, and Windows devices.
The app is distributed for free to people with ALS in the U.S. and Canada through a partnership with the Live Like Lou Foundation.
Why It Matters: This project shows how personal AI tools can directly address profound human challenges, moving beyond enterprise solutions to restore identity and connection. It also demonstrates a powerful new paradigm where individuals can leverage AI as a "teammate" to build and deploy life-changing applications without traditional development experience.
AI Pulse
Tesla approved a new $29 billion pay package for CEO Elon Musk, explicitly linking the award to the company’s strategic transition towards becoming a leader in AI and robotics.
CEOs boast that shrinking headcounts are a positive signal of AI adoption and efficiency, reframing staff cuts as a strategic accomplishment rather than a sign of trouble.
Rod Stewart sparked fan debate after using AI-generated visuals in a concert tribute that showed the late Ozzy Osbourne alongside other deceased music legends like Freddie Mercury and Amy Winehouse.