PLUS: The Pentagon's standoff with Anthropic and AI that scans your trash for data

Happy reading

AI coding assistants are delivering massive productivity gains, allowing a single engineer to complete a three-week project in just 37 minutes. This astonishing leap is forcing development teams to redesign their workflows from the ground up.

The engineer's role is shifting from a builder to an architect who directs AI agents. As the bottleneck moves from coding speed to human strategy, how will this fundamental change reshape the future of technical innovation?

In today’s Next in AI:

  • AI slashes a 3-week coding project to 37 minutes

  • The Pentagon’s AI standoff with Anthropic

  • New AI scans your garbage for consumer data

  • OpenAI hires creator of viral OpenClaw agent

The 37-Minute Project

Next in AI: AI coding assistants are accelerating software development at an astonishing rate, allowing one engineer to complete a project in 37 minutes that took three weeks just a year ago. This massive productivity leap is forcing engineering teams to rebuild their workflows from the ground up.

Explained:

  • The engineer’s role is shifting from writing code to defining objectives and setting up AI agents for success, with some teams adopting rules like "no coding before 10am" to prioritize strategy and prompt alignment.

  • This new approach delivers powerful business results: one tech team cut its staff by 30% while doubling its output, erasing its entire 12-month backlog of technical debt.

  • The underlying philosophy is also changing, as teams now build systems for agents as the primary users, where code acts as context for AI to learn from rather than a static library for humans to reuse.

Why It Matters: The primary constraint on innovation is no longer the time it takes to write code, but the speed at which humans can absorb change and provide clear direction. This elevates the engineer's role from a simple builder to an architect who orchestrates AI to achieve strategic goals.

Pentagon's AI Standoff

Next in AI: The Pentagon is threatening to sever its partnership with AI firm Anthropic in a standoff over AI safeguards. The dispute centers on Anthropic’s refusal to strip its remaining safety limits for military applications.

Explained:

  • The core disagreement stems from Anthropic's insistence on maintaining hard limits against using its Claude models for mass surveillance of Americans and fully autonomous weapons.

  • Meanwhile, competitors like OpenAI, Google, and xAI have reportedly agreed to lift their standard guardrails for the Pentagon, increasing pressure on Anthropic to conform.

  • Despite the friction, Claude holds a strategic advantage as the first model deployed on the Pentagon's classified networks under a potential $200 million contract.

Why It Matters: This conflict highlights the growing tension between AI developers' ethical principles and the demands of national security. The resolution will set a major precedent for how AI companies navigate military contracts and the development of responsible AI.

The Trash-to-Cash Pipeline

Next in AI: AI-powered garbage trucks, designed to spot recycling errors, could soon scan and monetize the specific products your household consumes.

Explained:

  • First introduced by Oshkosh at CES 2026, the system uses cameras and edge AI to scan waste, identify recycling errors, and link them via GPS back to your home address.

  • This technology is already in real-world use: Dallas, Texas, is equipping its garbage trucks with AI cameras to boost recycling compliance.

  • A simple software update could shift the AI's focus from just contaminants to identifying specific brands and products, creating a valuable new stream of consumer data from your curb.

Why It Matters: What you throw away is becoming a public, monetizable record with zero expectation of privacy. This development blurs the line between public service and corporate surveillance, opening a new front in the battle for personal data.

OpenAI Nabs OpenClaw Creator

Next in AI: Peter Steinberger, creator of the viral open-source agent project OpenClaw, announced he is joining OpenAI to help bring AI agents to a wider audience. His popular project will be moved into an independent foundation to ensure it remains open source.

Explained:

  • OpenClaw has seen incredible growth since its November 2025 launch, quickly gaining nearly 200,000 GitHub stars and enabling over 1.5 million agents by early this month.

  • This move is a talent acquisition, not a company buyout, with OpenAI committing to sponsor the new foundation and support the open-source project's independence.

  • Steinberger chose OpenAI after discussions with other major labs, seeing it as the fastest path to building safe, useful agents for everyone; the hire bolsters the company’s agentic AI push.

Why It Matters: This hire signals OpenAI is accelerating its focus on creating capable, personal AI agents that can act on a user's behalf. It also sets a powerful precedent for how major AI labs can support the open-source community without absorbing it.

AI Pulse

Curious Refuge became a key training ground for Hollywood, with its AI film academy enrolling 10,000 students looking to adapt their skills for the generative AI era.

1Password released SCAM, an open-source benchmark designed to test whether AI agents can complete workplace tasks without falling for security threats like phishing and social engineering.

Yann LeCun co-created the DjVu document format with fellow AI pioneers Yoshua Bengio and Léon Bottou, a technology that highlighted early deep learning principles in data compression.

Keep Reading