PLUS: Google’s two-chip AI strategy, the Pentagon’s $54B drone pivot, and startups prioritizing AI over salaries

Happy reading

OpenAI just dropped its next-generation flagship model, GPT-5.4, featuring major upgrades for professional tasks alongside a specialized new AI aimed squarely at accelerating scientific discovery.

The dual release highlights a significant shift from general-purpose tools to highly focused, domain-specific models. Will these expert AIs become the new standard for achieving breakthroughs in complex fields like science and cybersecurity?

In today’s Next in AI:

  • OpenAI's new GPT-5.4 and science AI

  • Google’s two-chip AI hardware strategy

  • Startups prioritize AI spend over salaries

  • The Pentagon's $54B pivot to AI drones

OpenAI Unveils GPT-5.4

Next in AI: OpenAI just launched its next-generation flagship model, GPT-5.4, bringing major upgrades for professional tasks. The company also released GPT-Rosalind, a purpose-built model designed to accelerate scientific discovery in the life sciences.

Explained:

  • GPT-5.4 improves reasoning and coding, supports a massive 1M token context window, and enables agents to operate software and complete complex workflows. It matches or exceeds industry professionals on 83% of tasks in the GDPval benchmark, and official deployment details are now available.

  • GPT-Rosalind is tailored for researchers, combining deep knowledge of chemistry, genomics, and biology with advanced tool use. It helps accelerate drug discovery and experimental planning by reasoning over scientific literature, databases, and experimental data.

  • OpenAI also quietly launched GPT-5.4-Cyber, a specialized variant fine-tuned for defensive cybersecurity. The model assists with tasks like vulnerability detection, malware analysis, and reverse engineering for verified customers.

Why It Matters: This dual release shows AI is moving beyond general-purpose tools and into highly specialized, domain-specific applications. These models act more like expert collaborators, creating opportunities to solve complex, real-world problems in science, coding, and security.

Google's Two-Chip Strategy

Next in AI: Google is escalating the AI hardware race, unveiling its eighth-generation TPUs as two distinct chips—one for training and one for inference—to power the next wave of large-scale AI agents.

Explained:

  • Google's new strategy splits its hardware, dedicating the TPU 8t for massive model training and the TPU 8i for fast, low-latency inference to support complex AI agents.

  • The inference chip is the standout, delivering 80% better performance per dollar than its predecessor and packing 3x more on-chip SRAM to cut memory-access lag when serving models.

  • This isn't just an in-house project; Google is also partnering with chipmakers like Broadcom and MediaTek to expand its custom silicon efforts and challenge Nvidia's market control.

Why It Matters: This signals a major shift in the AI hardware market, moving from a single dominant player to a world with specialized, competing chips. This growing competition will ultimately give developers more powerful and cost-effective options for building and deploying AI.

AI Bills Over People

Next in AI: A new trend called "tokenmaxxing" is emerging, where AI-native startups proudly spend more on AI usage from providers like Anthropic and OpenAI than on employee salaries. This is being framed as a new benchmark for lean, hyper-efficient growth.

Explained:

  • The core idea is to scale with intelligence, not headcount, exemplified by CEOs boasting about six-figure monthly AI bills for teams of fewer than five people.

  • This approach fuels the pursuit of "autonomous" companies, where AI agents handle core functions like engineering and sales, aiming to create billion-dollar ventures with minimal human staff.

  • The strategy is enabled by massive capital injections into AI infrastructure, like Amazon’s major investment in Anthropic, which supports the platforms these startups rely on.

Why It Matters: This signals a fundamental shift in how early-stage companies allocate capital, prioritizing automated intelligence over traditional human headcount. It also raises critical questions about the financial sustainability and true ROI of an AI-first operating model.

Pentagon's $54B AI Pivot

Next in AI: The Pentagon is making a historic shift toward AI-driven warfare, requesting over $54 billion for its new Defense Autonomous Warfare Group. This massive budget aims to accelerate the development of autonomous systems and achieve what officials are calling "Drone Dominance."

Explained:

  • The 24,000% funding increase establishes the Defense Autonomous Warfare Group (DAWG), a new department that absorbs previous initiatives to field low-cost drones for future combat.

  • The strategy centers on working with the private sector to develop and integrate autonomous technologies across air, land, and sea, partly driven by a push to move away from Chinese-made drone components.

  • Despite the investment, experts warn that current AI models have exploitable failures and that the military lacks a clear doctrine for deploying autonomous systems safely.

Why It Matters: This monumental spending signals a foundational shift in US military strategy, prioritizing autonomous capabilities over conventional forces. For the tech industry, it unlocks massive funding for drone and AI development but also intensifies the debate over the ethics and safety of autonomous weapons.

AI Pulse

Anthropic confirmed it is investigating a report that its powerful, unreleased cybersecurity model, Claude Mythos, was accessed by an unauthorized group through a third-party vendor environment.

Tesla boosted its planned 2026 capital expenditure to $25 billion, a nearly threefold increase intended to accelerate its ambitions in self-driving taxis, robotics, and its new "Terafab" chip factory.

Sony published research on its AI-powered robot, Ace, which can now compete with and defeat elite human table tennis players, marking a milestone for AI in fast-paced, real-world competitive sports.

DHS demonstrated for US lawmakers how "jailbroken" AI models can be used to generate step-by-step plans for terror attacks, highlighting the risks of AI systems with their safety guardrails removed.

Keep Reading