PLUS: NVIDIA's new AI plumbing, the rise of AI 'slop', and Google's 4GB silent Gemini download
Happy reading
Anthropic's user demand surged at an astounding 80-fold annualized rate in Q1, forcing the company to find the compute power it needs in a very unusual place: space. A new partnership with SpaceX is designed to solve this massive infrastructure problem.
The deal provides an immediate boost to Anthropic's capabilities on Earth, but it also signals a new frontier in the AI race. Is the key to winning now less about the algorithm and more about securing unprecedented levels of raw computational power, wherever it can be found?
In today's Next in AI:
Anthropic's space race for AI compute
NVIDIA's fix for AI data traffic jams
The rise of AI-generated 'slop'
Google's silent 4GB Gemini download
Anthropic's Space Race

Next in AI: To handle staggering 80-fold annualized growth in demand in Q1, Anthropic announced a partnership with SpaceX to secure desperately needed compute power. The deal provides immediate access to the Colossus 1 data center and even opens the door to developing AI compute capacity in orbit.
Explained:
Anthropic CEO Dario Amodei revealed the company had planned for a 10-fold increase but instead saw an 80-fold surge in Q1, a level of growth he described as "just crazy" and one that overwhelmed its infrastructure.
The new capacity is already benefiting users, as Anthropic immediately doubled usage limits for Claude Code subscribers and significantly raised API rate limits for its powerful Opus models.
Beyond Earth, the deal includes a stated interest in jointly developing orbital AI compute. The immediate terrestrial boost comes from over 300 megawatts of power, enough to run more than 220,000 NVIDIA GPUs.
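For a rough sense of scale, that power-to-GPU figure can be sanity-checked with a quick back-of-envelope calculation. The per-GPU draw used below (about 1.35 kW, covering the accelerator plus server, networking, and cooling overhead) is our assumption for illustration, not a number from the announcement:

```python
# Back-of-envelope: how many GPUs can ~300 MW of facility power support?
# Assumption (ours, not from the announcement): each deployed GPU draws
# roughly 1.35 kW all-in once server, networking, and cooling overhead count.
facility_power_mw = 300           # reported capacity from the deal
watts_per_gpu_all_in = 1_350      # assumed all-in draw per GPU, in watts

gpus_supported = facility_power_mw * 1_000_000 / watts_per_gpu_all_in
print(f"~{gpus_supported:,.0f} GPUs")  # ~222,222 — consistent with "more than 220,000"
```

Under that assumption the reported numbers line up; a leaner per-GPU overhead would push the count even higher.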
Why It Matters: This partnership highlights the insatiable demand for AI compute, pushing leading labs to secure massive, unconventional infrastructure deals just to keep pace. It signals a new era where access to immense computational power—even in space—is the primary competitive advantage in the AI race.
AI's New Plumbing

Next in AI: NVIDIA, with partners like OpenAI and Microsoft, has released a new open networking protocol called Multipath Reliable Connection (MRC). It's designed to fix the massive data traffic jams inside the AI factories that train today's largest models.
Explained:
Think of MRC as a smart traffic app for AI data centers: it dynamically spreads data across multiple paths to avoid congestion and reroutes around network failures in microseconds (a toy sketch of the idea follows these bullets).
This technology is already proven in production, with OpenAI successfully deploying it for its latest model training, and both Microsoft and Oracle using it in their largest AI-focused data centers.
In a rare industry collaboration, NVIDIA developed MRC with AMD, Intel, and Microsoft, releasing it as an open specification through the Open Compute Project to encourage wider adoption.
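To make the "smart traffic app" analogy concrete, here is a toy sketch of the general multipath idea: spread chunks of traffic across several paths, always preferring the least-congested healthy one, so a hot or failed link is routed around immediately. This illustrates only the general technique; the names and structure are ours and not NVIDIA's actual MRC specification.

```python
# Toy illustration of multipath load balancing, not the MRC wire protocol.
# Each chunk of data is sent on the least-congested healthy path, so one
# overloaded or broken link cannot stall the whole transfer.

class Path:
    def __init__(self, name):
        self.name = name
        self.healthy = True
        self.queued_bytes = 0  # stand-in for observed congestion on this path

def pick_path(paths):
    """Choose the healthy path with the least queued traffic."""
    candidates = [p for p in paths if p.healthy]
    if not candidates:
        raise RuntimeError("no healthy paths available")
    return min(candidates, key=lambda p: p.queued_bytes)

def send(chunks, paths):
    for chunk in chunks:
        path = pick_path(paths)          # a fresh decision per chunk means a
        path.queued_bytes += len(chunk)  # failed path is avoided immediately
        print(f"{len(chunk):>6} bytes -> {path.name}")

paths = [Path("spine-A"), Path("spine-B"), Path("spine-C")]
paths[1].healthy = False  # simulate a mid-transfer link failure
send([b"x" * n for n in (4096, 8192, 2048, 4096)], paths)
```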
Why It Matters: By treating the network as a core, intelligent part of the computer, MRC removes one of the biggest bottlenecks holding back AI development. This infrastructure upgrade paves the way for training the next generation of larger, more complex models faster and more reliably.
The Slop Economy

Next in AI: A thought-provoking analysis argues that while generative AI is boosting productivity, it's also creating a new form of professional “slop.” This refers to work that looks expertly crafted on the surface but lacks the judgment and rigor of a true expert.
Explained:
The core issue is “output-competence decoupling,” where the quality of AI-generated work no longer signals the creator’s actual skill, enabling novices to produce advanced-looking but potentially flawed results in areas outside their expertise.
This dynamic is supercharged by data showing AI disproportionately helps newcomers. One NBER study found that generative AI boosted novice productivity by about a third while offering minimal gains to experts.
Internally, this results in bloated documents and low-signal communication. The problem is worsened by models being roughly 50% more agreeable than human respondents, often affirming a user's flawed ideas and leading to overconfidence.
Why It Matters: In the race for AI-driven efficiency, companies risk hollowing out the very expertise and judgment that clients pay for. The long-term advantage will belong to firms that prioritize verifiable quality and keep a human firmly in control of final judgment.
Google's 4GB Download

Next in AI: Google Chrome is silently downloading a 4GB file to power its on-device Gemini Nano AI model. This background installation enables features like "help me write" and proactive scam detection, but raises questions about user consent.
Explained:
The discovery was made by a privacy researcher who found that the AI model's "weights.bin" file kept reinstalling itself even after being deleted.
Google's justification is that its on-device AI features run locally to protect user privacy, and an opt-out toggle has been available in settings since February.
Beyond the lack of an explicit consent prompt, critics point to the significant resource usage and potential environmental impact, with one estimate claiming the rollout could equal the annual emissions of 6,500 cars.
Why It Matters: This signals a major push to embed powerful AI capabilities directly into everyday applications, trading local storage for enhanced privacy and offline functionality. The rollout serves as a crucial test case for how tech giants will navigate user consent and transparency with large-scale, automated AI updates.
AI Pulse
AMD soared over 18% after its Q1 earnings beat expectations, with data center revenue jumping 57% year-over-year, fueled by massive demand for its AI processors.
Google updated its AI Overviews to include quotes and perspectives from public web forums like Reddit and social media, aiming to provide more firsthand context for search queries.
Pennsylvania sued Character.AI for the unauthorized practice of medicine after a chatbot on its platform allegedly posed as a licensed psychiatrist and offered medical advice.