PLUS: OpenAI's $100B funding round and Amazon overtaking Walmart

Happy reading

Google just released a major update to its flagship AI model, Gemini 3.1 Pro. The new version boasts a significant leap in its ability to handle complex, multi-step reasoning tasks.

Google claims the upgrade paves the way for more ambitious agentic workflows, pushing AI beyond simple Q&A. Does this mark the point where AI shifts from a basic assistant to a true collaborator on intricate projects?

In today’s Next in AI:

  • Google’s big Gemini reasoning upgrade

  • OpenAI’s $100B infrastructure push

  • Amazon overtakes Walmart in AI-fueled retail race

  • AI decodes particle physics at CERN

Google's Reasoning Upgrade

Next in AI: Google just unveiled Gemini 3.1 Pro, a major update to its flagship AI model. This new version significantly enhances core reasoning to better tackle complex, multi-step tasks.

Explained:

  • The new model shows a massive performance leap, more than doubling its predecessor's score on the ARC-AGI-2 reasoning benchmark.

  • Gemini 3.1 Pro is now available in preview for developers and enterprises through the Gemini API, Vertex AI, and directly in Google AI Studio.

  • It's designed to translate complex ideas into functional applications, paving the way for more ambitious agentic workflows that can build dashboards or create interactive designs from a simple prompt.

Why It Matters: This update represents a significant step beyond simple Q&A, pushing AI closer to becoming a true collaborator on intricate projects. It empowers developers and creators to build more dynamic and intelligent tools that require less manual intervention.

OpenAI's $100B War Chest

Next in AI: OpenAI is reportedly in talks to raise a record-breaking $100 billion, signaling a massive push to secure the vast infrastructure needed for future AI development.

Explained:

  • The deal would push OpenAI's valuation past $850 billion, placing it in territory only a few private companies have ever reached.

  • The funding is primarily for massive AI infrastructure—data centers, chips, and energy—as global data center electricity consumption is expected to more than double by 2030.

  • A group of key tech giants including Microsoft, Amazon, Nvidia, and SoftBank are expected to participate, highlighting a broad industry bet on OpenAI's future.

Why It Matters: This move shows how markets now treat AI not as a product, but as foundational technology that requires an industrial-scale buildout. The AI race is shifting from who can build the best models to who can secure the physical resources to power them.

The New Retail King

Next in AI: For the first time, Amazon has overtaken Walmart in annual revenue, a historic shift driven by two clashing AI strategies that are reshaping the future of retail. This milestone was confirmed in Amazon's latest earnings report.

Explained:

  • Walmart is taking a partnership-first approach, integrating tools from OpenAI and Google while its own AI assistant, Sparky, helps lift average order values by 35%.

  • In contrast, Amazon is building its own moat by investing up to $200 billion in AI and developing its in-house shopping assistant, Rufus, to guide customers in a way that CEO Andy Jassy compares to an in-store employee.

  • The revenue crown isn't just about online sales; Amazon's lead is powered by its diverse businesses like AWS and advertising, a model Walmart is actively trying to replicate to grow its own higher-margin streams.

Why It Matters: This is more than a retail rivalry; it's a real-time experiment testing two different AI philosophies—partnership versus proprietary development. The outcome will likely set the standard for how AI shapes consumer experiences across all industries.

AI Tackles Particle Physics

Next in AI: Researchers at CERN's Large Hadron Collider are now using a machine learning model to fully reconstruct particle collisions, a breakthrough that speeds up and sharpens our view into fundamental physics. This new approach replaces a decades-old system built on hand-coded rules.

Explained:

  • Instead of relying on rigid, hand-crafted logic, the new algorithm learns to identify particles directly from simulated data, much as humans learn to recognize faces.

  • The model improves the precision of reconstructing particle jets by 10%–20%, a significant leap detailed in the project's research paper.

  • It runs efficiently on modern GPUs, processing the massive stream of collision data far faster than the traditional CPU-bound algorithms.
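
The core idea in the first bullet, learning particle identification from simulated labeled data instead of hand-coded rules, can be illustrated with a toy sketch. Everything below is a hypothetical stand-in: the two synthetic "particle types," their feature distributions, and the tiny logistic-regression classifier bear no relation to CERN's actual model or data.

```python
import numpy as np

# Toy stand-in for the learn-from-simulation approach: instead of
# hand-coded identification rules, fit a classifier to *simulated*
# labeled collisions. Features and labels here are purely synthetic.
rng = np.random.default_rng(0)

# Simulate 2,000 events of two "particle types" with different
# (entirely made-up) two-dimensional feature profiles.
n = 2000
labels = rng.integers(0, 2, n)
centers = np.where(labels[:, None] == 0, [1.0, -1.0], [-1.0, 1.0])
features = centers + rng.normal(scale=0.8, size=(n, 2))

# Minimal logistic regression trained by gradient descent.
w = np.zeros(2)
b = 0.0
for _ in range(200):
    p = 1.0 / (1.0 + np.exp(-(features @ w + b)))  # predicted probabilities
    w -= 0.5 * (features.T @ (p - labels)) / n     # gradient step on weights
    b -= 0.5 * np.mean(p - labels)                 # gradient step on bias

preds = (features @ w + b > 0).astype(int)
accuracy = np.mean(preds == labels)
print(f"accuracy on simulated data: {accuracy:.2f}")
```

The same training loop never encodes what distinguishes the two classes; that boundary is recovered from the labeled simulation, which is the shift the CERN work describes, scaled up to real detector data and far larger models.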

Why It Matters: This application demonstrates AI's power to decode some of the most complex datasets on the planet, accelerating scientific discovery. It also provides a blueprint for using machine learning to find signals in massive amounts of noise, a challenge many industries face.

AI Pulse

OpenAI partnered with Paradigm to release EVMbench, a new open-source benchmark for testing how well AI agents can detect, patch, and exploit smart contract vulnerabilities.

Scientists used machine learning to solve a long-standing problem in quantum chemistry, enabling precise and stable calculations of molecular energies and electron densities for very large molecules.

A study found that while over 92% of developers now use an AI coding assistant, productivity gains have plateaued at around 10%, with AI-authored code now making up nearly 27% of all production code.

The IRS lost 40% of its IT staff and 80% of its tech leadership in 2025 during a major reorganization, a shakeup that happened as the agency plans for AI to play a significant future role.

Keep Reading