PLUS: An amateur solves a 60-year-old math problem and the US's AI countdown begins

Happy reading

A massive price-performance gap is splitting the AI model market in two. OpenAI hiked the price of its latest flagship model, just as DeepSeek launched a powerful open-source alternative at a fraction of the cost.

This forces developers into a critical choice between premium, closed systems and cheaper open infrastructure. Will the vast price difference accelerate a market-wide shift, or is the convenience of an all-in-one product worth the cost?

In today's Next in AI:

  • DeepSeek V4 splits the AI model market

  • Amateur solves 60-year-old math problem with AI

  • The US's 18-month AI countdown

  • Final Fantasy's controversial use of AI art

The Great AI Divide

Next in AI: The AI model market is splitting in two after OpenAI doubled the price of its flagship model just 24 hours before DeepSeek released its new open-source V4 model at a fraction of the cost, creating a vast price-performance gap for developers.

Explained:

  • OpenAI's new GPT-5.5 is priced at $30 per million output tokens, while DeepSeek V4-Pro costs just $3.48 for the same output, making the premium closed model roughly nine times the cost of its new open-source competitor.

  • DeepSeek achieves its low price through an efficient Mixture-of-Experts (MoE) architecture and by releasing the model with a permissive MIT license, enabling developers to self-host and benefit from performance optimizations on NVIDIA's Blackwell GPUs.

  • The new model also signals a hardware shift, as it was designed for both NVIDIA's latest chips and Huawei's Ascend supernodes, marking one of the first times a frontier-level model has been optimized for non-NVIDIA infrastructure at launch.
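The Mixture-of-Experts idea behind that efficiency can be sketched in a few lines. This is a toy illustration only, not DeepSeek's actual architecture: the expert count, hidden size, and top-k value are made up and far smaller than any frontier model's. The point is that a router activates only a few experts per token, so compute per token stays small even as total parameters grow.

```python
import numpy as np

rng = np.random.default_rng(0)

NUM_EXPERTS = 8   # total experts (all parameters live here)
TOP_K = 2         # experts actually executed per token
D_MODEL = 16      # hidden size (toy-sized)

router_w = rng.standard_normal((D_MODEL, NUM_EXPERTS))
experts = [rng.standard_normal((D_MODEL, D_MODEL)) for _ in range(NUM_EXPERTS)]

def moe_forward(x):
    """Route one token (a vector of size D_MODEL) to its top-k experts."""
    logits = x @ router_w                      # score every expert
    top = np.argsort(logits)[-TOP_K:]          # keep only the k best
    weights = np.exp(logits[top])
    weights /= weights.sum()                   # softmax over the chosen experts
    # Only TOP_K of NUM_EXPERTS weight matrices are touched for this token.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

token = rng.standard_normal(D_MODEL)
out = moe_forward(token)
print(out.shape)
```

Here 2 of 8 experts run per token; scale the same trick to hundreds of experts and the gap between total parameters and active compute is what makes a large MoE model cheap to serve.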

Why It Matters: This forces developers to choose between paying for a fully integrated premium product or building routing logic to leverage cheaper, open infrastructure. The widening gap accelerates the commoditization of text-based AI while creating a distinct, high-margin market for all-in-one, multi-modal systems.
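That routing logic can be as simple as a cost table plus a difficulty heuristic. The sketch below is hypothetical: the backend names, the crude keyword heuristic, and the assumption that only the closed stack handles multimodal input are placeholders, not real endpoints; only the two per-token prices come from the story above.

```python
# Hypothetical cost-aware router between a cheap open model and a premium API.
PRICE_PER_M_OUTPUT_TOKENS = {
    "open-self-hosted": 3.48,   # e.g. a self-hosted open-weights model
    "premium-closed": 30.00,    # e.g. a flagship closed API
}

def route(prompt: str, needs_multimodal: bool = False) -> str:
    """Pick a backend from crude signals; real routers use trained classifiers."""
    if needs_multimodal:
        return "premium-closed"   # assume only the closed stack does images/audio
    hard_markers = ("prove", "legal", "diagnose")
    if any(m in prompt.lower() for m in hard_markers):
        return "premium-closed"
    return "open-self-hosted"

def est_cost(backend: str, output_tokens: int) -> float:
    """Dollar cost for a given number of output tokens."""
    return PRICE_PER_M_OUTPUT_TOKENS[backend] * output_tokens / 1_000_000

backend = route("Summarize this meeting transcript.")
print(backend, round(est_cost(backend, 500_000), 2))  # open-self-hosted 1.74
```

Even this naive version shows the economics: half a million output tokens cost $1.74 on the open backend versus $15.00 on the premium one, which is why routing layers are worth building at all.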

AI's Mathematical Leap

Next in AI: An amateur armed with a ChatGPT Pro subscription just solved a 60-year-old mathematics problem that stumped experts. The AI's novel method has uncovered a new way to think about number theory.

Explained:

  • The puzzle, which eluded prominent mathematicians for decades, involves finding the lowest possible score for "primitive sets" of large numbers—a conjecture left behind by the prolific mathematician Paul Erdős.

  • The AI bypassed the standard human approach by applying a known formula from a related field, a connection experts like Jared Lichtman—who proved a similar conjecture in 2022—had overlooked.

  • This discovery highlights a powerful new dynamic of human-AI collaboration, as experts noted the raw AI output was imperfect and required their intervention to refine the core insight and verify the final proof.

Why It Matters: The AI's unique method provides mathematicians with a new framework for analyzing the "anatomy" of large numbers, with potential applications beyond this single problem. This achievement demonstrates how powerful AI models are becoming accessible tools that can help anyone, even non-experts, make significant contributions to complex scientific fields.

The 18-Month AI Countdown

Next in AI: U.S. Treasury Secretary Scott Bessent issued a stark warning that the nation's lead over China in AI is just three to six months. He predicts the technology will begin defining daily life within the next 12 to 18 months.

Explained:

  • Bessent's timeline isn't a distant forecast but an urgent countdown, framing AI's societal integration as an immediate event happening within a year to 18 months.

  • The U.S. holds an uncomfortably thin three-to-six-month lead over China, with a strategy focused on increasing its share of global AI computing power from 50% to over 70% in the coming years.

  • He highlighted Anthropic's Mythos model as a key U.S. advantage, praising its ability to identify software and system vulnerabilities, which is critical for national cybersecurity.

Why It Matters: This is a direct signal from top U.S. economic leadership that the AI race has immediate, high-stakes consequences for national competitiveness. For professionals, this accelerates the need to adopt AI tools and understand their strategic impact, as the window for adaptation is rapidly closing.

AI Art's Awkward Debut

Next in AI: A localizer for Square Enix's Final Fantasy XIV sparked controversy after using AI-generated art during a panel at a major fan festival, drawing a muted crowd reaction and criticism from the community.

Explained:

  • The presenter, Michael-Christopher Koji Fox, used AI to create a music video and an image of the game's artists, citing time constraints and an inability to get permission for official photos.

  • The move follows public statements from Square Enix's president, who said the company planned to be “aggressive” in applying AI to its development and publishing functions.

  • This corporate push clashed with fan sentiment, as the live audience went quiet during the AI segments and many users online voiced their disappointment with the decision.

Why It Matters: This incident highlights the growing tension between corporate mandates to adopt AI for efficiency and the values of creative communities who are often skeptical of the technology. It serves as a real-world test case for how major brands navigate the public rollout of AI-driven content.

AI Pulse

Elon Musk predicted that saving for retirement will become irrelevant in an "age of abundance" driven by AI and robotics, leading to a "universal 'you can have whatever you want' income."

NVIDIA demonstrated Day-0 support for DeepSeek V4 on its Blackwell GPUs, showcasing preliminary performance of nearly 3,500 tokens per second for the 1.6T parameter model on a single accelerator.

NVIDIA revealed that its new NVFP4 quantization format is a key technology used to accelerate DeepSeek V4 on Blackwell, reducing memory traffic and latency for large model inference.
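The core trick behind block-scaled low-bit formats like NVFP4 can be sketched with plain NumPy. This is a simplified illustration, not the NVFP4 specification: it uses a symmetric 4-bit integer grid with one scale per 16-element block, whereas the real format stores FP4 (E2M1) values with its own per-block scale encoding. The payoff in both cases is the same: weights shrink to a few bits each, cutting memory traffic during inference.

```python
import numpy as np

BLOCK = 16   # elements sharing one scale factor
LEVELS = 7   # symmetric 4-bit integer range: -7 .. 7

def quantize_blocks(w):
    """Quantize a flat float array to 4-bit integers with per-block scales."""
    w = w.reshape(-1, BLOCK)
    scales = np.abs(w).max(axis=1, keepdims=True) / LEVELS
    scales[scales == 0] = 1.0                   # avoid division by zero
    q = np.clip(np.round(w / scales), -LEVELS, LEVELS).astype(np.int8)
    return q, scales

def dequantize_blocks(q, scales):
    """Reconstruct approximate floats from quantized values and scales."""
    return (q.astype(np.float32) * scales).reshape(-1)

rng = np.random.default_rng(0)
w = rng.standard_normal(1024).astype(np.float32)
q, s = quantize_blocks(w)
w_hat = dequantize_blocks(q, s)
err = np.abs(w - w_hat).max()
print(q.dtype, err < 0.5)  # int8 storage here; real kernels pack two values per byte
```

Per-block scaling is what keeps the error small: outliers in one block inflate only that block's scale, instead of wrecking the precision of the entire tensor.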

Keep Reading