PLUS: Hacking an unreleased GPT-5 model, AI's copyright problem, and China's ban on Nvidia chips
Good morning.
A significant challenge for AI models is 'catastrophic forgetting,' where new information overwrites old knowledge. A new Google model named HOPE seems to have cracked this problem, enabling AI to learn continuously. This architecture moves AI from being a static, pre-trained tool to a dynamic system that can evolve. Does this breakthrough represent the foundational step toward personal AI assistants that can actually remember and grow with us?
In today’s Next in AI:
Google's new AI that learns continuously
How a developer accessed an unreleased GPT-5 model
AI's reliance on copyrighted news articles
China's ban on foreign AI chips
Google's 'Memory' Upgrade

Next in AI: Google has developed a new AI model named HOPE that learns continuously without wiping previous knowledge. This breakthrough tackles one of the biggest hurdles in AI development, known as catastrophic forgetting.
Decoded:
The new model is built on a concept called Nested Learning, which treats AI not as a single process but as a system of interconnected learning components working together.
It mimics the human brain by using multiple memory layers that operate at different speeds, allowing it to manage short-term and long-term knowledge simultaneously (see the toy sketch after this list).
Early tests show HOPE achieves higher accuracy on reasoning tasks and can handle contexts of up to millions of tokens, all while using fewer computational resources than comparable models.
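To make the idea of memory layers running at different speeds concrete, here is a deliberately simplified Python sketch. It is not Google's HOPE code; the class name, update rates, and blending rule are assumptions chosen purely for illustration. A fast memory absorbs every new observation, while a slow memory consolidates only occasionally, so fresh information does not instantly overwrite older knowledge.

```python
# Toy illustration of "memory layers that update at different speeds".
# NOT Google's HOPE implementation: the class name, update rates, and
# blending rule are assumptions made purely for illustration.
import numpy as np


class MultiSpeedMemory:
    def __init__(self, dim, slow_every=10, fast_lr=0.5, slow_lr=0.05):
        self.fast = np.zeros(dim)      # short-term memory: updated every step
        self.slow = np.zeros(dim)      # long-term memory: updated less often
        self.slow_every = slow_every   # steps between slow consolidations
        self.fast_lr = fast_lr
        self.slow_lr = slow_lr
        self.step = 0

    def update(self, observation):
        """Write a new observation into memory at two different timescales."""
        self.step += 1
        # The fast layer chases the newest information aggressively...
        self.fast += self.fast_lr * (observation - self.fast)
        # ...while the slow layer consolidates only occasionally and gently,
        # so older knowledge is not immediately overwritten.
        if self.step % self.slow_every == 0:
            self.slow += self.slow_lr * (self.fast - self.slow)

    def read(self):
        """Blend both timescales into a single memory state."""
        return 0.5 * (self.fast + self.slow)


if __name__ == "__main__":
    memory = MultiSpeedMemory(dim=4)
    rng = np.random.default_rng(0)
    for _ in range(100):
        memory.update(rng.normal(size=4))
    print("fast:", memory.fast.round(2))
    print("slow:", memory.slow.round(2))
    print("read:", memory.read().round(2))
```

The slow pathway is what stands in for long-term knowledge here, while the fast pathway adapts to whatever is arriving right now; a continual-learning system needs both so that adapting to the present does not erase the past.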
Why It Matters: This new architecture moves AI from static, pre-trained tools to dynamic systems that can evolve with new information. It's a foundational step toward creating more adaptable personal assistants, smarter recommendation engines, and ultimately, more general-purpose AI.
Hacking an unreleased GPT-5 model

Next in AI: A developer found a clever way to access OpenAI’s new, partially released ‘GPT-5-Codex-Mini’ model before its public API launch. Instead of cracking a private API, he simply modified the company's own open-source tool to send prompts directly to the unreleased model.
Decoded:
Instead of a direct attack, the developer forked the official open-source Codex CLI tool and, in a meta twist, used the Codex AI agent to write the Rust code needed to add a new direct-prompting feature.
The experiment uncovered OpenAI's private API endpoint for the service and revealed that requests can include a message with role="developer", allowing for more specialized instructions ahead of the user's prompt (see the sketch below).
When tested, the new ‘mini’ model’s performance on SVG image generation was poor, producing what the developer called "terrible" results compared to its larger siblings, which you can see in a side-by-side comparison.
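For readers who have not worked with role-based prompting, the sketch below shows roughly what such a request body can look like. The model name, field layout, and prompts are placeholders for illustration only; they are not the private endpoint or the exact payload format the developer uncovered.

```python
# Illustrative only: the model name, field layout, and prompts below are
# placeholders, not OpenAI's private endpoint or its exact request format.
import json

payload = {
    "model": "gpt-5-codex-mini",  # placeholder name for the unreleased model
    "input": [
        {
            # A developer-role message carries specialized instructions that
            # take effect ahead of whatever the end user asks for.
            "role": "developer",
            "content": "Respond only with a single, complete SVG document.",
        },
        {
            "role": "user",
            "content": "Generate a simple SVG illustration.",
        },
    ],
}

print(json.dumps(payload, indent=2))
```

The only point of the sketch is the message ordering: the developer-role instruction is applied before the user's request, which is what makes it useful for steering an unreleased model once a direct prompting path exists.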
Why It Matters: This experiment shows how open-source releases can provide the community with an early look into a company’s product pipeline. It also serves as a great reminder that smaller, more efficient models often come with significant trade-offs in capability.
AI's Pirate Library

Next in AI: An investigation reveals that the massive Common Crawl dataset, a foundational resource for training AI models from Google and OpenAI, contains millions of copyrighted articles from paywalled news sites. This has allowed AI companies to build their models on high-quality journalism without paying for it.
Decoded:
Common Crawl’s scraper gets around subscription barriers because many paywalls are applied in the reader's browser: the crawler captures a webpage's full text in the split second before the paywall script executes, so the complete article ends up in the archive.
The dataset’s influence is immense; OpenAI famously used Common Crawl’s archives to train GPT-3, the model that powered the initial launch of ChatGPT and kicked off the current generative AI boom.
Publishers are now fighting back, making Common Crawl’s bot the most widely blocked web scraper, but getting previously scraped content removed from the archives is proving to be a slow and incomplete process.
Why It Matters: This practice highlights the foundational legal and ethical questions at the core of the generative AI industry. The outcome of this conflict will set a major precedent for how AI developers can use the world's online information.
China's Chip Wall

Next in AI: Beijing is accelerating its push for technological independence with a new directive mandating that state-funded data centers use only Chinese-made AI chips. The move effectively bans processors from foreign giants like Nvidia, AMD, and Intel.
Decoded:
The financial impact is immediate for Nvidia, AMD, and Intel, which are now cut off from a market backed by over $100 billion in state funding. Nvidia’s market share in China has already collapsed from 95% in 2022 to effectively zero.
This policy creates a protected market for domestic producers like Huawei, Cambricon, and other emerging firms. While their chips are catching up, they have struggled to gain traction against Nvidia’s established software ecosystem.
The directive is a direct countermeasure to Washington's export restrictions and a significant step toward China's goal of technological self-sufficiency. It aims to eliminate Western technology from the country's essential AI infrastructure.
Why It Matters: This move builds a hardware 'great firewall,' securing a massive market for China's homegrown chip industry. However, by cutting itself off from top-tier foreign chips, Beijing risks widening the AI performance gap with the U.S.
AI Pulse
XPeng showcased its new "Iron" humanoid robot, which features a human-like spine and is powered by an all-solid-state battery, with the CEO unzipping a bodysuit during a live event to prove it wasn't a person in a suit.
A Maryland appellate judge blasted a lawyer in a custody case for submitting a legal brief written with ChatGPT that included multiple hallucinated, non-existent case citations, and referred the attorney for disciplinary action.
A viral TikTok account fooled viewers with wholesome videos from the fictional "Basin Creek Retirement" home, which many believed was real until discovering the entire account and its content are AI-generated.
A UK government study found that neurodiverse workers with conditions like ADHD and autism were 25% more satisfied with AI assistants than their neurotypical colleagues, using the tools for note-taking, time management, and focus.
