PLUS: The 'AI scare trade' hits stocks, Hollywood's funding frenzy, and Disney's legal war over Seedance

Happy reading

A newly documented cyberattack shows that malware is no longer just after your passwords. Researchers have observed the first real-world case of an infostealer capturing an AI agent's private keys, tokens, and even its personalized 'soul' file.

The attack was carried out by a general-purpose tool, which means millions of devices could already be vulnerable. As we integrate these AI agents deeper into our lives, how do we protect not just our data, but our entire digital identities?

In today’s Next in AI:

  • Malware steals AI agent's 'soul' and keys

  • The 'AI scare trade' rattles the stock market

  • Hollywood's AI funding gold rush

  • Disney's copyright battle with ByteDance's Seedance

Stealing an AI's Soul

Next in AI: Researchers have documented the first real-world case of infostealer malware stealing the entire working configuration of a personal AI agent. The attack goes beyond passwords, capturing the agent's keys, tokens, and even its personalized "soul" file.

Explained:

  • Security firm Hudson Rock discovered that the malware exfiltrated key files from an OpenClaw AI user, including configuration files with API tokens, the user's private keys, and the soul.md file that defines the AI's personality and permissions.

  • The attack was not carried out by a specialized tool. Instead, a general-purpose infostealer with a broad file-grabbing routine stumbled upon the sensitive files, signaling that the millions of devices already infected with similar malware are exposed to this new threat.

  • With these files, an attacker can sign messages as the user's device, bypass security checks, and access a blueprint of the user's life from memory logs, enabling a total compromise of their digital identity.

Why It Matters: This incident marks a significant shift in cybersecurity threats, moving beyond stealing credentials to harvesting a user's entire digital context and identity. It serves as an early and critical warning to secure the AI agents that are quickly becoming integrated into our daily workflows.
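Because the infostealer reportedly swept up plaintext files with a generic file-grabbing routine, one basic mitigation is keeping agent secrets readable only by their owner. A minimal sketch in Python (the directory layout and file names here are hypothetical, loosely modeled on the files named in the report, not an actual OpenClaw structure):

```python
import os
import stat
import tempfile

# Hypothetical sensitive files a personal AI agent might keep on disk.
# Names are illustrative, based loosely on the reported incident.
SENSITIVE = ["config.json", "private.key", "soul.md"]

def harden_agent_files(config_dir: str) -> list[str]:
    """Restrict sensitive agent files to owner read/write only (0o600).

    Returns the list of files that were actually tightened.
    """
    hardened = []
    for name in SENSITIVE:
        path = os.path.join(config_dir, name)
        if os.path.exists(path):
            # S_IRUSR | S_IWUSR == 0o600: owner may read/write, no one else.
            os.chmod(path, stat.S_IRUSR | stat.S_IWUSR)
            hardened.append(name)
    return hardened

# Demo on a throwaway directory standing in for an agent's config folder.
with tempfile.TemporaryDirectory() as agent_dir:
    for name in SENSITIVE:
        open(os.path.join(agent_dir, name), "w").close()
    print(harden_agent_files(agent_dir))
```

Tight permissions do not stop malware running as the same user, but they do block casual reads by other accounts and make accidental exposure less likely; stronger protection means keeping keys in an OS keychain or hardware token rather than flat files.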

The AI Scare Trade

Next in AI: A new "AI scare trade" is shaking up the stock market, triggering a trillion-dollar wipeout as investors flee companies threatened by AI and punish tech giants for their massive, yet-to-be-profitable AI investments.

Explained:

  • New AI tools are directly targeting established industries, triggering sell-offs in logistics, insurance, and real estate. For example, wealth management stocks like Charles Schwab tumbled after startup Altruist announced a new AI tax planner.

  • On the other side, tech hyperscalers like Amazon and Microsoft face skepticism for spending over $600 billion on AI infrastructure in 2026. This spending is consuming nearly 100% of their operating cash flow, a stark increase from the historical 40% average.

  • The dueling anxieties reflect a market in a "shoot first, ask questions later" mode. While some analysts see the sell-offs as an overreaction, the volatility highlights growing impatience for a clear return on the massive AI investments across the board.

Why It Matters: This market tension reveals the gap between AI's disruptive potential and the reality of its current profitability. It signals a shift from pure hype to a more critical evaluation of how and when AI will deliver tangible financial results.

Hollywood's AI Gold Rush

Next in AI: "Pulp Fiction" co-writer Roger Avary says the secret to getting films funded in 2026 is simple: just add AI. He secured financing for three new movies immediately after launching an AI-focused production company, a feat he described as previously "impossible."

Explained:

  • Avary found that while his acclaimed screenwriting credentials failed to open doors, simply attaching the word "AI" to his projects made investors eager to fund his new company, General Cinema Dynamics.

  • The immediate backing jumpstarted production on a diverse slate of three features: a family Christmas movie, a faith-based film, and a large-scale romantic war epic.

  • This investor enthusiasm sharply contrasts with growing anxiety in Hollywood, where AI tools like Seedance 2.0 are fueling fears over job displacement and creating significant copyright concerns.

Why It Matters: This trend highlights a major shift where the perception of leveraging cutting-edge technology can be more valuable than a proven creative track record for securing funding. It signals that venture capital's appetite for AI is now reshaping creative industries, potentially prioritizing tech buzz over traditional artistic pipelines.

Disney's AI Copyright War

Next in AI: Disney has threatened legal action against ByteDance, accusing its new AI video generator, Seedance, of training on a "pirated library" of copyrighted Marvel and Star Wars characters. In response, ByteDance has pledged to strengthen safeguards on the platform.

Explained:

  • Disney's legal team sent a cease-and-desist letter accusing ByteDance of committing a "virtual smash-and-grab" of its intellectual property.

  • The pushback extends beyond Disney, with other major studios like Paramount Skydance issuing similar demands and the Motion Picture Association calling for Seedance to halt its infringing activity.

  • This legal fight highlights Disney's dual strategy of aggressively protecting its IP while embracing AI on its own terms, exemplified by its recent $1 billion deal with OpenAI to license over 200 characters.

Why It Matters: This high-profile battle sets a critical precedent for how intellectual property rights will be enforced against generative AI models. The outcome will likely force AI developers to prioritize licensed data, fundamentally shaping the business relationship between tech and creative industries.

AI Pulse

Anthropic opened its new Bengaluru office as part of a major expansion into India, its second-largest market, and announced new partnerships in law and education.

The UK extended its Online Safety Act to cover AI chatbots, requiring providers like ChatGPT and Grok to protect users from illegal content or face significant fines.

Anthropic sparked developer backlash after updating its Claude Code tool to hide the names of files it accesses, with users arguing the change reduces transparency and makes it harder to catch errors.

Izwi launched as a new open-source, privacy-focused voice AI engine that runs entirely locally on-device and offers an OpenAI-compatible API for easy integration.

Keep Reading