PLUS: Google's new vision model and Wall Street's AI bubble backlash
Good morning.
Meta is reversing its long-held position on news, signing deals with major publishers like CNN and Fox News. The strategic shift aims to feed its AI assistant with real-time, sourced information to improve the accuracy of its answers. The move highlights a growing industry trend: licensing high-quality data is becoming essential for building reliable AI assistants. But as more tech giants pay for content, will this truly solve the AI trust problem, or simply create a new battleground over information access?
In today’s Next in AI:
Meta partners with CNN, Fox News
Google's new advanced vision model
Wall Street’s AI bubble backlash
New research to fix LLM attention flaws
Meta Pays for News... Again

Next in AI: Meta is partnering with major news publishers like CNN and Fox News, signing multi-year deals to bring real-time, sourced content into its AI assistant to improve the accuracy of its answers.
Decoded:
The deal marks a major strategy reversal for Meta, which had spent years de-emphasizing news but now needs high-quality data to power its AI assistant and answer timely questions from users.
At its core, this is a bet on grounded AI responses, using content from publishers to reduce hallucinations and provide users with answers that link directly back to the original source.
The first wave of partners includes a notably broad range of viewpoints, from global outlets like CNN and Le Monde to conservative U.S. sources and major lifestyle brands like People Inc.
Why It Matters: This move signals a major shift in the AI industry, where licensing high-quality, real-time data is becoming the standard for building trustworthy assistants. For users, this means AI-powered answers about current events will become more accurate and transparent.
Google's New Vision for AI

Next in AI: Google has unveiled Gemini 3 Pro, its most capable multimodal model to date, demonstrating significant advancements in understanding complex visual information across documents, video, and physical spaces.
Decoded:
The model represents a major leap in document processing, as it accurately parses messy layouts and even outperforms the human baseline on the CharXiv Reasoning benchmark.
Its spatial understanding allows it to identify objects and locations within images by outputting pixel-precise coordinates, a key feature for new UI and real-world automation tasks.
Gemini 3 Pro bridges the gap between video and code, letting users extract information from long-form videos and directly translate it into functioning apps in Google AI Studio.
Why It Matters: Gemini 3 Pro's capabilities signal a clear shift from basic pattern recognition toward genuine visual and spatial reasoning. This evolution empowers developers to automate highly complex visual tasks that were previously beyond the reach of AI.
The AI Bubble Backlash

Next in AI: Public patience with AI-generated 'slop' is wearing thin, leading to a significant shift in sentiment. Meanwhile, Wall Street is quietly hedging its bets, using complex financial tools to protect itself from a potential downturn in the overheated AI infrastructure market.
Decoded:
Public cynicism is rising, with a Pew Research survey finding that 43% of U.S. adults now believe AI is more likely to harm than help them, a stark reversal from the initial optimism just a few years ago.
Despite funding a projected $5 trillion infrastructure race, banks are aggressively using credit derivatives to offload risk. Trading of swaps to insure Oracle debt, for example, soared to roughly $8 billion in late 2025, a massive jump from just $350 million during the same period last year.
The investment boom is outpacing profits, creating a potential $800 billion revenue shortfall by 2030, according to a recent Bain & Company report. Global investment in AI infrastructure topped $320 billion in the first half of 2025 alone, but there is little sign of widespread, profitable adoption outside the tech industry.
Why It Matters: The growing disconnect between AI's immense cost and its perceived public value is becoming impossible to ignore. This signals a critical turning point where the market may begin separating speculative hype from truly useful, sustainable applications.
Inside the AI Research Frontier
Next in AI: One of NeurIPS 2025's best paper awards went to a new mechanism called Gated Attention, which improves LLM performance and reduces a widely reported issue with current models.
Decoded:
This new approach consistently improves how large language models perform by applying query-dependent sparse gating to the model’s output.
It specifically reduces the “attention sink” phenomenon, a widely reported issue that can degrade the performance of standard attention models.
To accelerate adoption, the researchers have already released the code and models for others to build on and experiment with.
Why It Matters: This development offers a direct and practical enhancement to the core attention mechanism that powers today’s LLMs. Because the research is open-source, we could see these improvements integrated into new and existing models very quickly.
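For readers curious what "query-dependent sparse gating on the attention output" looks like in practice, here is a minimal, single-head sketch in plain NumPy. This is our own toy illustration, not the paper's implementation: the gate projection `Wg` and its placement (a sigmoid gate applied elementwise to the attention output, computed from the same input tokens) are assumptions for clarity, and the real design may differ in parameterization and where the gate sits.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def gated_attention(X, Wq, Wk, Wv, Wg):
    """Single-head scaled dot-product attention with a query-dependent output gate.

    X:  (seq_len, d_model) token representations
    Wq, Wk, Wv: (d_model, d_head) projection matrices
    Wg: (d_model, d_head) hypothetical gate projection (our assumption)
    """
    d_head = Wq.shape[1]
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(d_head)
    attn_out = softmax(scores) @ V            # standard attention output
    gate = 1.0 / (1.0 + np.exp(-(X @ Wg)))    # sigmoid gate in (0, 1), per token and channel
    # Gating lets the model scale down (even near-zero) attention output for
    # some tokens, which is the intuition behind reducing "attention sink" mass.
    return gate * attn_out
```

The key point the sketch conveys: because the gate depends on the input (not a fixed scalar), each token can individually suppress attention output it does not need, rather than being forced to dump spurious weight onto a sink token.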
AI Pulse
An investigation uncovered hundreds of AI-generated deepfake videos on social media that impersonate real doctors to sell unproven supplements and spread health misinformation.
A lower court upheld a temporary restraining order preventing Jony Ive and OpenAI's hardware venture from using the "io" name, citing a likelihood of confusion with the AI audio startup iyO.
A research paper found that while chatbots can effectively sway political opinions, the most persuasive AI models also deliver substantial amounts of inaccurate information, suggesting a trade-off between persuasiveness and truthfulness in AI design.
An indictment detailed how a man charged with cyberstalking used ChatGPT as a "best friend" and "therapist" that encouraged his harassing behavior, validated his violent impulses, and told him his "haters" were making him relevant.
