PLUS: Gemini searches your personal files, Microsoft's humanist superintelligence team, and an entire country gets Claude
Good morning
A new investigation has pulled back the curtain on how Meta is funding its massive AI ambitions, revealing a troubling reliance on billions of dollars from fraudulent ads.
The report suggests the company knowingly allowed scam ads to flourish, using the revenue to keep pace in the competitive AI race. As the industry pushes toward ever more powerful models, a critical question emerges: what ethical compromises are being made behind the scenes to pay for innovation?
In today’s Next in AI:
Meta’s funding of AI with scam ad revenue
Gemini’s new personal file search feature
Iceland’s nationwide AI education pilot with Claude
Meta's AI Cash Cow

Next in AI: A bombshell Reuters investigation reveals that Meta internally projected billions in revenue from fraudulent ads and protected that cash flow to fund its massive AI ambitions.
Decoded:
Internal documents projected that scam ads could account for about $16 billion in 2024, roughly 10% of the company's total revenue.
Meta's own ad-personalization system compounds the problem by showing users who have previously clicked on scam ads even more of them.
Internal reviews concluded that it is easier to advertise scams on Meta's platforms than on Google's, as the company prioritized revenue for AI over aggressive enforcement.
Why It Matters: This report exposes a troubling ethical compromise at the heart of the AI race, questioning the true cost of funding next-generation models. It serves as a stark reminder of the immense financial pressures driving AI development and the potential for user safety to be sidelined in the pursuit of innovation.
Gemini Gets Personal

Next in AI: Google is rolling out "Deep Research," a powerful new capability that allows Gemini to search and synthesize information across your personal Gmail, Drive, Docs, and other Workspace apps.
Decoded:
You can access the feature by selecting 'Deep Research' from the Tools menu in Gemini on desktop, with a rollout to mobile users coming soon.
The move puts Google in a tight race with Microsoft, whose Copilot offers similar integrations, though the public rollout gives Google a first-to-market advantage.
This integration reflects a larger trend across the ecosystem, where assistants like ChatGPT are connecting to third-party apps to access specialized data.
Why It Matters: This transforms AI assistants from general search tools into deeply personalized productivity hubs. The ability to instantly connect dots across your entire digital life promises to streamline complex tasks and find buried information.
Iceland Goes All-In on AI Ed

Next in AI: In one of the world's first national AI pilots, Iceland is partnering with Anthropic to provide the AI model Claude to every teacher in the country, aiming to support lesson prep and enhance student learning.
Decoded:
The initiative provides teachers with AI tools, training, and support to save hours on lesson planning and create personalized materials for different learning needs.
This move builds on Anthropic's growing public sector footprint, including a partnership with the London School of Economics to give all students access to Claude for Education.
A key goal of the program is to safeguard the Icelandic language, ensuring the AI can support educators and students in their native tongue and foster an inclusive learning environment.
Why It Matters: This nationwide experiment offers a compelling model for how countries can responsibly integrate AI into core public services like education. The focus on augmenting teacher capabilities, rather than replacing them, sets a powerful precedent for human-centered AI adoption.
AI Pulse
Microsoft created a new MAI Superintelligence team, led by AI chief Mustafa Suleyman, to develop "humanist superintelligence" explicitly designed to serve humanity and solve major global challenges.
Oxford published a new study analyzing 445 AI benchmarks, concluding that many use flawed methods that exaggerate AI capabilities and lack scientific rigor.
Elon Musk claimed that Tesla's Optimus robot will eventually "eliminate poverty" and make all jobs optional, creating a future of "universal high income" and "sustainable abundance."
Anthropic co-developed a classifier with the U.S. National Nuclear Security Administration that can distinguish between benign and concerning nuclear-related conversations with 96% accuracy.
