PLUS: AI can now read your inner thoughts, the blue-collar AI boom, and Apple's empty cloud
Happy reading
Anthropic's Claude app has skyrocketed to the top of the App Store charts. The surge comes after the company rejected a major Pentagon contract on ethical grounds, a refusal that got it banned from federal contracts.
The public's reaction seems to have rewarded Anthropic's principled stand against using its AI for certain military applications. With users voting with their downloads, does this signal a new era where a company's ethics can become its most powerful competitive advantage?
In today’s Next in AI:
Anthropic's Claude hits #1 after Pentagon ban
AI learns to decode inner thoughts
The blue-collar boom in AI data centers
Apple's underpowered AI cloud turns to Google
The Claude Rebellion

Next in AI: Anthropic drew a line in the sand, rejecting a $200M Pentagon contract over ethical safeguards and earning a federal ban for its defiance. In a surprising turn, the public rewarded this principled stand, catapulting Anthropic's Claude app to the top of the App Store.
Explained:
Anthropic walked away from the deal after it refused to remove safeguards that prevent its AI from being used for mass domestic surveillance or fully autonomous weapons systems.
The U.S. government responded by banning Anthropic from federal contracts and labeling it a “supply chain risk”; OpenAI stepped into the vacated Pentagon deal just hours later.
Users voted with their downloads in a massive show of support, driving a “Cancel ChatGPT” movement and rocketing Claude to #1 on the App Store over the weekend, displacing its chief rival.
Why It Matters: This clash marks a defining moment, demonstrating that a company’s ethical principles are no longer just talk, but a core part of its product. The market's reaction proves that a strong ethical stance can be a powerful competitive advantage, directly influencing user loyalty and adoption.
AI's Inner Voice

Next in AI: New breakthroughs in brain-computer interfaces are enabling AI to decode a person's silent “inner speech” directly from neural signals. This opens a new frontier in assistive technology, translating thoughts into text for individuals with paralysis.
Explained:
Researchers at Stanford achieved up to 74% accuracy in real-time decoding of imagined sentences, demonstrating that AI can interpret the weaker neural signals of inner speech, not just attempted speech.
Beyond text, a UC Davis lab is teaching AI to reconstruct non-verbal cues like intonation and pitch from brain signals, aiming to give synthesized speech genuine human expression.
In Japan, another team is using non-invasive fMRI scans and AI image generators to create "mind captions," which produce detailed descriptions of what a person is seeing or imagining.
Why It Matters: This technology promises to restore communication for those unable to speak, offering a profound connection to the world. Looking forward, these advancements pave the way for a future where brain-computer interfaces could redefine our interactions with digital devices.
The AI Electrician Boom

Next in AI: The explosive growth of AI is creating massive demand for electricians to build power-hungry data centers, turning a blue-collar trade into a critical bottleneck for Big Tech's ambitions.
Explained:
The scale of the need is staggering: an estimated 300,000 new electricians will be required over the next decade, and electrical work now accounts for up to 70% of a data center's construction cost.
Tech giants are sounding the alarm, with Microsoft’s president identifying electrical talent shortages as the number one barrier to expansion and Google pledging $15 million toward new training programs.
In response, Gen Z is fueling a blue-collar boom, with applications for commercial electrical apprenticeships surging by 70% as they seek debt-free paths to six-figure salaries.
Why It Matters: The future of AI is not just about algorithms; its growth is directly constrained by physical infrastructure and the skilled tradespeople who build it. This dependency creates a powerful career pathway for a new generation, proving that even the most advanced tech relies on foundational human labor.
Apple's Empty AI Cloud

Next in AI: Apple's custom AI server system, Private Cloud Compute, is reportedly sitting mostly idle, prompting the company to enter talks with Google to power its next-generation Siri.
Explained:
Apple's privacy-focused AI infrastructure is described as underpowered and underutilized, with current usage at just 10% of capacity and hardware not powerful enough for the latest AI models.
The performance gap is forcing Apple to look outward, discussing a plan for Google to run the new, more demanding Siri features from inside its own data centers.
Despite these setbacks, Apple is also playing the long game, as it simultaneously expands US manufacturing to accelerate the production of more advanced AI servers for the future.
Why It Matters: This highlights the immense challenge of balancing on-device privacy with the massive computational power required for cutting-edge AI. The situation shows that even tech titans are finding it difficult to build custom AI infrastructure at scale, forcing them into strategic partnerships with key competitors.
AI Pulse
Anthropic experienced widespread outages for Claude on Monday, as the platform struggled with degraded performance and login errors after a massive influx of new users.
An ongoing Harvard study found that generative AI tools often increase workloads rather than reduce them, as the technology raises output expectations and expands the scope of an employee’s responsibilities.
Researchers revealed that popular AI-detection tools show extreme variability and high false-positive rates, with one study finding a polished human-written essay was more likely to be flagged as AI-generated than an actual AI essay.
Microsoft banned the term "Microslop" from its official Copilot Discord server after backlash over its aggressive AI push, only to temporarily lock the server down when users found workarounds for the filter.