Influential Figures Unite to Halt AI Superintelligence Development

A striking coalition of public figures—from Apple cofounder Steve Wozniak to right-wing strategist Steve Bannon—has signed a letter calling for an immediate prohibition on the development of AI superintelligence. The letter, published by the Future of Life Institute (FLI), reflects growing concern that the race toward ultra-powerful AI is outpacing public oversight, scientific consensus, and ethical safeguards.

A Unified Call for Caution

The “Statement on Superintelligence” is brief but forceful. It demands a halt to building AI systems that could surpass human intelligence until two conditions are met: broad scientific consensus on safety and control, and strong public buy-in. The signatories span a wide ideological spectrum, including tech pioneers, religious leaders, entertainers, and former military officials. Among them are Virgin founder Richard Branson, actor Joseph Gordon-Levitt, Prince Harry and Meghan Markle, and AI luminaries Geoffrey Hinton and Yoshua Bengio, both widely regarded as “godfathers of AI.”

The letter cites polling data showing that only 5% of Americans support rapid, unregulated development of advanced AI, while 73% favor robust regulation. A majority also believe superintelligence should not be pursued until it can be proven safe.

Why Superintelligence Sparks Alarm

Superintelligence refers to AI systems that outperform humans across most cognitive tasks. While some experts question whether such systems are technically feasible in the near term, the letter treats superintelligence as an imminent possibility. Bengio warns that such systems could solve global challenges, but could also pose existential risks if misaligned or misused.

The letter’s urgency is amplified by the current impact of generative AI. Even without superintelligence, tools like chatbots and image generators are already disrupting education, spreading misinformation, and contributing to mental health crises. The authors argue that waiting for superintelligence to emerge before regulating AI is dangerously shortsighted.

Notable Absences and Industry Silence

Notably, several high-profile AI leaders did not sign the letter, including OpenAI CEO Sam Altman, Microsoft AI chief Mustafa Suleyman, and xAI founder Elon Musk, despite Musk having signed a similar FLI letter in 2023. Their absence raises questions about the tech industry’s willingness to self-regulate, especially as companies race to release increasingly powerful models like GPT-5.

Toward Democratic AI Governance

The letter’s core message is clear: AI development must be a democratic process. Decisions about humanity’s technological future should not be made solely by Silicon Valley executives. As FLI cofounder Anthony Aguirre puts it, “Nobody developing these AI systems has been asking humanity if this is OK. We did—and they think it’s unacceptable.”

Whether this latest plea will influence policy or industry behavior remains uncertain. But it underscores a growing consensus: the future of AI should be shaped by science, ethics, and public will—not unchecked ambition.
