A Call to Pause the March to Superintelligent AI Gains Widespread Support

A diverse coalition of more than 800 public figures has united to demand a halt to the development of artificial superintelligence. The group, which includes Apple co-founder Steve Wozniak, Prince Harry, AI scientist Geoffrey Hinton, former Trump aide Steve Bannon, and rapper will.i.am, signed a statement calling for a prohibition on AI work that could lead to machines surpassing human intelligence.

The statement was organized by the Future of Life Institute, which argues that the breakneck pace of AI advancement is outpacing both public understanding and regulatory oversight. The core demand is a ban on superintelligence development, to be lifted only once there is broad scientific consensus that it can be done safely and controllably, and only after strong public approval has been secured.

The institute's executive director, Anthony Aguirre, said the current trajectory has been chosen by AI companies and economic forces without a broader societal discussion about whether it is the desired path. He questioned whether the public truly wants this rapid, largely unchecked development.

The debate centers on the distinction between artificial general intelligence, or AGI, which would allow machines to reason and perform tasks as well as humans, and superintelligence, in which AI would outperform even the best human experts. Critics often cite this potential for superhuman capability as an existential risk to humanity. In practice, however, current AI systems remain limited to specific tasks and still struggle with complex challenges such as fully autonomous driving.

Despite the absence of recent fundamental breakthroughs, major tech companies are investing billions in more powerful AI models and the massive data centers required to run them. Meta CEO Mark Zuckerberg has said that superintelligence is in sight, while xAI founder Elon Musk has claimed it is happening in real time. OpenAI CEO Sam Altman has predicted that superintelligence could arrive by 2030. Notably, none of these industry leaders, nor prominent figures from their companies, signed the statement calling for a pause.

This is not the first appeal to slow AI development. Last month, more than 200 researchers and officials, including Nobel laureates, issued an urgent call for red lines against AI risks. That warning, however, focused on more immediate dangers already beginning to materialize, such as mass unemployment, the exacerbation of climate change, and human rights abuses. Other critics warn of a potential AI economic bubble that could eventually burst, with severe consequences for the global economy.


