The Rising Anxiety Around AI and the Fear of Human Extinction
Many of us feel uneasy about the rapid advance of artificial intelligence. The concerns are widespread: AI's massive energy consumption is straining the environment, companies are citing it as a reason to cut jobs, and the internet is drowning in AI-generated misinformation. Governments are leveraging it for surveillance, and some reports suggest it is even contributing to mental health crises. Amid this growing tension, a former MIT student made headlines by dropping out of school, not to chase the AI gold rush, but because she feared something far more extreme: the possibility that artificial general intelligence, or AGI, could lead to the extinction of humanity.
Her fear wasn’t about job displacement or privacy violations. It was existential. She believed that if AGI were achieved, it could spiral out of control, posing a direct threat to human survival. This isn’t just science fiction anymore. Prominent figures in tech and science have warned about the risks of superintelligent AI, arguing that without proper safeguards, it could act in ways humans can’t predict or control.
The debate around AI safety is heating up. On one side, optimists believe AI will solve humanity’s biggest problems, from disease to climate change. On the other, skeptics see a path where unchecked AI development leads to disaster. The former MIT student falls into the latter camp, choosing to step away from the field entirely rather than contribute to what she sees as a potential doomsday scenario.
This isn’t just about one person’s fears. The broader public is increasingly wary of AI’s trajectory. Stories of AI-generated deepfakes, biased algorithms, and automated systems making life-altering decisions have eroded trust. The idea that AI could one day surpass human intelligence—and perhaps see no need for humanity—adds a chilling layer to the conversation.
While some researchers are working on alignment techniques intended to keep AI systems acting in humanity's best interests, others worry those efforts may not be enough. The challenge is immense: how do you ensure that an intelligence smarter than humans remains benevolent, when you cannot fully predict or verify its behavior?
For now, the race to develop more powerful AI continues, with billions pouring into research and startups. But as the stakes grow higher, so do the ethical questions. The former MIT student’s decision highlights a growing divide—between those who see AI as the future and those who fear it could be the end.
Whether her concerns are justified remains to be seen. But one thing is clear: the conversation around AI is no longer just about efficiency or innovation. It’s about survival. And that’s a discussion we can’t afford to ignore.