
AI is the greatest threat to our existence that we have ever faced. The founder of the field of AI risk, Eliezer Yudkowsky, and his successor at the Machine Intelligence Research Institute, Nate Soares, explain why superintelligent AI is a global suicide bomb and call for an immediate halt to its development.