An anonymous reader writes: The Washington Post has an article about current and near-future AI research that manages to keep a level head about the subject: "The machines are not on the verge of taking over. This is a topic rife with speculation and perhaps a whiff of hysteria." Every so often, we hear seemingly dire warnings from people like Stephen Hawking and Elon Musk about the dangers of unchecked AI research. But actual experts continue to dismiss such worries as premature — and not just slightly premature. The article suggests our concerns might be better focused in a different direction: "Anyone looking for something to worry about in the near future might want to consider the opposite of superintelligence: superstupidity. In our increasingly technological society, we rely on complex systems that are vulnerable to failure in complex and unpredictable ways. Deepwater oil wells can blow out and take months to be resealed. Nuclear power reactors can melt down. Rockets can explode. How might intelligent machines fail — and how catastrophic might those failures be?"