“When I spoke to [Nick] Bostrom in 2024, he was midway through the publicity campaign for his own new book, Deep Utopia. In the book, Bostrom considers a world in which the development of superintelligent AI has gone well. Some observers, he told me, have assumed that this means he feels a greater bullishness about humanity’s prospects of surviving and thriving. Alas. ‘We can see the thing with more clarity now,’ said Bostrom, ‘but there has been no fundamental shift in my thinking.’ When he wrote Superintelligence, he said, there seemed an urgent need to explore the risks of advanced AI and to catalyze work that might address those risks. ‘There seemed less urgency to develop a very granular picture of what the upside could be. And now it seems like time to maybe fill in that other part of the map a bit more.’”
https://asteriskmag.com/issues/08/looking-back-at-the-future-of-humanity-institute?s=31
Did I read that right? It’s now more urgent to get granular about the upside than about the catastrophic risks?