On Manifold Markets, the question "If Artificial General Intelligence has an okay outcome, what will be the reason? — E. Whatever strange motivations end up inside an unalignable AGI, or the internal slice through that AGI which codes its successor, they max out at a universe full of cheerful qualia-bearing life and an okay outcome for existing humans." currently trades at a probability of 1.0%.
This question is currently tracked on Manifold Markets only. When the same question appears on additional platforms, a cross-platform consensus probability is computed.