Philosopher Nick Bostrom recently posted a paper in which he postulated that a small chance of AI annihilating all humans might be worth the risk, because advanced AI might relieve humanity of "its universal death sentence." That upbeat gamble is quite a leap from his previous dark musings on AI, which made him a doomer godfather. His 2014 book *Superintelligence* was an early examination of AI's existential risk.
➡️ Read the full article at Wired
This excerpt is provided for informational purposes with a link to the original source. All rights belong to the original publisher.
