Is superintelligence the idea that eats smart people?
Nah
I go on about AI a lot. Specifically, I go on about superintelligent AI. I believe we’re likely to create a form of advanced general intelligence at some point in the future that’ll have the capability to end life as we know it, and the inclination to do so unless we’re both exceedingly diligent and exceedingly lucky.
It’s an embarrassing belief to hold, because it sounds alarmist and absurd. It also threatens to overshadow valid arguments for prioritising short-term issues with sophisticated narrow AI systems, such as their potential to trigger an automated arms race or to undermine the reliability of video evidence.
Nevertheless, I can’t resist taking a poke at this article. Several people have presented it in the comments as a damning counterargument to the AI safety concerns I’ve raised, despite most of it being absolute rubbish!