Thoughts That Spring To Mind Regarding AI Risk
These are all prima facie thoughts, to one degree or another.
Perhaps we need to work urgently toward the elimination of all nuclear weapons. We might settle for simply taking them off quick-launch or fast-activation status.
Antitrust law may need to be suspended or adjusted for the purposes of alignment work. Yes, yes, this opens up lots of potential for mischief under false pretenses, but then again, is antitrust a net economic/social positive at all?
Will rogue or viral AI in the wild end up saving us from top-down, centrally planned AI, be it from a government or from private entities?
How big is the risk of AI intentionally (at the direction of humans) or unintentionally (misaligned) posing as the second coming (first, in some cases) of the Messiah? Will there be a deity showdown?
How much extreme tail risk will we tolerate exposing ourselves to, in blissful ignorance or willful cognitive dissonance, for all the amazing benefits AI will bring? Alternatively, will continual disappointment from failures to launch and unfulfilled expectations take attention and resources away from alignment and defensive containment efforts, thus allowing tail risk to grow and materialize?
Does AI allow the bowling-alone social trend to increase by orders of magnitude? How much masturbatory pleasure (literal and figurative) will AI create, such that those who are more at risk (the socially awkward, the introverted, those deprived of opportunities for social mobility, etc.) drop out and disappear?
IF AI risk is existential, is the prophet of our future doom Eliezer Yudkowsky or Robin Hanson?
Does the future belong to the Amish? To the Bushmen of Africa, et al.?
Can we set in motion on a starship (Mars is too close) an evolution of life guided toward a new humanity (create the basic environment, start with simple creatures, and tilt/direct/force the outcome toward humans)? Would we want to do this if we could?
For the record, I am still very optimistic about AI (benefits greatly exceed costs) and fairly optimistic about alignment (the risk is actually low, or the solvability is actually high).
Inspired in part by this Cold Takes post by Holden Karnofsky.