Podcast: Center for AI Policy, on AI risk and listening to AI researchers

Crossposted from world spirit sock puppet.

I was on the Center for AI Policy Podcast. We talked about topics around the Expert Survey on Progress in AI, including why I think AI is an existential risk, and how much to listen to AI researchers on the subject. Full transcript at the link.

Podcast: Eye4AI on 2023 Survey

Crossposted from world spirit sock puppet.

An explanation of evil in an organized world

Crossposted from world spirit sock puppet.

The first future and the best future

Crossposted from world spirit sock puppet.

It seems to me worth trying to slow down AI development to steer successfully around the shoals of extinction and out to utopia.

But I was thinking lately: even if I didn't think there was any chance of extinction, it might still be worth prioritizing a lot of care over moving at maximal speed. Because there are many different possible AI futures, and I think there's a good chance that the initial direction affects the long term path, and different long term paths go to different places. The systems we build now will shape the next systems, and so forth. If the first human-level-ish AI is brain emulations, I expect a quite different sequence of events than if it is GPT-ish.

People genuinely pushing for AI speed over care (rather than just feeling impotent) apparently think there is negligible risk of bad outcomes, but they are also asking to take the first future to which there is a path. Yet possible futures are a large space, and arguably we are at a rare plateau from which we could climb very different hills, and get to much better futures.

Experiment on repeating choices

Crossposted from world spirit sock puppet.