Discussion about this post

Will Kiely:

> I think we do have a shot at building a real-life scientific consensus robust enough to motivate serious technical standards before it’s too late.

That's the big question, isn't it? How good do you think that shot is?

Conversely, what is your unconditional forecast of human extinction before 2040?

I just watched The AI Doc last night with my sister, who is visiting from across the country. It was my third viewing of the film. She liked it and thought it was informative for someone like her who doesn't know much about AI. But I got the sense that even after watching it she is still part of the 99+% of Americans who, if asked to name the most important problem facing America today, wouldn't say AI.

I thought I had a decent understanding of why most Americans are concerned about AI and yet so few name it as *the most important* problem, but after this latest watch-through I'm doubting whether I really understand why that is.

Is it just the lack of scientific consensus about the risks? AI risk has been a mainstream topic for three years now, ever since Hinton left Google and began speaking out, so people have heard how large many leading experts think the risk is. Do they simply remain skeptical, given the lack of consensus, that the risk is anywhere near as high as those experts say?

Stuart Russell frequently points out in interviews that the annual risk of human extinction that humanity is on track to take on by developing AI over the next decade or two is about a *million times* higher than the risk thresholds that national and international regulatory bodies have set for meltdowns in new nuclear power plants. Even if most Americans intuitively only partially believe e.g. Hinton and Bengio's 10-50% AI existential risk estimates (say, by putting only 10% weight on them), this still results in a 1-5% AI risk estimate (the arithmetic is spelled out below), which I would think is more than high enough to make AI the most important problem facing the country today.

Yet 99+% of people don't think AI is the most important problem, and I'm not sure why. Nor am I sure what would lead them to start thinking it is. And I'm skeptical that we will get the political will necessary for sufficiently "serious technical standards" and regulation to be created and passed until a lot more people start considering AI risk the most important problem facing society today.
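To spell out that discounting arithmetic (the 10% weight is just the illustrative figure from the paragraph above, not a measured one): with weight $w = 0.10$ placed on expert estimates $p \in [0.10, 0.50]$,

$$w \cdot p \in [0.10 \times 0.10,\ 0.10 \times 0.50] = [0.01,\ 0.05],$$

so even a heavily discounted reading of those expert numbers still leaves a 1-5% chance of extinction.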

So this is why I think it might be useful for people like you to clearly state your unconditional forecast of human extinction before 2040: it communicates the gravity of the situation in a way that I would expect to cause many people to update toward the view that AI is the most important problem, even if they only partially believe you. But I could easily be wrong; maybe this is not at all what is needed for people to update. Maybe nothing short of scientific consensus will be enough.
