maxbarry.com
Wed 07 Apr 2021

Thoughts On Whether A.I. Will Kill Us All

Recently I read a 1,600-page book, “Rationality” by Eliezer Yudkowsky. That’s a lot of pages. You wouldn’t be impressed if I read 160 ten-page books. If I get through one whopper, though, that’s worth mentioning.

I usually dislike non-fiction, because it feels like cheating. I go to a lot of trouble to craft rich, internally logical dynamic systems of interacting people and parts, and some bozo comes along and just writes down what’s true. I feel like anyone can do that. But this non-fiction book was great, because it changed my most fundamental belief. (Previously, I thought the scientific method of investigation was the best way to figure out what’s true. Now I realize Bayesian inference is better.)
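(If you’re wondering what Bayesian inference actually is: it’s just updating how strongly you believe a hypothesis whenever new evidence turns up, using Bayes’ theorem. Here’s a minimal sketch in Python; the prior and likelihood numbers are invented purely for illustration, not from the book.)

    # Bayes' theorem: P(H|E) = P(E|H) * P(H) / P(E)
    # All numbers below are made up, just to show the update step.
    prior = 0.01             # P(H): belief in the hypothesis before any evidence
    p_e_given_h = 0.90       # P(E|H): chance of seeing the evidence if H is true
    p_e_given_not_h = 0.05   # P(E|~H): chance of seeing it anyway if H is false

    # Total probability of seeing the evidence at all, P(E)
    evidence = p_e_given_h * prior + p_e_given_not_h * (1 - prior)

    posterior = p_e_given_h * prior / evidence
    print(f"belief after evidence: {posterior:.3f}")  # ~0.154

Feed each posterior back in as the next prior as evidence keeps arriving and that’s the whole method: beliefs as probabilities, nudged by evidence.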

So that’s not bad. If I’m writing a book, any kind of book, and someone reads it and changes their most fundamental belief, I’m calling that job well done. I’m happy if my book changes anyone’s opinion about anything. I just want to have made a difference.

“Rationality” covers a lot of topics, including A.I. Previously, I thought A.I. might be just around the corner, because Google has gotten really good at recognizing pictures of cats. But this book disabused me of the notion that we might be able to push a whole lot of computers into a room and wait for self-awareness to pop out. Instead, it seems like we have to build a super-intelligent A.I. the same way we do everything else, i.e. one painstakingly difficult piece at a time.

Which is good, because I’m pretty sure that A.I. will kill us all. There’s a big debate on the subject, of course, but I hadn’t realized before how much it resembles climate change. By which I mean, in both cases, there’s a potential global catastrophe that we know how to avoid, but the solution requires powerful people and companies to act against their own short-term interests.

This hasn’t worked out so well with climate change. All we’ve managed to do so far is make climate change such a big issue that it’s now in the short-term interest of more of those people and companies to look like they give a crap. I feel like once we get to the point where they have to choose between a financial windfall and risking a runaway super-intelligent A.I., we’re in trouble.

I just listened to a great Ezra Klein interview with Ted Chiang, a brilliant author you should read, called “Why Sci-Fi Legend Ted Chiang Fears Capitalism, Not A.I.” Ted has a more optimistic view than mine, but I think the premise is exactly right. The danger isn’t that we can’t stop a super-intelligent A.I.; it’s that we’ll choose not to.