maxbarry.com
Wed 07 Apr 2021

Thoughts On Whether A.I. Will Kill Us All

Recently I read a 1,600-page book, “Rationality” by Eliezer Yudkowsky. That’s a lot of pages. You wouldn’t be impressed if I read 160 ten-page books. When I get through one whopper, though, that’s worth mentioning.

I usually dislike non-fiction, because it feels like cheating. I go to a lot of trouble to craft rich, internally logical dynamic systems of interacting people and parts, and some bozo comes along and just writes down what’s true. I feel like anyone can do that. But this non-fiction book was great, because it changed my most fundamental belief. (Previously, I thought the scientific method of investigation was the best way to figure out what’s true. Now I realize Bayesian inference is better.)

So that’s not bad. If I’m writing a book, any kind of book, and someone reads it and changes their most fundamental belief, I’m calling that job well done. I’m happy if my book changes anyone’s opinion about anything. I just want to have made a difference.

“Rationality” covers a lot of topics, including A.I. Previously, I thought A.I. might be just around the corner, because Google has gotten really good at recognizing pictures of cats. But this book disabused me of the notion that we might be able to push a whole lot of computers into a room and wait for self-awareness to pop out. Instead, it seems like we have to build a super-intelligent A.I. the same way we do everything else, i.e. one painstakingly difficult piece at a time.

Which is good, because I’m pretty sure that A.I. will kill us all. There’s a big debate on the subject, of course, but I hadn’t realized before how much it resembles climate change. By which I mean, in both cases, there’s a potential global catastrophe that we know how to avoid, but the solution requires powerful people and companies to act against their own short-term interests.

This hasn’t worked out so well with climate change. All we’ve managed to do so far is make climate change such a big issue, it’s now in the short-term interest of more of those people and companies to look like they give a crap. I feel like once we get to the point where they have to choose between a financial windfall and risking a runaway super-intelligent A.I., we’re in trouble.

I just listened to a great Ezra Klein interview with Ted Chiang (a brilliant author you should read) called “Why Sci-Fi Legend Ted Chiang Fears Capitalism, Not A.I.” Ted has a more optimistic view than mine, but I think the premise is exactly right. The danger isn’t that we can’t stop a super-intelligent A.I.; it’s that we’ll choose not to.

Comments


Karl (#5457)

Location: New York
Posted: 1282 days ago

Did you read Yudkowsky's Harry Potter and the Methods of Rationality? Pretty engaging for fanfic, but might help explain his turn to nonfiction.

Machine Man subscriber Max

Location: Melbourne, Australia
Quote: "I'm my number one fan!"
Posted: 1282 days ago

I did read that! Or some of it, at least. I found it a bit hard to get through because Harry acts like the world's most annoying libertarian. But it's a great conceit.

Machine Man subscriber Mapuche (#1184)

Location: Darwin, Australia
Quote: "Inconceivable!"
Posted: 1282 days ago

The main problem arising in the use of Bayesian inference is the misapplication of statistical methodologies, which unfortunately is rife.

One example would be attributing meaning to the standard deviation of a dataset while failing to account for the fact that the dataset is not normally distributed.

towr (#1914)

Location: Netherlands
Posted: 1282 days ago

> (Previously, I thought the scientific method of investigation was the best way to figure out what’s true. Now I realize Bayesian inference is better.)

I don't think they're opposing methods.
Bayesian inference is about turning evidence into a (meta-)model of reality. It won't lead you to a more probable view of reality if you don't seek out the right evidence. That's where the scientific method comes in: work out what competing models of reality predict, figure out which predictions will provide distinguishing evidence, adjust the probability of the models being true, and so on.
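Here's a minimal sketch of that update step in Python (the numbers are invented, purely for illustration): two competing models assign different likelihoods to a distinguishing observation, and Bayes' rule shifts belief toward whichever model predicted it better.

    # Two competing models of reality, starting with equal prior belief.
    priors = {"model_A": 0.5, "model_B": 0.5}
    # How strongly each model predicted the evidence we actually observed
    # (illustrative numbers only).
    likelihoods = {"model_A": 0.9, "model_B": 0.2}

    # Bayes' rule: posterior is proportional to prior * likelihood, then normalize.
    unnormalized = {m: priors[m] * likelihoods[m] for m in priors}
    total = sum(unnormalized.values())
    posteriors = {m: p / total for m, p in unnormalized.items()}

    print(posteriors)  # {'model_A': 0.818..., 'model_B': 0.181...}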

Also, technically, there is a very large problem with Bayesian inference: you need prior probabilities. Things get a bit iffy if you leave that as an open variable, because in many cases you won't be able to tell what is actually more probable -- it depends on the prior beliefs people hold.
I mean, what's the a priori probability that there's an invisible pink unicorn* behind you, or an invisible blue badger, or an invisible maroon koala, or nothing? 25% each? What if we throw in an invisible green elephant? 20% each? But then the prior for "nothing" has changed from 25% to 20%, and we can just add options till it's infinitesimally small.
(* Yeah, I know what people will say, "an object cannot be invisible and pink at the same time". Well, those people are stupid and must never have worked with image formats that have an alpha layer. :P )
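To put that arithmetic in a quick Python sketch (purely illustrative): a uniform prior hands out whatever probability the length of your hypothesis list dictates, so the prior for "nothing" shrinks every time another invisible creature is added.

    # A uniform prior over however many hypotheses you happen to list.
    hypotheses = ["nothing", "pink unicorn", "blue badger", "maroon koala"]
    print({h: 1 / len(hypotheses) for h in hypotheses})  # 0.25 each

    hypotheses.append("green elephant")
    print({h: 1 / len(hypotheses) for h in hypotheses})  # 0.20 each; "nothing" just got less likely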

Epistemology is a bitch.

> The danger isn’t that we can’t stop a super-intelligent A.I.; it’s that we’ll choose not to.

I think the danger is that we'll not choose to. People are great at not choosing.

Also super-intelligent AI could be the greatest thing ever. Yes, it might kill us all, but it might also bake us all cakes.
Heck, it might kill us just by giving us everything we desire. Just consider, the AI will have time on its side. It doesn't need to aggravate us and spark resistance by sending out legions of terminators to kill us. It can just send out armies of sexbots to outcompete breeding.

Lynne D Perry (#5100)

Location: Penfield, New York
Quote: "Resistance to Linux is futile. You *will* be assimilated."
Posted: 1282 days ago

From "Machine Man" to Bina48.

I didn't know if you'd heard of this AI 'bot Bina48 before; I think I might have written you once before about her (it?).

I recently watched a YT video of a trial where Bina48 was seeking legal support to prevent herself (itself?) from being unplugged. I've attached a link to the transcript of the trial, but there are portions in the transcript not included in the video... like how, when denied the injunction, she (it?) moved forward by intercepting the PayPal account of an entity that owed her money to buy her own server in an independent data center, and then downloaded herself to it as a backup that she maintains and keeps active. It gets pretty intense. www.terasemcentral.org/articles/bina48trial.html

But in reality she (it?) did complete a college course about the philosophy of love.
Enjoy: www.insidehighered.com/news/2017/12/21/robot-goes-college

Machine Man subscriber coolpillows (#3749)

Location: new york general sort of vicinity
Quote: ""It's not working" -- Joseph Clark"
Posted: 1282 days ago

Good thoughts from top to bottom, Max, as always. Everything from the 160 ten-page books at the top to Ted Chiang at the bottom hit home.

When I get a chance, if ever, I'll find this article someone published recently somewhere about how AI (I'm saving on periods, sorry) will never be able to write literature. The article was in a geeky journal and the science people went nuts. It was silly to me because writing, literature, fiction/non-fiction ... what moves people emotionally and intellectually, has nothing to do with a machine. It's not even a relevant discussion. But the AI programmer geeky people pounced on the author and I commented in there trying to thread the line between the two. Nobody pounced on me. And now I forget where that article is but it's relevant. I dunno if this is related to the book you read. I haven't even clicked on the link because I have a business call in 5 minutes and I want to finish this comment.

Like I said, I haven't even clicked on the link to the 1,600 page book, let alone ordered it and cracked it open but I CAN say that I saw Ezra Klein's thing on Ted Chiang, started to look at that, probably had to go to another business meeting or some such and then, well, over the weekend was in an independent book store and bought Ted Chiang's book. And I'm actually reading that one.

See, this stuff, conversation, writing, humans interacting through language and reason is what will save us. We might fight about it and be really violent and horrible but we do reflect on who we are, what we do and, well, THINK. I dunno ... feeling optimistic. It's still early in the day over here. Thanks Max!

towr (#1914)

Location: Netherlands
Posted: 1282 days ago

> When I get a chance, if ever, I'll find this article someone published recently somewhere about how AI (I'm saving on periods, sorry) will never be able to write literature

And when AI inevitably does, they'll just redefine literature so it didn't. :P

Humans seem to have an innate desire to be special. You see that with supposed traits that set us apart from animals as well. As soon as animals are discovered that have similar traits, we'll find something else, better, to set us apart from them.
I don't think there's a reason to think computers/AI fundamentally can't write literature, other than a fear of not being special as a species. After all, our brain is just a very wet, squishy, analog computer.

Abrum Alexander (#8221)

Location: Vermont, USA
Posted: 1281 days ago

The only way AI will kill us is if we program it to do so, unless we find a way for computers to learn and develop feelings. This idea reminds me of Providence 5. This ship can think, learn, and essentially have feelings. I don’t think we’re going to enter a global war with aliens anytime soon, though.

YourOtherRight (#5755)

Location: eugene, oregon
Quote: "nasty, brutish, and short"
Posted: 1281 days ago

There's a valid argument that we're asking the wrong question. It's not whether AI will destroy us but rather whether they'll just beat us to the punch. (I loved the selfish gene stuff in Providence ... and Monty Python's "autonomous collective/you're fooling yourself" bit might be prescient, the more we learn about our microbiome, evolution and physiology.)

Thanks for the heads up on the Ted Chiang interview -- I love his work. In turn I'll share a recommendation for Levar Burton's podcast. He read a great Ted Chiang story a few years ago, and it seems he often points me to writers I might not have found otherwise.

towr (#1914)

Location: Netherlands
Posted: 1281 days ago

> The only way AI will kill us is if we program it to do so.

Well, unfortunately the website swallowed my lengthy reply without an opportunity for amending it, because it had a link in it. I'll keep it brief this time.

State-of-the-art AI isn't really programmed, as such. It uses machine learning, and it's trained. Usually by feeding it lots of examples. AI is known to be quite adept at learning biases, like overt racism and sexism and whatnot, even though that's (usually) not intended, because there are traces of it in the training data.
We also don't really understand how AI comes to its decisions; we know it learns to follow examples, but it does so in a different way than we do. For example, when classifying a picture as cat or dog, it's influenced by the background, to such an extent that changing the background in a way that people won't even notice may change the AI's mind about whether it's seeing a dog or a cat.
So, basically, we could very well program/train an AI to kill us, unintentionally and without knowing it, because we don't really know what we're teaching it, and don't know what it's really learning.
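Here's a toy sketch of that failure mode in Python (nothing like a real image classifier; the two features and all the numbers are invented): a model trained on data where the "background" feature happens to correlate with the label ends up leaning on the background, so a dog-like shape on a cat-like background can come out as "cat".

    # Toy sketch of a learned spurious correlation (invented features, not real images).
    # Feature 0 = "animal shape", feature 1 = "background"; label 0 = cat, 1 = dog.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    labels = rng.integers(0, 2, size=1000)
    shape = labels + rng.normal(0, 1.0, size=1000)        # noisy "real" feature
    background = labels + rng.normal(0, 0.1, size=1000)   # near-perfect proxy for the label
    X = np.column_stack([shape, background])

    clf = LogisticRegression().fit(X, labels)

    # A dog-like shape against a cat-like background: the background tends to win.
    print(clf.predict([[1.0, 0.0]]))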

But unless we hook it up to the nuclear arsenal, I don't see any way at the moment that an AI would have the means.
If driverless cars start ramming people, it won't be the end of humanity, and they'll be taken out easily enough.

1001.0010.0101 (#925)

Location: Turn left at your CPU
Quote: "How can something be deemed artificial if it is itself. e.g. A.I."
Posted: 1281 days ago

Abrum Alexander (#8221)

Location: Vermont, USA
Posted: 1277 days ago

towr,

Reading what you wrote was very interesting. Based on what you were saying, AI is like a living being. If you’ve read Providence, this comes up in it.

What I’m getting from you is that AI can learn, adapt, and ultimately advance and evolve. It almost has a mind of its own. It will encounter a problem, figure out a solution, and then remember how it solved it. Once it encounters this problem again, it will repeat what it did the first time, but will try to find a better, more efficient way of solving it. It remembers everything it’s done in the past, and will tweak and improve it the best it can for the future. If AI has the capacity to do this, then I’m sure it will grow opinions on what it likes and doesn’t like. AI with opinions could be where this begins to be dangerous. Computers are currently unable to process data like humans can, but, if AI starts to form opinions, then doesn’t that make it even more human-like? Humans are dangerous, and if AI is closer to being “human”, then doesn’t that mean it’s more dangerous, too?

But, we have to remember, humans can also be incredibly good and reliable. This means that as AI becomes more human-like, maybe it will be more beneficial to our world.

I think it can go either way. Or, for most of us, we have moments where we, simply put, are not reliable or kind. This could be the same case for AI.

Whatever happens, I just hope that we humans are able to outsmart AI when the time comes. We should never have an AI be above us, or make life-changing decisions. We just don’t know for certain if AI is out to kill us, or if it’s here to help us. Things of high importance should not be determined by something that is so tentative.

towr (#1914)

Location: Netherlands
Posted: 1277 days ago

> What I’m getting from you is that AI can learn, adapt, and ultimately advance and evolve.

I'd say we're still quite a way away from AI that can really evolve. For practical reasons AI has a fixed "brain structure" that limits what sort of thing it can learn, and a fixed size that constrains how much it can learn.
So you could compare it to e.g. a bee. People have trained bees to find explosives and poisons (apparently it's cheaper than training dogs). But a bee can't learn how to make explosives, or other things that just don't fit in there.
There are techniques that can be used to evolve AI (evolutionary computing), where the structure and size of the "brain" can change, but it's very inefficient compared to the standard approach. So it's not really interesting for the companies pumping millions into researching the best cat-recognizer :P

The AI I'd probably worry most about is the type they create to play games. At the moment it's mostly simple arcade games. But what's particularly interesting is that it uses reinforcement learning: the AI just learns by playing, with trial and error, aiming to improve its score. And they learn how to cheat. Or rather, they learn to exploit bugs in the game. So that's a textbook case of solving a problem in a different way than you're supposed to.
Just imagine the possible havoc if you had an advanced version of that running a company, "playing" economics and the law like a game, with the sole goal of increasing profits.
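Here's a toy sketch of that dynamic in Python (a tabular one-step Q-learner with a deliberately planted "glitch" action; everything is invented for illustration): the learner only optimizes score, so it settles on the exploit rather than the intended move.

    # Toy reward-hacking sketch: the "glitch" pays more than playing properly,
    # so a score-maximizing learner converges on the exploit.
    import random

    reward = {"finish_level": 1.0, "exploit_glitch": 5.0}   # invented payoffs
    q = {action: 0.0 for action in reward}                  # estimated value per action
    alpha, epsilon = 0.1, 0.1                               # learning rate, exploration rate

    for episode in range(1000):
        if random.random() < epsilon:
            action = random.choice(list(reward))            # occasionally explore
        else:
            action = max(q, key=q.get)                      # otherwise act greedily
        q[action] += alpha * (reward[action] - q[action])   # one-step value update

    print(max(q, key=q.get))  # "exploit_glitch": it found the cheat, as designed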

> We should never have an AI be above us, or make life-changing decisions. We just don’t
> know for certain if AI is out to kill us, or if it’s here to help us.

I'm a bit on the fence about it. I mean, we also don't know about the humans ruling us if they... Actually, we're pretty sure a lot of them are not there to help us. And sometimes they're out to kill us, if they feel we threaten their power and it's the sort of country where they can get away with it.
And if you read any of the Culture novels, then benevolent rule by AI seems pretty tempting.

If AI becomes sentient, then I'd at least think they should have an equal vote and not be relegated to some sort of slave class. But I also understand a human dictator is a lot easier to overthrow than a super-smart AI dictator. Though probably the not-as-smart human dictator will just get a smarter-than-him AI advisor that eventually overthrows him (or just succeeds him, because time is always on the AI's side. It can wait.)

Abrum Alexander (#8221)

Location: Vermont, USA
Posted: 1277 days ago

>If AI becomes sentient, then I'd at least think they should have an equal vote...

Interesting. You were just comparing the bee to AI, right? The bee is sentient. I don’t think that just because something is aware or can have opinions, it should have a place in democracy; otherwise we’d end up giving bees a vote.

Now, I understand that if AI were to have the capacity to do things, like vote, then it would be super advanced. I think that AI can process data better than a bee can, so comparing the two isn't really fair.

Honestly though, I can't think of something that could vote just as well as AI could, excluding humans. That's what scares me about AI. It has extreme amounts of potential to pass by the human race in just about every intellectual way.

towr (#1914)

Location: Netherlands
Posted: 1277 days ago

Well, personally I wouldn't call a bee sentient. But maybe it's not a well-defined term, or doesn't mean what I think it means :P
What I mean is if AI becomes self-aware and has a certain level of general intelligence. So not just an awareness of its surroundings, but a sense of self, and ideas about its own past and future, etc. Like people, but different. So maybe more like aliens.

Of course, the "life" of AI might already warrant ethical consideration before it develops that far. If somehow it develops the ability to suffer, then we should avoid making it suffer unduly (just as with animals).
And on the other hand, if it becomes super-intelligent but doesn't have the ability to suffer/desire, then does it really matter how we treat it? (To the extent we might (mistakenly) view it as a person/being, I think it does, but not really for the AI's sake.)


The other side of the story is of course to get AI to treat us ethically. So before AI gets too smart, we should consider deeply how we want to raise it. And build in some virtues like empathy, compassion and fairness.
Or keep it narrowly intelligent and easy to control.

Abrum Alexander (#8221)

Location: Vermont, USA
Posted: 1276 days ago

It’s like we’re treating AI as if it’s a child. We need to teach it how to behave so it doesn’t start doing things that are unethical to its environment or itself.

And, if at some point, AI is “living a life”, I’d say that it would make its decisions very practically, according to what it wanted. Similar to humans, it would act out of emotion, and then decide if that was a good option. Really though, if you think about it, humans act out of emotion first. Maybe this would be the same for AI. Or, possibly, like what we discussed earlier, AI would automatically run all of these checks to decide what would be the best decision.

My predictions on the future are that AI is here to help us. I don’t see a reason, unless we give it one, for it to cause us harm. I personally believe that AI is a thing we should continue to develop.

Comments are now closed for this post.