Everyone Except Me is Wrong About AI
I wrote about AI already, but that was about how we’re all going to die. Since then, the conversation has become more nuanced. Now I’m encountering more subtle ideas I think are totally wrong. So because I know better, here’s why.
“AI is already here.”
ChatBots are good at figuring out what comes next when you start a sentence with, “The capital of Antigua is…” That’s pretty cool. We didn’t have that before. But it’s not intelligence. It’s almost the opposite of intelligence, like the difference between the kid in high school who was always studying and that guy who never studied but could talk and is now a real estate agent. Both can sound smart, but only one knows what he’s talking about.
BY THE WAY, it’s very on-brand for Earth 2023 that our robots are designed to sound plausible rather than be correct. Remember in Star Wars how C-3PO delivered a precise survival probability of flying into an asteroid field? (3720 to 1.) And Han Solo was like, “Shut up, C-3PO,” because he was too cool and handsome to be bothered by math. OR SO WE THOUGHT, because that was the kind of AI we were imagining in the 1980s: AI that was, before anything else, correct.
But if C-3PO was a ChatBot, no wonder Han had no time for his bullshit. All C-3PO could do was regurgitate what other people tended to say about surviving asteroid fields, on average.
“AI is almost here.”
Sure, ChatBots have their flaws, like asserting gross fabrications with confidence, but look at the rate of progress! Check out how Stable Diffusion can produce high-quality images in seconds by quietly aggregating decades of work by uncredited artists! It’s not perfect, but imagine where we’ll be in a few years!
I will concede that AI has made tremendous progress in these two critical areas:
- pretending to know what it’s talking about
- stealing from artists
I’m not contesting that. But I don’t agree that honing these skills will lead to genuine AI, of the C-3PO variety, which is basically a person, only artificial. To get that, we need AI that can perceive things, and form an internal model of reality, and use it to make predictions. If instead it’s only good at imitating what everyone else does, that’s not really AI. It’s just statistics.
“AI is just statistics.”
So, yes, everyone realized that if you call your 18-line Python program an “AI,” it gets more interest. Now when someone says “AI,” they might mean C-3PO, or ChatGPT, or just a plain old computer program that until six months ago was a utility or model or algorithm.
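For reference, here’s roughly what one of those 18-line “AIs” looks like. This is a made-up sketch (my own toy corpus and function names, not anybody’s actual product) that predicts the next word by counting which word usually comes next:

```python
# A deliberately tiny "AI" (illustrative only; toy corpus and names are mine):
# count which word tends to follow which word, then predict the most common one.
from collections import Counter, defaultdict

corpus = "the capital of antigua is saint johns . the capital of france is paris ."
words = corpus.split()

follows = defaultdict(Counter)
for current, nxt in zip(words, words[1:]):
    follows[current][nxt] += 1  # tally: after `current`, how often does `nxt` appear?

def predict_next(word):
    """Return whatever most often came after `word` in the training text."""
    if word not in follows:
        return "..."  # never seen it; shrug plausibly
    return follows[word].most_common(1)[0][0]

print(predict_next("capital"))  # prints "of", the word that most often followed "capital"
```

Eighteen lines, and it doesn’t know anything about Antigua; it only knows what usually comes next.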
When we mean C-3PO, we should probably say “AGI” (artificial general intelligence), or “strong AI,” but nobody likes redefining terms just because they’ve been appropriated, so we don’t. We do believe, though, that there’s a big difference between an AI that is self-aware, has a mental model of reality, and can fall in love, and an AI that auto-aggregates blog posts. We only feel bad about turning off the first one.
However, even the C-3PO type of AI will undoubtedly be “just statistics.” The problem with “it’s just statistics” is the “just.” It implies that statistics can never lead to anything life-like. And that truly intelligent, conscious creatures like us possess something entirely separate and perhaps magical, which nobody is likely to engineer anytime soon.
This is a dumb comfort thought. Chickens are just beak and feathers. Trees are just wood and leaves. Humans are just food and chemistry. We can dismiss anything like that. The universe doesn’t care what you’re made out of.
“AI is not already here.”
We seem to think there’s a line we’re creeping toward with AI that’s increasingly sophisticated, until suddenly: Eureka! It has gained anima, a soul, consciousness, some special quality that we will admit to sharing with it. And then we have AI citizens, who should probably have rights, and not be property.
So we try to guess when this line might be crossed—next year, twenty years, a hundred years, never? We eye each AI iteration, considering how human-like it is, whether it has finally gained the necessary soul/anima/consciousness/je ne sais quoi. But there is no line. There’s no binary yes/no. There wasn’t when life emerged from the primordial soup, or became intelligent, or recognizably human.
AI will never gain the special magical quality that makes us truly intelligent beings, because we don’t have it, either. We’re wasting our time when we try to figure out how human-like the machines are; we should examine how machine-like we already are.
Because we’re predictable as heck. We develop mechanical faults. The Wikipedia page on free will is 16,000 words long and both-sides it.*
We are creatures of chemistry and biology. They might be probability and statistics. Potato, potato. There’s life all around us, of varying shades; intelligence of all kinds. We live in a universe that isn’t picky about what you’re made of. We’re here now, but so are they.
Bonus ideas:
The Alignment Problem
This is the idea that the real problem with AI is figuring out how to make it do what we want but without the part where it destroys humanity because it didn’t realize that when we asked for paperclips, we meant without plundering the Earth’s core. Okay, sure. That’s a good first step. But aligning it with human morality only helps so long as there aren’t humans who want to plunder Earth’s core, too. And there are. Also there are humans who don’t want to plunder Earth’s core, necessarily, but do want to have a job and get paid, and capitalism is awesome at packaging those people up into core-plundering machines.
AI will be good unless we make it bad, so let’s just not do that
This one speaks to a pervasive failing on the part of smart people, which is the belief that once they figure out a solution, they’ve solved the problem. But we figured out how to avoid catastrophic climate change decades ago; we’re just not doing it. There is no “we.” “We” can’t decide anything. “You” can decide not to build bad AI. But you can’t stop me from doing it.
Comments
Alan W (#1427)
Location: Spokane, Washington
Quote: "Corgis are like potato chips"
Posted: 451 days ago
Mapuche (#1184)
Location: Darwin, Australia
Quote: "Inconceivable!"
Posted: 451 days ago
Maybe I should have had Chat-GPT to compose this comment. You can tell I didn't because of the bad punctuation.
Karen (#2180)
Location: wishing I was at home
Quote: "Question authority, not your mother!"
Posted: 451 days ago
syrup6 (#1224)
Location: Arkansas
Quote: ""Truth always rests with the minority, and the minority is always stronger than the majority, because the minority is generally formed by those who really have an opinion" - Kierkegaard"
Posted: 451 days ago
Hear, hear!
We need scheduled downtime/maintenance. Periodically we go offline. We overrun our abilities to process and slow everything down. I keep thinking though, what on earth will AI be like in the age of quantum computing? Machine Man 2.0.
But then I'm like, no, I didn't mean duck, auto-correct, and realize we're fine just a little bit longer.
Alice Beltran (#6667)
Location: California
Posted: 451 days ago
Max
Location: Melbourne, Australia
Quote: "I'm my number one fan!"
Posted: 450 days ago
I do think C3PO must have been a chatbot, now I think about it. He was wrong a lot of the time. None of the droids in Star Wars are particularly smart.
towr (#1914)
Location: Netherlands
Posted: 449 days ago
> [..]
> b) stealing from artists
Aside from the scale, I don't see why it's so different for a machine to learn from someone else's work than it is for a human.
Eh. Maybe scale is all it is. Standing on the shoulders of giants is fine, but for a giant to stand on yours, not so much.
On the other hand, given how poor an example humans set, it would probably be a good idea if we could get AI to learn in another way, so they don't turn out as racist, sexist, greedy and whatnot.
ML engineers used to think models would get better the more data you trained them on, but it turns out that if you train them on more of the internet, beyond some point they just get more extreme, delusional and "misaligned". And the sexist bias that image-generating models have should also not come as a surprise, given the input.
Zetronius (#8965)
Location: By the Thames
Quote: "We are but simple minded creatures"
Posted: 149 days ago
Stop acting like Terminator is going to be based on a true story
Comments are now closed for this post.