maxbarry.com
Mon 17 Jul 2023

Everyone Except Me is Wrong About AI

What Max Reckons

I wrote about AI already, but that was about how we’re all going to die. Since then, the conversation has become more nuanced. Now I’m encountering more subtle ideas I think are totally wrong. So because I know better, here’s why.

  1. “AI is already here.”

    ChatBots are good at figuring out what comes next when you start a sentence with, “The capital of Antigua is…” That’s pretty cool. We didn’t have that before. But it’s not intelligence. It’s almost the opposite of intelligence, like the difference between the kid in high school who was always studying and the guy who never studied but could talk his way through anything and is now a real estate agent. Both can sound smart, but only one knows what he’s talking about.

    BY THE WAY, it’s very on-brand for Earth 2023 that our robots are designed to sound plausible rather than be correct. Remember in Star Wars how C-3PO delivered the precise odds of successfully navigating an asteroid field? (3,720 to 1.) And Han Solo was like, “Shut up, C-3PO,” because he was too cool and handsome to be bothered by math. OR SO WE THOUGHT, because that was the kind of AI we were imagining in the 1980s: AI that was, before anything else, correct.

    But if C-3PO were a ChatBot, no wonder Han had no time for his bullshit. All C-3PO could do was regurgitate what other people tended to say about surviving asteroid fields, on average.

  2. “AI is almost here.”

    Sure, ChatBots have their flaws, like asserting gross fabrications with confidence, but look at the rate of progress! Check out how Stable Diffusion can produce high-quality images in seconds by quietly aggregating decades of work by uncredited artists! It’s not perfect, but imagine where we’ll be in a few years!

    I will concede that AI has made tremendous progress in these two critical areas:

    1. pretending to know what it’s talking about
    2. stealing from artists

    I’m not contesting that. But I don’t agree that honing these skills will lead to genuine AI, of the C-3PO variety, which is basically a person, only artificial. To get that, we need AI that can perceive things, and form an internal model of reality, and use it to make predictions. If instead it’s only good at imitating what everyone else does, that’s not really AI. It’s just statistics.

  3. “AI is just statistics.”

    So, yes, everyone realized that if you call your 18-line Python program an “AI,” it gets more interest. Now when someone says “AI,” they might mean C-3PO, or ChatGPT, or just a plain old computer program that until six months ago was a utility or model or algorithm.
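    For the record, the “18-line Python program” jab isn’t much of an exaggeration. Here is roughly what “just statistics” looks like at toy scale, assuming a made-up training sentence — a bigram model that predicts the next word purely by looking up what has followed it before, with no model of reality anywhere in sight:

```python
import random
from collections import defaultdict

def train(text):
    # Record, for every word, the words that followed it in the training text.
    model = defaultdict(list)
    words = text.split()
    for current, following in zip(words, words[1:]):
        model[current].append(following)
    return model

def babble(model, word, n=8):
    # Starting from `word`, repeatedly pick a word that has followed
    # the previous one before. No understanding -- just lookup and chance.
    out = [word]
    for _ in range(n):
        followers = model.get(out[-1])
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

# Toy training data, made up for illustration.
corpus = "the capital of Antigua is St John's and the capital of France is Paris"
model = train(corpus)
print(babble(model, "the"))  # e.g. "the capital of France is Paris"
```

    Scale that table up by a few hundred billion entries and you have the general shape of the thing — plausible-sounding continuations, aggregated from whatever everyone else tended to say.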

    When we mean C-3PO, we should probably say “AGI” (artificial general intelligence), or “strong AI,” but nobody likes redefining terms just because they’ve been appropriated, so we don’t. We do believe, though, that there’s a big difference between an AI that is self-aware, has a mental model of reality, and can fall in love, and an AI that auto-aggregates blog posts. We only feel bad about turning off the first one.

    However, even the C-3PO type of AI will undoubtedly be “just statistics.” The problem with “it’s just statistics” is the “just.” It implies that statistics can never lead to anything life-like. And that truly intelligent, conscious creatures like us possess something entirely separate and perhaps magical, which nobody is likely to engineer anytime soon.

    This is a dumb comfort thought. Chickens are just beak and feathers. Trees are just wood and leaves. Humans are just food and chemistry. We can dismiss anything like that. The universe doesn’t care what you’re made out of.

  4. “AI is not already here.”

    We seem to think there’s a line we’re creeping toward with increasingly sophisticated AI, until suddenly: Eureka! It has gained anima, a soul, consciousness, some special quality that we will admit to sharing with it. And then we have AI citizens, who should probably have rights, and not be property.

    So we try to guess when this line might be crossed—next year, twenty years, a hundred years, never? We eye each AI iteration, considering how human-like it is, whether it has finally gained the necessary soul/anima/consciousness/je ne sais quoi. But there is no line. There’s no binary yes/no. There wasn’t when life emerged from the primordial soup, or became intelligent, or recognizably human.

    AI will never gain the special magical quality that makes us truly intelligent beings, because we don’t have it, either. We’re wasting our time when we try to figure out how human-like the machines are; we should examine how machine-like we already are.

    Because we’re predictable as heck. We develop mechanical faults. The Wikipedia page on free will is 16,000 words long and both-sides it.*

    We are creatures of chemistry and biology. They might be probability and statistics. Potato, potato. There’s life all around us, of varying shades; intelligence of all kinds. We live in a universe that isn’t picky about what you’re made of. We’re here now, but so are they.

Bonus ideas:

  • The Alignment Problem

    This is the idea that the real problem with AI is figuring out how to make it do what we want but without the part where it destroys humanity because it didn’t realize that when we asked for paperclips, we meant without plundering the Earth’s core. Okay, sure. That’s a good first step. But aligning it with human morality only helps so long as there aren’t humans who want to plunder Earth’s core, too. And there are. Also there are humans who don’t want to plunder Earth’s core, necessarily, but do want to have a job and get paid, and capitalism is awesome at packaging those people up into core-plundering machines.

  • AI will be good unless we make it bad, so let’s just not do that

    This one speaks to a pervasive failing on the part of smart people, which is the belief that once they figure out a solution, they’ve solved the problem. But we figured out how to avoid catastrophic climate change decades ago; we’re just not doing it. There is no “we.” “We” can’t decide anything. “You” can just not build bad AI. You can’t stop me from doing it.

* The illustrative photo and caption at the top of that Wikipedia page on free will is fantastic, by the way.