AI Is Not the Problem

Composed on the 6th of February in the year 2023, at 3:51 PM. It was Monday.

There’s a rule for watching Star Trek episodes: Pay attention to three-word noun phrases in the jargon. Reversed polarity? Shut up, Wesley, it won’t work. Main deflector dish? Plot point. Inverse tachyon pulse? Say hello to my little paradox.

This is also how real jargon bubbles up to pseudo-relevance in sciencey media. Quantum indeterminacy almost counts as three words, but is gibberish to most. The double-slit experiment was enough phrase to get people looking at the fundamental weirdness of quantum mechanics, then wildly misinterpret it to defend their healing crystal budget.

Today the phrase is deep language model. Artificial intelligence, Deep Blue, AlphaGo, whatever, it’ll kill us all eventually. But deep language model? Time for some 300-word articles about how it’s going to kill us all next Thursday.

It’s not. But there’s some Morse code in the doom drums. Let’s go through the list.

WarGames

A more and more curiously prescient movie, WarGames probably best represents where we’re at now. WOPR is a passable conversationalist, capable of learning quickly by playing against itself, and unable to distinguish reality from simulation. The movie mixes the era’s fear of computers and automation with the Cold War anxiety of the time.

The curious thing is that the computer ends up being the good guy. This pseudo-AI is the opposite of Skynet; given sufficient data, it produces the distinctly non-human response of not going to war and giving up its power. In one of the worst outcomes in cinema, everyone sighs in unearned relief as the AI hands the nukes back to the humans.

A.I.

This will be an oddly short entry given the context, but despite spending my entire adult life studying artificial intelligence, computers, programming, film theory, screenwriting, and consciousness, I don’t know what this movie is about. I think it’s about what Her, M3gan, 2001, and all the other smartbot movies are about, which is that humans have a weird relationship with technology, so it gets a mention, but next time hire a script doctor.

Watson

I watched the first episode of Jeopardy with Watson. It was eerie, and equal parts comforting and disappointing when it bombed the last question. This was the first advanced language-parsing model that the general public was aware of, and it turned out to be quite good at Jeopardy, while simultaneously demonstrating it did not have a model of the real world in any reflective way.

It was forgotten, but it shouldn’t have been, since it could have allayed the fear of today’s bogeyman.

AlphaZero

It was sad when a computer beat us in go, but the writing was on the wall since Kasparov cried foul. It was slightly more alarming when they abstracted the engine to teach itself multiple games, but all that computing power is duking it out in a small, closed set of rules in a hilariously limited environment.

This was a huge leap for anyone paying attention, which was nobody aside from AI researchers and me. Kasparov’s complaint was that Deep Blue had access to nearly the entire history of chess at a moment’s notice, which is essentially complaining that the computer had more RAM. The algorithms improved and you can now put a grandmaster on your phone, but those still evolved from a brute force methodology. We go players scoffed through the 90s because the potential space of games in go is exponentially vaster than in chess, and you can’t even start the process with a history of go games because they won’t begin to cover the possibilities six moves in. We felt safe in this vast space of possibility, because even computers were a long way from having the cheat codes to our human game of steely logic and intuitive projection.

I imagine the first swordmaster to encounter a gun was extremely, if briefly, surprised. Steely logic turns out not to be as effective as speeding leaden statistical modeling in deterministic gameplay. Iain M. Banks called it in 1988: A potentially computable game cannot be won against something that computes faster than us. An element of random chance provides a possible victory, but even these games have optimal strategies, the difference being those strategies can be kept in a human head.[1] Even ignoring Iain’s godlike AIs, who probably could keep the whole of potential go games in their heads, go is still a game with a logical sequence that can be learned by a collection of microchips that will then outperform any living human. Strictly incalculable does not mean unnavigable, and all closed systems with macroscopic objects and clear rules are navigable, and will eventually be navigated better by computers.

This is a good thing. It’s literally why we built computers.

ChatGPT

It was my own cursed professional tribe that freaked out at ChatGPT, and I have some theories as to why. Software engineering is the modern dream job, even to someone as unambitious as me. Remote work is easy if allowed, the money is absurd, the demand is high. Even if 50,000 of us just got fired, there are tech entrepreneurs pouring out of Stanford ready to borrow money from their parents and develop fresh methodologies for tanking the economy. With few bumps, the last two decades have enshrined software engineering as the career immune to the cataclysmic shocks it creates in the world at large. The pandemic itself created more of us to funnel satirical amounts of money to our masters.

ChatGPT was the first hint of a crack in the fortress. Could we, too, be devalued due to technology? Forsooth, must we fear like peasants?

No.

After six or seven years in this professional clink, what people pay software engineers for is their knowledge of what not to do. If my friends and family ask me to build them a website I send them to Squarespace, since it will be much cheaper and probably better than anything I could do for them. We get paid to grasp the ecosystem at large, stay vaguely up to date, and divert ruinous architectural traps before they show up on a quarterly report. Anyone can google the solutions to most of the particular problems we deal with; we get paid to know what to google and how to read the answer.

The least insane rumination on the technology is that it will be a slightly improved “technologist’s Google,” but it’s not even that. A coworker brought up a ChatGPT solution for validating an array of email addresses. My response was, “Well, the email regex is wrong, but most email regexes are wrong.”

The formal specification of an email address is out there for anyone to find, but no one ever does, so virtually everyone hunts down or comes up with a half measure that does more harm than good because most emails look like [email protected]. 99 percent of emails will get past these half measures and it’s the first regex most programmers write when they’re young and haven’t discovered package managers. They then litter the internet with comments pertaining to this thing they’ve mastered like they just pulled an X-wing out of a swamp, thus ensuring the next generation of coders won’t find the right answer either. You could bring up the specification if you still haven’t discovered package managers and poke out a regex that reflects it, or you could do what’s best for your business from top to bottom and make sure it has “@” and “.” and some stuff between them and no Bobby Tables.
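The “what’s best for your business” check can be sketched in a few lines. This is my own illustration, not anyone’s canonical validator: the pattern and function names are invented for the example, and it deliberately does not attempt the full RFC grammar.

```python
import re

# A deliberately modest check: an "@", a "." after it, some stuff
# between them, and no whitespace. Not the formal specification,
# and not pretending to be.
PRAGMATIC_EMAIL = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def looks_like_email(candidate: str) -> bool:
    """Return True if the string is plausibly an email address.

    This intentionally rejects some technically valid addresses
    (quoted local parts, bare TLDs) and accepts some invalid ones;
    the real test of an address is whether mail to it arrives.
    """
    return bool(PRAGMATIC_EMAIL.match(candidate))
```

A plain string like `little@bobby.tables` passes; anything with no “@”, a doubled “@”, or embedded whitespace does not, which is about all a signup form needs before it sends the confirmation email that actually settles the question.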

ChatGPT is a first-year programmer googling. It has no concept of the ecosystem. It’s naive statistical patchwork, and that gets you a bad email regex, since it neither knows the correct answer nor grasps the right action; it’s just skimming the reading. It’s impressive that language modeling software can produce code that won’t destroy my laptop, but it takes more expertise to figure out its subtle errors than it does to pick through the top five google results and cobble together a decent solution. The idea that I will soon be inundated with requests to review AI-produced code rife with these subtleties is terrifying, but at the end of the day, it only makes my experience more valuable.

Someone has already pointed out that the fact that ChatGPT can create a passable high school essay is not so much an achievement in artificial intelligence as it is a condemnation of the way American schools teach people to write, and that’s a more succinct version of my own professional critique. The fact that grammatically or syntactically correct semi-nonsense collected from shallow knowledge can displace real world tests of comprehension means we’re not holding ourselves to a particularly high standard. Actual use of ChatGPT for articles or essays or code will produce more of the content that made its output subpar, and achieve little besides accelerating the homogenization of mediocrity.

That Guy at Google who Decided his Chatbot was Conscious

A bunch of people mocked this guy. Mostly technical people, who aren’t qualified. In fact, nobody’s qualified, anywhere, or has been since the concept of consciousness was floated, because nobody knows what it is. Is the chatbot he was talking to conscious the way I am? I doubt it. Then again, it was only a few years ago that I learned some people don’t have an internal monologue. My own boss doesn’t have an internal monologue. He’s a brilliant CTO who built the tech for a multi-million dollar company and a talented artist to boot, but without an internal monologue I can’t tell you how I would go about categorically separating him from a convincing automaton or a large parrot.

I’m going to skip over many millions of pages written on the subject, and summarize that we’re mostly sure that consciousness is a word for having an experience of the world, and we haven’t the foggiest notion about what direction to look in to find a mechanism for it. We don’t have any logic to guarantee anyone else even has it, besides the fact that they seem pretty convincing when they insist they’re not just chatty zombies.[2] It is among the most baffling mysteries in philosophy and science, and after thousands of years of thinking about it, we’re not even sure what the question is. Yet hundreds of people who don’t study it piled on to laugh at the thought of a computer having it.

Personally, I think there is probably consciousness all over the place. My idle theory is if consciousness arises from some pattern of information exchange, the concept of a server having an experience of the world is as absurd as someone not having an internal monologue having an experience of the world. Both of them experience an internal world radically different from mine, both talk to me in a seemingly self-directed manner, both can be broken with minimal exertion.[3] I think it’s more likely that the microchips powering the chatbot have an experience of the world than their combined efforts to model text responses that mimic a human’s ability to convey an experience. But, again, nobody knows. Maybe the ability to convey experience is the source of consciousness and it runs backwards. Wilder things have been proposed.

The point is whatever consciousness is, it may manifest in ways we will never think to investigate, and may experience existence in ways we will never be able to conceive. But no human thinks about that in regular discourse: they are only concerned with themselves and the idea of creating something that thinks like them.

It’s not going to happen any time soon, but this baseline narcissism is what fuels the fear mongered by the tech billionaires who have absolutely no business commenting on the things on which they tend to comment. The terror of building a super artificial intelligence is not due to having something super intelligent hanging around, it is the terror of having something super intelligent that acts like a human. Because if we manage to build something technologically superior to us that also acts like us, it will do what technologically superior humans always do to their neighbors.

DALL-E

Salvador Dali would have liked DALL-E because Salvador Dali was a huge dick and DALL-E provides exciting new ways to be a dick.

DALL-E is savaging the art world in exactly the way programmers and high school English teachers are afraid ChatGPT might savage theirs. Generative models trained on the work of human artists can now produce novel statistical averages loosely related to human language prompts, and that’s good enough for a certain sector of the population.

Computers have been generating art in some fashion for ages, but now it looks like human art. I never worried about this in terms of art because art is about expression and communication. It is inextricably bound up in the history and philosophy of itself and what it means to be human. In this context, I have no interest in what an algorithm has to say.

Unfortunately, there’s that certain sector of the population for whom art is a commodity for shallow consumption, accompanied by an industry happy to sell at scale. In this context, art is not expression: art is packaging. Nobody wants to pay premium fees for packaging, and now nobody will. Any guardrails we could put up against this are probably going to be smothered in a concocted culture war inevitably waged by certain passengers of capitalism who mistake the engine fire for the plane. There is already a subclass of unthink pieces saying it’s about time artists and writers got taken down a peg, since a lifetime of toiling in obscurity to master an arcane skill for few if any rewards strikes some people as unbearably smug.

The circle closed on Ben Moran, as a Reddit moderator deemed his work too much like AI to be thought otherwise, and banned him. Here was the opportunity to try untangling a number of issues: the art community’s fear that it was about to be buried under a mountain of machine learning; the fact that those machines were trained on the very work produced by the community; the fact that AI does seem to have its own recognizable signature; what it means when a human and an AI have a similar style.

This conversation did not happen. Instead, a moderator, concerned about the relevance of their space in an AI world, explicitly condemned a contributor to irrelevance because of AI.

Things like this have been happening for a while. This exchange rose above the waves because of the moderator’s irony-blind contempt. People always try to protect the supposed integrity of their social spaces by cutting themselves off from parts of the world; excommunications are inevitable when new threats arise. In order to stay gated to the point of relevance, similarities to enemies cannot be tolerated within the compound.

There’s going to be a lot more of this in the near future as people afraid of losing their livelihoods to AI generation throw others in front of the train, but make no mistake: it is a train and bodies won’t slow it down. It might be possible to rationally sort out how to respond to a sudden influx of autogenerated grade school drivel and copycat artists, but last time we had an opportunity to integrate new technology in an ethical, responsible manner, we sued each other for twenty years and decided Spotify was the acceptable way to screw musicians.

Today’s AI is a thousand years away from churning out the Commander Data we want or the Lore we deserve. It’s little more than a deeply flawed but interesting new toy that could be artfully woven into modern life and technology. But it never will be, because the problem, as always, is that humans are trash.

[1] A famous poker player once said, “You might beat me, but you’ll never outplay me.”

[2] Though if you want to be safe, remember tinfoil only keeps your brains fresher.

[3] Though wildly different legal consequences, depending on the server.
