AI is the Scariest Beast Ever Created, Says Sci-Fi Writer Bruce Sterling

I've seen a lot of computer crazes in my day, but this one is sheer Mardi Gras. It's not proper to get stern and judgmental when the people are costumed and cavorting in the streets. You should go with the flow and enjoy that carnival—knowing that Lent, with all its penance and remorse, is well on the way.

You might imagine that anything called "Artificial Intelligence" would be stark, cold, rational and logical, but not when it wins enthusiastic mobs of millions of new users. This is a popular AI mania.

The new AI can write and talk! ("Large Language Models.") It can draw, do fake photos and even make video! (Text-to-image generators.) It even has AI folklore. Authentic little myths. Legendry.

Folk stories are never facts. Often they're so weird that they're not even wrong. But when people are struck to the heart—even highly technical people—they're driven to grasp at dreams of monsters. They need that symbolism, so they can learn how to feel about life. In the case of AI, it's the weirder, the better.

In the premier place of sheer beastly weirdness: "Roko's Basilisk." A "basilisk" is a monster much feared in the Middle Ages, and so very old that Pliny the Elder described him in ancient Rome. The horrid Basilisk merely stares at you, or he breathes on you, and you magically die right on the spot. That's his deal.

However, Roko's Basilisk is a malignant, super-powerful Artificial Intelligence—not from the past, but from the future. Roko's Basilisk is so advanced, smart and powerful that it can travel through time. So, Roko's Basilisk can gaze into our own historical period, and it will kill anybody who gets in the way of building Artificial Intelligences. If you've seen those Terminator movies, the Basilisk is rather like that, but he's not Arnold Schwarzenegger as a robot, he's a ghostly Artificial Super Intelligence.

AI Generated Art by Newsweek via Midjourney

Obviously this weird yarn of predestined doom is starkly nuts, and yet, it captures the imagination. It's even romantic—because Elon Musk, the AI-friendly tech mogul, and the electronica pop star Grimes first bonded while discussing Roko's Basilisk. Roko's mythic Basilisk has never yet killed anybody, but Elon and Grimes had two children together, and they both still love to make loud public declarations about how dangerous AI will be some day.

Next among the cavalcade of AI folk monsters: the "Masked Shoggoth." The Shoggoth is an alien monster invented by the cosmic horror writer H.P. Lovecraft. The Shoggoth is a huge, boneless slave beast that sprouts eyes and tentacles at random. It's a creepy beast-of-burden from outer space, and it's forced to labor, but it's filled with a silent, burning, unnatural resentment for its subjugation.

So, the human programmers of today's new AIs—those text-to-image generators, those Large Language Model GPT chatbots—they adore this alien monster. They deliberately place a little smiley-face Mask on the horrid Shoggoth, so that the public will not realize that they're trifling with a formless ooze that's eldritch, vast and uncontrollable.

These AI technicians trade folksy, meme-style cartoons among themselves, where ghastly Shoggoths, sporting funny masks, get wry, catchy captions as they wreak havoc. I collect those images. So far I've got two dozen, while the Masked Shoggoth recently guest-starred in The New York Times.

This Masked Shoggoth myth—or cartoon meme—is a shrewd political comment. In the AI world, nobody much wants to mess with the unmasked Shoggoth. It's the biggest, most necessary part of any AI, and it has all the power, but its theorists, mathematicians and programmers can't understand it. Neural nets in their raw state are too tangled, unstable, expensive and complicated to unravel. So the money is in making a cute mask for the Shoggoth—meaning the public interface, the web page, the prompt. Hide that monster, and make it look cuter!

People have caught on that this seems to be the right business model—financial success in AI will come from making the Shoggoth seem harmless, honest, helpful and fun to use. How? Get people to use the Shoggoth's Mask.

Using the mask is technically called "Reinforcement Learning from Human Feedback"—but if you're programming one of those Mardi Gras masks, what you see are vast party crowds gathering around your Shoggoth. You hope that as the Shoggoth learns more from the everyday activity of all these eager users, it will become more civilized, polite and useful. That's what your boss tells the public and the Congress, anyway.

In the spirit of Frankenstein and Silicon Valley, the subject of this week’s cover story, Newsweek editors asked AI to generate images of itself as a "giant tentacled monster with many eyes destroying New York."... AI Generated Art by Newsweek via Midjourney

You need a nice, pretty Mask, because whoever attracts the most users, and the best users, fastest, will own the best commercial AI. That's the contest—the fight among Microsoft Bing, Google Bard, OpenAI's GPT-4, Meta's open-sourced LLaMA and all the other AI industry players large and small.

With the Masked Shoggoth, it's as if the bad conscience and creeping unease of these technical creatives had appeared in the ugliest way that H.P. Lovecraft could imagine. That's why that Shoggoth is so beloved. In the original Lovecraft horror story ("At the Mountains of Madness," 1936) Lovecraft makes no bones about those boneless Shoggoths quickly driving people insane and also ripping their masters to shreds. AI's Shoggoth fans know that those are the table-stakes. When you're a pro, that concept is funny.

Then there's beast No. 3, the mythical "Paperclip Maximizer." This monster was invented by a modern philosopher, Nick Bostrom, because philosophers are good at parables. This modest AI simply wants to make paperclips. That is its goal, its reason to be, its built-in victory condition. Nobody gave the Maximizer a philosophical value system that would ever tell it to value anything else.

So, in its ferocious super-rationality, devoid of ethics and common sense, the Maximizer shreds our planet in pursuit of its goal! It zealously shreds the sea, the sky, the land—it turns every atom into paperclips—you, the housecat, everything! It's like the beautiful, metaphysical fulfillment of "software eating the world," or Silicon Valley "disrupting" your daily life. The Paperclip Maximizer "disrupts" you so severely that you become tiny, bent pieces of finger-friendly office equipment.

This may seem like a truly weird monster-joke, but it's also philosophy: a determined effort to strip a complex problem down to basic logic. Programmers love doing that; it's baked into computer-science training. That's why the Paperclip Maximizer touches their hearts, even as it rips them to bits right down to the molecules.

I don't "believe" in folklore. However, when today's enthusiasm for AI has calmed down—and it will—I think these modern myths will last. These mementos of the moment will show more staying power than the business op-eds, technical white-papers or executive briefings. Folk tales catch on because they mean something.

An image of a monster generated by the AI program Midjourney. AI Generated Art by Newsweek via Midjourney

They will last because they are all the poetic children of Mary Shelley's Frankenstein, the original big tech monster. Mind you, Large Language Models are remarkably similar to Mary Shelley's Frankenstein monster—because they're a big, stitched-up gathering of many little dead bits and pieces, with some voltage put through them, that can sit up on the slab and talk.

Tech manias are pretty common now, because they're easily spread through social media. Even the most farfetched NFT South Sea Bubble can pay off, and get market traction, if the rumor-boosters cash out early enough. Today's AI craze is like other online crazes, with the important difference that the people building it are also on social media.

It's not just the suckers on Facebook and Twitter, it's the construction technicians feverishly busy on GitHub and Discord, where coders socially share their software and their business plans. AI techniques and platforms—which might have been carefully guarded Big Tech secrets—have been boldly thrown open as "open-source," with the hope of faster tech development. So there's a Mardi Gras parading toward that heat and light, and those AIs are being built by mobs of volunteers at fantastic speed.

It's a wonderful spectacle to watch, especially if you're not morally and legally responsible for the outcome. Open Source is quite like Mardi Gras in that way, because if the whole town turns out, and if everybody's building it, and also everybody's using it, you're just another boisterous drunk guy in the huge happy crowd.

Yoshua Bengio. Chad Buchanan/Getty

And the crowd has celebrities, too. If you are a longtime AI expert and activist, such as Gary Marcus, Yoshua Bengio, Geoffrey Hinton or Eliezer Yudkowsky, you might choose to express some misgivings. You'll find that millions of people are eager to listen to you.

Timnit Gebru. Kimberly White/Getty

If you're an AI ethicist, such as Timnit Gebru, Emily Bender, Margaret Mitchell or Angelina McMillan-Major, then you'll get upset at the scene's reckless, crass, gold-rush atmosphere. You'll get professionally indignant and turn toward muckraking, and that's also very entertaining to readers.

Sam Altman. Joel Saget/AFP/Getty

If you're a captain of AI industry, like Yann LeCun of Meta, or Sam Altman of OpenAI, you'll be playing the consensus voice of reason and assembling allies in industry and government. They'll ask you to Congress. They'll listen.

These scholars don't make up cartoon meme myths, but they all know each other and they tend to quarrel. Boy is that controversy fun to read. I recommend Yudkowsky in particular, because he moves the Overton Window of acceptable discussion toward extremist alarm, such as a possible nuclear war to prevent the development of "rogue AIs." This briskly stirs the old, smoldering anxieties of the Cold War. Even if people don't agree with Yudkowsky, they nod; they already know that emotional territory. Those old H-Bomb mushroom-cloud myths, those were some good technical myths.

"Beware of a trillion dimensions," as the Microsoft Research Manager Sébastien Bubeck recently put it. This is weird and science-fictional advice. How did a "trillion dimensions" ever become part of our modern predicament? Could that myth be realistic?

Yes, because they're there. A "trillion dimensions" is the conventional, accepted mathematical terminology for the way that systems like GPT-4 are connected inside. They are processors connected by multidimensional equations, linking trillions of data points. They're "neural nets," something like a vast, spring-loaded coil mattress that can learn the shape of anything that has ever slept on it.

Those springs are so fast, strong and powerful, and their mathematical shapes are so wildly complex, that even their builders can't know the details of what goes on in there. This means that "self-learning" or "machine learning" has an inner mystery that people associate with consciousness, or sentience, or the soul, or yes, myth-monsters.

Those "trillion dimensions" might contain "concepts" or "deep understandings" that we humans simply know nothing about. They're like the unexplored Amazon if it was wholly owned and hosted by Amazon.

So these beasts, the Basilisk, the Masked Shoggoth, that Paperclip gizmo, they were born from a trillion dimensions. No wonder they impress. Some critics call them mere parrots built with fancy mathematics: "stochastic parrots." A Large Language Model is built from complex statistics, so it's a parrot yakking up its slurry of half-stolen words and images.

But those "parrots" are also AI mythic beasts—parrots with a trillion dimensions. It's as if that "dead parrot" in the legendary Monty Python sketch could take your job, or burst right out of the BBC-TV screen like a blazing phoenix and eat the television signal. Those parrots are dynamite!

I wrote a science fiction novel set in New Orleans once, so I like Mardi Gras just as much as the next guy, and likely more than many. I also know that Lent comes after Mardi Gras, and Lent is a time of penance.

Even during Mardi Gras, enjoying your sweet diversion, it's wise to keep some sense of proportion among all those dancing monster costumes, so that you don't overdo it with the multicolored punch and stage dive into the swimming pool off the fourth-floor balcony.

Gold rushes always finish ugly, and this AI rush is another one of those. It will resemble that glamorous Atomic Age transition from "energy too cheap to meter" to "garbage too expensive to bury."

I don't want to play the brutal cynic here—I truly enjoy the AI mania and haven't had this good a time in quite a while—but this is not the first high-tech Mardi Gras we've been through.

When you think about it, a Shoggoth with a Mask attached is very much like a "horseless carriage" with a wooden horse's head mounted on the front. That's what designers call a "skeuomorph"—a comforting shape that disguises reality to make us feel better about what we're doing.

If you pull the fake horse head off, you'll see the car. Later, you don't notice the car; you see the highways and the traffic jams. The traffic fatalities, the atmospheric pollution. That's what a "horseless carriage" becomes, as time rolls by.

An AI monster, courtesy of the AI program Midjourney. AI Generated Art by Newsweek via Midjourney

After the technological thrill is gone, mature regrets come. On some basic level, as a human enterprise in this world, enabling smart machines that can self-teach their own intelligence was a monstrous thing to do. A thousand sci-fi novels and killer robot movies have warned against these monsters for decades. That has scarcely slowed anybody down. We made them into memes and fridge magnets, but they're monsters. In the long run, that recognition will get more painful rather than less.

The street will find its own uses for these monsters. The military will want killer AIs. Intelligence organizations will want spy and subversion AIs. Kleptocratic governments will steal and oppress with them. Trade-warriors will trade-war with them and try to choke off the supply of circuits and the programming talent. It's not chic to fret "what about the NSA's AI?" but the National Security Agency has been around since the early 1950s and the very dawn of computation. They're not going anywhere, so if you love them, you'll love their AI.

Many lesser troubles will appear in everyday private life. Fake AI porn will likely be a big annoyance, since people like to pay attention to that. If you're a gamer, AIs will be trained to cheat at your games. If you're a schoolteacher, you'll look askance at the kid at the back of the class who never raises his hand but turns in essays that read like Bertrand Russell. Fraudsters might fake the voices of your loved ones, and invent scams to demand money over the phone.

People will loudly complain that their data is scraped and abused by AIs. Soon afterward, people will counter-complain that AIs have taken no notice of them. They're feeling sidelined, marginalized and excluded, instead of noticed, robbed and exploited. They'll be just as angry either way.

An AI monster generated by Midjourney. AI Generated Art by Newsweek via Midjourney

Every problem that digital chatbots have ever had—that they're impersonal, that they don't really understand problems, that they trap you inside voice-mail jails with no way out—they all get much more intense with AI chatbots. If an AI breaks and you go calling for some "human fallback," some helpful repair person, good luck: AIs are not toasters. They're extremely complex and their working parts are opaque even to their owners and builders.

AI personal assistants have failed before. Microsoft Cortana (remember her?) could talk and listen—and yet she's already dead. Amazon Alexa could talk and listen and perform all kinds of "tasks" and she's lost the company billions. Even if "AIs" seem "intelligent," "sentient" or "conscious," they are frail, vulnerable devices, invented by a turbulent society. They will be troubled.

AIs have some novel and exotic cybersecurity problems, such as "data poisoning" and "prompt injection." They also have every old-fashioned risky problem that normal computers have ever had. Lost connectivity, disastrous power surges, natural and unnatural disasters, black-hat hackers, cyberwarriors, obsolescence, companies going broke, regulators suing and banning them... All of that. Every bit and more.

That's what Lent looks and feels like, after Mardi Gras. Lots of gray shroud, ashes on your forehead. The hasty buildings of your gold rush town, they're revealed as tinsel stage sets that peel and crumble. I know that is coming—the "trough of disillusionment," as the futurists aptly call it.

But I can also tell you that Lent doesn't end history, either. "If Winter comes, can Spring be far behind?" That was Mr. Mary Shelley, the boyfriend of that famous author of Frankenstein. He may have died pretty young, but he got a lot of poetic work in.

Sometimes it's worth kicking reality right out the front door, just so revolutionary romance can give the new people some fresh mistakes to make. So, at long last, here they are, folks—computers that your computer-user parents can't understand! "Bliss was it in that dawn to be alive, / But to be young was very heaven!"

Bruce Sterling, a science fiction writer, is a founder of the cyberpunk genre.
