Content Is Dead, Long Live Content | Opinion

If the internet age has a founding statement, it might be the popular and oft-cited "content is king" essay penned by Microsoft co-founder Bill Gates in 1996. Gates' thesis was that the internet would allow for a proliferation of content like we had never seen before:

Over time, the breadth of information on the Internet will be enormous, which will make it compelling. Although the gold rush atmosphere today is primarily confined to the United States, I expect it to sweep the world as communications costs come down and a critical mass of localized content becomes available in different countries.

Gates got a lot right in that essay, but he was perhaps too optimistic in identifying what was to be an age of online abundance as a new golden age of prosperity. Content in all its forms soon became so freely available to produce and consume that it was anything but king.

Like many others, Gates failed to take into account Marshall McLuhan's old principle, and warning, that the medium overrides the message. The very nature of the internet, this now-commonplace alien life form where the creator became nearly indistinguishable from the consumer, did indeed free content, but it also made content freely available, which might be good for consumers (more on that later) but definitely not for producers. Gates may also have underestimated the extent to which this liberation would stoke democratization to the point of mass delusion and conspiracy theory, of which he would, strangely, become a target in the 2020s.

Just when you think the value of content has fallen to zero, it turns out that value can decline a little more. More than a quarter century after his essay, the company Gates co-founded, Microsoft, is staking much of its future on the promise of generative AI to transform the production of content as we know it, with everyone from ad copywriters to online publishers gnashing their teeth at the possibility that a "robot" could put them out of a job (and as the recent layoffs at CNET have shown, they can).

But what if this seemingly novel form of AI, which threatens to cheapen content like never before, actually has the opposite effect, reshuffling and reprioritizing what we think of as valuable content in a way that helps us produce more of it, at higher quality?

While Gates thought content would adapt to the internet and new models of monetization to become more valuable, it was actually the new gatekeepers of content who, about halfway through the consumer internet's second decade, became rich. This was due not so much to the monetization of the content itself as to the monetization of those who consumed it.

[Photo illustration: the ChatGPT logo at an office in Washington, D.C., on March 15, 2023. Stefani Reynolds/AFP/Getty Images]

The solution developed by internet and social media companies allowed for the taming of what had become very noisy media, while at the same time serving advertiser interests by prioritizing the placement of sponsored content. At some point—perhaps when Facebook introduced EdgeRank and an algorithmic newsfeed in 2011, or Twitter introduced its own in 2016, or Google did the same behind the scenes with YouTube's suggested videos—whether we knew it or not, we had handed over our media to the robots, the algorithms that prioritized engagement over everything else.

At first, users seemed to win in this deal too, presented with a solution to the problem of a medium that had created too much information. There was only so much time in one day and something like an algorithm-based news feed allowed users to catch up on the people and brands with which they most interacted, and the kind of content in which they had already expressed an interest. Eventually, though, platforms leaned too heavily, and lazily, on the power of the algorithm, and users altered their own patterns of sharing and consuming accordingly, nudged by clickbait, bots, and advertiser priorities. Sharing didn't come from, for lack of a better term, an organic place, but one of clout chasing and fakery, reinforced by the trap of more likes, more engagement, more reach.

Over time, this created its own set of problems, including filter bubbles and a "dumb" algorithm that promoted content that was either the most superficial or the most inflammatory, appealing to base and knee-jerk emotion. Perhaps without knowing it, we had already turned our lives over to an AI, which assigned us opinions based on our preconceived biases and reinforced those biases. It all led to what may very well be the most passive-aggressive era in the history of human communication.

The rapid rollout of consumer artificial intelligence tools like ChatGPT, however, offers a way for us to retrain the AI to prioritize data in a way that resembles actual values beyond surface "engagement," making way for a new and better era for content.

Over the last decade, AI has been sneakily promoting cheap and easily reproducible content, and its latest iterations are showing us just how cheap and easily reproducible that content can truly be. But this next stage of interaction with AI-generated content will train us to identify it in the wild and avoid it. In just the few months since OpenAI's launch of ChatGPT, students are already being trained to spot basic errors, the AI's notorious hallucinations, and even subtler bias beyond that. While AI may become more powerful over the coming decades, the flip side is that content consumers and producers will also become more AI literate. They can take back some of the advantage they lost in the fractious, algorithm-driven 2010s.

IBM's Watson cleaned the clock of Jeopardy's best human competitors over a decade ago, so it should be no surprise that artificial intelligence today is able to synthesize that knowledge base and spit out all manner of written content, from college-level essays to new beer recipes. What the new conversational skin on AI will really change is how we synthesize information and persuasion together, with ramifications for everything from marketing to political campaigns to art.

Part of that persuasion, though, is rooted in the kind of shared human experience an AI will never be able to truly know or fake: joy, anxiety, sadness, happiness, passion, rage, forgiveness, alienation, belonging, love. Colin Meloy of the band The Decemberists isn't scared. You shouldn't be either.

As for the content creation and marketing communities, forget the natural inclination toward anxiety in an age that seems to love promoting it, and see the opportunity here. The new medium of television presented its own opportunity, as did the new medium of the internet. Now, AI-generated content presents an opportunity to resist the tyranny of algorithms and SEO, to think differently, to think weirder, to think with a little more empathy—to think with a little more humanity. Maybe then, content can be king.

Ian Chaffee is a technology and startup media relations consultant based in Los Angeles who has worked with AI brands and researchers.

The views expressed in this article are the writer's own.

