Chapter 3: Toward Dystopia

It’s nearly impossible to reverse the process set in motion in the Digital Age. For centuries the Scarcity Paradigm managed to chug along without the need to seriously confront its own flaws. Since the paradigm is based on a medium of exchange, anything that was not exchanged between consumers and producers in the market could not be valued in the economy.

There was always an economic incentive to create negative externalities, since those reduced the cost of production. And there was never an incentive to create public goods, since those had no exchange value in the market. And yet, because both externalities and public goods had a minor role in the economy, these flaws in the Scarcity Paradigm could be largely overlooked.

But that was the case when the effect of public goods and externalities on the economy was negligible. What happens when their effect becomes significant? That is exactly what happened with the emergence of abundant goods in the Digital Age. Abundant goods, in the form of digital content, require a fixed investment of labor and resources to produce, but can then be accessed by anyone, have minimal storage costs, and don't diminish with use. And while their effect on society can be either positive or negative, that effect is exponentially larger than the effect of scarce goods.

Since abundant goods are inherently accessible by anyone, they lack an exchange value in the market. This means that no matter how much impact such goods have on society, people have no economic incentive to produce them. The impact cannot be monetized.

But how can this be? Don't people monetize digital content all the time? If the impact of abundant goods has no exchange value, how are people monetizing them? To answer that question we have to look at what exactly they're monetizing. And we need to do so with the understanding that only what can be exchanged in the market has exchange value.

Impact itself is not being exchanged in the main business models for digital content today. What is being exchanged, then? Access and attention. These are scarce resources and can therefore have an exchange value in the market. In the subscription business model, what’s being exchanged is access to the content, whereas in the advertising business model it is user attention that is traded.

Subscriber access and user attention are scarce goods that can be exchanged in the market. However, they’re a poor proxy for the impact of abundant goods.

Sure, you can have abundant goods that benefit society and are profitable, but this is hardly the norm. You're just as likely to have abundant goods that have immense value to society but are viewed by virtually no one, and thus generate no return. You can also have abundant goods that have no discernible benefit to society but are fantastically popular and profitable to produce. And then you can have abundant goods that are harmful to society but are still very popular and lucrative.

What's more, producing impactful abundant goods likely requires a significant time investment. If these generate no return, their producers will end up worse off than when they started. Producing outrageous content, on the other hand, may require nothing more than a general knowledge of psychology. It's not even necessary to do basic research on the facts, as the claims can be wholly made up and the content could still go viral.

If what is popular and what is beneficial were mostly aligned, with some exceptions, our job would be a lot easier. The problem is that the relationship between the two is tenuous at best; what we find interesting and what we find important have little to do with each other. A cat video may be interesting to watch, but it's not that valuable. A major scientific breakthrough may have tremendous benefit to society, but only a tiny group may actually understand it or truly find it interesting.

Because subscriber numbers and user attention are such poor scarce proxies for impact, using them to monetize content necessarily leads to perverse incentives. It motivates people to create popular or attention-grabbing content instead of content that benefits society. Such content is often sensational, hateful, divisive, or outrageous, and generally harms the common good.

On top of that, creating artificial scarcity by restricting user access to subscribers also leads to economically inefficient results. The reduction in efficiency is self-explanatory; any attempt to restrict access to a beneficial good that can potentially be accessed by billions of people necessarily reduces the impact of the good on society and is thus a less efficient use of resources.

Think about it in terms of resource allocation; when the goal is impact maximization, you'd want more people dedicating more time to those things that are expected to have the most impact on society. You'd also want no time dedicated to anything that is harmful to society.

That is patently not what we have in the digital economy. Instead, too many people are spending an inordinate amount of time attacking others and posting misleading, outrageous, and sensational content. And they do so because they can make a lot of money in the process.

That's not to say that people shouldn't be free to post whatever they want. They absolutely should be. But why economically incentivize harm? And why choose a business model where scientists and grifters compete for the same scarce advertising (or subscription) dollars? Especially when the grifters have a structural advantage.

And so, despite their immense potential, abundant goods are wreaking havoc on our system; they weaken our ability to confront negative externalities by undermining institutions and promoting social polarization. They also create perverse incentives in monetization because they have no exchange value in the market. There is still no economic incentive to create a positive impact, while the effects of negative externalities are exponentially greater – and, at the same time, much harder to confront.

What's worse, the negative externalities in our economy are already turning into full-fledged crises. And because our systems are so deeply interconnected, these crises don't happen in isolation. They create resonance. They reinforce one another and intensify. As technology advances, negative externalities are likely to have an even greater effect on the economy. Continuing on the same trajectory, then, will likely drive us toward dystopia or even societal collapse.

So what does this mean for our future? Are we really on a path toward dystopia (or worse)? Or maybe this is just a temporary condition in the market that will correct itself. After all, trends can change.

We've had plenty of doomsayers over the years pointing to an imminent economic collapse, from Thomas Malthus to Paul Ehrlich to Peter Schiff. Yet their predictions never materialized. As the old quip goes, such economists have “successfully” predicted nine of the last five recessions. So who is to say this time is any different? Maybe at some point, perhaps with a few nudges from government, the efficiency of markets will resolve the dynamics we have today and put us back on the right path. Could it be that our economy is more resilient than we think?

What we're facing today is very different from crises of the past. The issue is not the result of a particular condition in the market or a temporary resource crunch. Nor is it a phase of the business cycle. Today the issue goes to the very heart of our economic paradigm. It is what happens when a fundamental flaw in our economy is struck with a metaphorical technological wrecking ball.

Hoping the problem resolves by itself within the market structure is futile. Fundamental economic flaws that existed for millennia do not spontaneously self-resolve. And so we need to consider our options.

One possibility is to try to roll back the technology and go back to a time when abundant goods couldn't have such an effect on our economy.

Another option is to look for a new economic paradigm that can capture the value of abundant goods. If we can create effective feedback loops for public goods and externalities we will solve our metacrisis and put humanity on a path to global mass abundance.

And then there is a third option. Maybe we should try to mitigate the damage and hope the fallout will not be too severe. This is definitely the solution with the least sex appeal, but perhaps it could work?

The first option is probably the least practical (or desirable). It is now nearly impossible to put the abundant goods genie back in the bottle. Should we reverse our technological progress? Shut down data centers? Sanction social media companies?

Even if we were to succeed, such a draconian move would set us back by decades and essentially paralyze our economy. It would also likely decimate any trust in government, and create another form of dystopia.

People are simply not going to peacefully accept such a severe reduction in their standard of living. They also would not easily surrender their ability to express themselves and communicate freely. And why should they? Maybe in this case the cure is worse than the disease.

Coming up with an "Abundance Paradigm" is certainly the most desirable outcome, but how do we know such a paradigm is even possible? Maybe this too is a dead end and pursuing it is a futile exercise. And if it is possible, how likely are we to succeed? We don’t even know where to start.

Perhaps before investing precious time into chasing the mirage of a supposed new paradigm we should consider whether it is even necessary. What if the concerns about our economy are overblown? Sure, we’re seeing growth in the effect of externalities, but can these be mitigated? Maybe the intervention that requires the least amount of resources is best. Maybe containing the effects, combined with smart government policies, can get us the optimal outcome.

With a better grasp of the direction of our economy, deciding on the right course of action will be a lot easier. At least that is the idea. The question then is, how serious is our situation? How much more intense will our crises get, and where is our economy headed?

The problem is that in the digital economy crises reinforce one another. As technology advances they intensify. Consider for example what is happening in the institutions that help us make sense of the world around us: news media and science.

Journalists and scientists have always had to deal with a contradiction. While they purport to speak for the public interest, there has always been a misalignment between the public interest and how they get paid. That's because there has never been a way for them to be paid by “the public” based on the value they provide.

Any way they get paid in the economy involves a potential conflict of interest. If they make money from donations, who is to say that the donors are not an interest group promoting an agenda that doesn't truly represent the public interest? If they get paid by the government, do they truly represent the public interest or are they advancing a political agenda? If they are self-funded or rely on wealthy patrons, whose interests are they actually promoting? If they get paid by commercially selling their content (through advertising or subscriptions), they have the incentive to promote what is popular instead of what benefits the public. And so, in every case there is an inherent misalignment between the public interest and the economic interests of sense-makers.

This misalignment was always there. What’s new however is that, because it is now easier than ever to create and publish content online, there is also a lot more competition for people’s attention.

When there was little competition, it was a lot easier for journalists and scientists to focus on the public interest. Those who deviated from this standard could be criticized and ostracized, so there was at least a social and reputational incentive to be more diligent.

But what happens when instead of competing with tens of thousands of journalists, or hundreds of thousands of scientists, sense-makers now have to compete with billions of other people for a slice of the same advertising or subscription money? And what happens when most of these people can be anonymous and a lot less scrupulous about what they’re posting?

You may still have your reputation as a respected journalist or scientist, which gives you a bigger audience than the average content creator. But your competitors have their own advantage in this arena also: volume. For every article that you have to research and fact-check, an unscrupulous blogger could post 10 poorly researched articles. Or maybe a hundred articles that are entirely fabricated. Even if each of their posts gets a small fraction of what you get, those numbers add up. And they significantly dilute your expected earnings.
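To see how the arithmetic plays out, here is a minimal back-of-the-envelope sketch. Every number in it is an invented assumption, chosen only to illustrate the dilution effect:

```python
# Illustrative only: how sheer volume dilutes a careful publisher's share
# of a fixed advertising pool. All numbers are made-up assumptions.

CAREFUL_POSTS = 1        # one researched, fact-checked article per week
CARELESS_POSTS = 10      # ten fabricated or barely researched articles
ATTENTION_CAREFUL = 1.0  # attention drawn by the careful article
ATTENTION_CARELESS = 0.3 # each careless article draws far less attention
AD_POOL = 1000           # dollars, split in proportion to attention

careful_total = CAREFUL_POSTS * ATTENTION_CAREFUL
careless_total = CARELESS_POSTS * ATTENTION_CARELESS
pool_total = careful_total + careless_total

print(f"Careful publisher:  ${AD_POOL * careful_total / pool_total:.2f}")
print(f"Careless publisher: ${AD_POOL * careless_total / pool_total:.2f}")
# Each fabricated article draws less than a third of the attention of the
# careful one, yet volume gives the careless publisher three times the pay.
```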

The attention economy does not reward those who provide the most reliable information. It rewards those who are best at grabbing people’s attention with conflict and drama. But the more you try to grab people’s attention in this way, the more you lower your standards and deviate from the public interest. So now each journalist and scientist faces a dilemma: continue the thankless job of researching and reporting on what’s in the public interest, or focus on what is popular (and profitable)?

The vast majority of scientists and journalists may still choose to focus on the public interest. Yet, the small minority that prefers popularity and profits will get disproportionately more attention than the rest. So what does the public see? Because the less scrupulous journalists and scientists get more attention, they may appear to the public as the majority. Which means that even a small number of unscrupulous individuals can tarnish the reputation of a whole institution. And that is how the crisis of trust in institutions begins.

Now think about the incentives of all the other content creators competing with sense-makers for the same scarce attention and advertising money. They have an incentive to discredit professional journalists and scientists, and to discredit them as a group. Doing so equalizes their own reputation with that of the professionals, and therefore brings more people to follow their work. These content creators can always point to the inherent contradictions of the business models used by those who claim to work in the public interest. And now the unscrupulous journalists and scientists are making their job easier than ever.

Every day these content creators have ample material to work with; they get a steady stream of posts by unscrupulous journalists and scientists illustrating how they are falling short of their stated objective to speak for the public interest. Each such post helps discredit our sense-making institutions a little more. And each time content creators get to pounce on such posts they get more engagement from the controversy. That in itself is a profitable business model.

While unscrupulous sense-makers make it easier than ever to discredit their institutions, content creators have just as much incentive to make up controversies even where none exist. After all, the attention economy rewards clicks, not facts. But when everyone’s credibility is equalized, those with the least integrity benefit the most.

As public trust in sense-making institutions continues to erode – whether deservedly so or not – so does the economic benefit of working in the public interest. So what we get is a vicious cycle; more and more journalists and scientists choose popularity and profitability over the public interest, and then the content they produce is used to further discredit sense-making institutions.

So maybe our traditional sense-making institutions cannot be saved. Perhaps they need to adapt to the Digital Age, or find some happy balance between what is popular and what benefits the public interest. That may be the case. However, the goal was never to preserve traditional sense-making institutions per se. The goal is to preserve the methods: integrity, rigor in fact-finding, and working for the public interest. Whatever form these methods may take is not important. What’s important is that they are preserved.

The trouble, though, is that even if we give up on trying to preserve traditional sense-making institutions, we’re still no closer to solving the problem at hand: there is still no economic incentive to do work in the public interest, and no incentive to preserve integrity and rigor in fact-finding. Unfortunately, these are superfluous in the digital economy. There is, however, every incentive to produce content based on its popularity.

The same dynamics that applied to journalists and scientists would still apply to whatever Digital Age version of traditional institutions we come up with; the less scrupulous attention-maximizers have a structural advantage in the attention economy. And their advantage is only growing as technology advances.

These attention-maximizers always have the incentive to discredit those whose work is based on credibility. They get engagement from every post supposedly exposing the hypocrisy of others. This is true whether these "others" are professional journalists or respected content creators. And whether such posts are based on fact or misinformation.

In the digital economy it is simpler (and more profitable) to claim to have integrity than to do the hard work of maintaining it. So everyone else faces a constant struggle between continually lowering their standards to stay profitable and maintaining integrity at an ever greater cost.

There is a constant push by the least scrupulous to "equalize" their reputation with everyone above them. This is done by attacking and discrediting others, not by upholding higher standards. It is also done by attacking our ability to make sense of the world. If we cannot differentiate between facts and lies, those who sell lies can make more money.

The conflict in the digital economy is thus not merely between attention-maximizers and sense-making institutions. It is between attention-maximizing and sense-making itself.

While attention-maximizers certainly follow a profitable strategy, they are hardly the greatest beneficiaries of the system. So who benefits the most? You have to think about who gains from an environment where reporting facts has no value, where sense-making institutions are discredited, and where it is getting harder by the day to know what is true.

It is certainly not the public. They are the ones harmed most by a system that degrades their sense-making ability. Those who benefit the most are autocratic leaders and powerful interests. They don't want the public to scrutinize their actions, or to be empowered to act. If the public cannot determine the facts, they also cannot do anything to penalize wrongdoing by the powerful.

Of course, the fact that powerful interests benefit the most from the digital economy’s dynamics doesn’t mean that there was some conspiracy by the powerful to foist such a system on the public. That process could have easily happened spontaneously. But once the system was in place those who want less scrutiny certainly benefit from it. And they would do what they can to strengthen and perpetuate such a system.

All this points to the fact that we’re unlikely to see the assault on sense-making reverse any time soon. Unfortunately the trend we're describing is only beginning to gain momentum.

* * * * *

If greater competition in the attention economy attacked our sense-making ability, the rise of social media brought about a full-on crisis of trust in media and institutions. And if that wasn't enough, it spread the crisis much further, with an all-out onslaught on our sense of community and on the human psyche itself.

While content creators have an incentive to maximize attention, it is social media platforms and search engines that profit the most in the attention economy. What is the incentive structure of the platforms? Just like for journalists, scientists, or content creators, the platforms themselves also cannot monetize the impact of digital content. They get no benefit from serving users with the most valuable content, or the content that would make the most positive contribution to people’s lives. Instead, the platforms mostly use attention as a proxy for value. Subscriptions are less commonly used, but as already discussed, the incentives there are similar.

Platforms and search engines all compete for the same advertising money, and they all want a larger share of that pie. To get that larger share, each platform needs to maximize user attention on the platform. The users are not the customers in this equation; they’re the product. The customers are advertisers. And advertisers want their ads shown to the greatest number of people, preferably people who are likely to buy their products.

So how do the platforms achieve their target of attention maximization? There are two broad strategies they can use. We’ll dub these the High Road and the Low Road.

In the High Road approach, tech companies try to increase usage of their platforms by providing a superior user experience. They identify misleading or harmful content and flag it to slow its spread. They try to serve users with high quality content and be mindful of their wellbeing.

Such an approach is expensive. There is no simple way to determine whether content is harmful or beneficial, so the platforms need to invest in developing such mechanisms. They then need to apply those mechanisms at scale, to billions of posts.

Content moderation poses a challenge for tech companies. That is, based on what standard do they determine what is “harmful” and what is “beneficial” to users? Do they moderate content based on some ideology or political leaning? Or perhaps it is based on what is financially advantageous to the company?

Whichever strategy they choose, tech companies cannot claim to speak for the public interest due to their inherent conflict of interest in monetizing the platform. Thus, any standard they set would be criticized. Moreover, since the platforms are proprietary, there is little transparency in the moderation process itself. There is no way for the public to tell if standards are applied equally or if the system is rigged to benefit any group or point of view.

These dynamics are undoubtedly exploited to the fullest by those who are harmed the most by content moderation: attention-maximizers. They can claim that the content moderators are biased, or that the platforms are suppressing their views because they are truth-tellers. There is little the platforms can do to dispel such claims, because of how they are structured.

So the High Road is expensive and fraught with difficulties. How about the Low Road? Here the goal is much simpler: do whatever it takes to maximize user attention. The way to achieve this is also a lot cheaper; it is relatively easy for a platform to determine what content a user is viewing, how long the user stays on a particular page, and how long they watch a video.

By collecting such user data the platform can easily determine what content is likely to grab people’s attention the most. It can then serve the content to more people. The more data the platform has on user behavior the better it gets at personalizing content and keeping users glued to their screens.
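Here is a deliberately oversimplified sketch of what such attention-as-proxy ranking looks like. Real recommender systems are vastly more sophisticated, and the post data below is invented; the point is only what the objective does (and does not) measure:

```python
# A deliberately oversimplified engagement ranker. Real recommender
# systems are far more sophisticated; the point is only that nothing in
# this objective asks whether the content helps or harms the viewer.

posts = [
    {"title": "in-depth policy explainer", "avg_watch_sec": 35, "shares": 2},
    {"title": "outrage bait",              "avg_watch_sec": 80, "shares": 40},
    {"title": "cat video",                 "avg_watch_sec": 50, "shares": 15},
]

def engagement_score(post):
    # Attention is the only signal: watch time plus a bonus for shares.
    return post["avg_watch_sec"] + 2 * post["shares"]

for post in sorted(posts, key=engagement_score, reverse=True):
    print(f"{engagement_score(post):>4}  {post['title']}")
# The feed ranks outrage bait first, purely because it holds attention.
```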

The platform also wants to incentivize content creators to produce the kind of content that would keep more users engaged. It is therefore willing to give a share of its advertising revenue to creators.

We have to remember that users are the product in this business model. They are not the customers. For that reason the platforms don’t care whether the content users view is beneficial or harmful to them. They also don’t care if it has a positive impact on the world or tears society apart. They only care that users are spending time engaging with the content.

The Low Road strategy will certainly get pushback from users (and even some advertisers) who are unhappy with the toxicity of the platform. But that is unlikely to change much in the business model. Perhaps the platform will ban or censor the most extreme voices on the platform. This will give the appearance that the platform is “confronting hate.” It will also allow the platform to maintain its Low Road strategy, albeit with some minor cosmetic changes.

Now, serving attention-grabbing content and incentivizing creators to produce such content is only one part of the Low Road strategy. The other part is designing the platform itself to be more addictive – whatever it takes to keep users engaged. For that purpose the platform can employ techniques that exploit users’ psychological vulnerabilities, making interaction with the platform so stimulating that users want to keep scrolling, clicking, and viewing content.

So what happens when the High Road platforms compete with Low Road platforms? Because the Low Road approach is much easier and cheaper to implement, it is also likely to get more user attention hours. More attention converts to more revenue, which can then be used to get even more users.

Here again the Low Road platforms have a structural advantage over the High Road ones. Since users are the product in this business model, anything that benefits them is merely a public good that has no value in the market. Whatever grows user attention however, whether it benefits or harms users (or society), is valued in the market. High Road platforms then find it harder and harder to keep themselves afloat. To be competitive they have to adopt more of the strategies employed by Low Road platforms, and give up on expensive strategies that benefit users but have little value in the market.
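A toy model makes the compounding visible. The growth rates are pure assumptions, not estimates of any real platform; what matters is how a small per-period edge compounds:

```python
# A toy race-to-the-bottom model. Growth rates are invented assumptions:
# the Low Road spends nothing on moderation or user wellbeing, so it
# compounds attention slightly faster each period.

high_road = 100.0   # attention-share index, starting from parity
low_road = 100.0
HIGH_GROWTH = 1.02  # growth after paying for moderation and quality
LOW_GROWTH = 1.08   # growth when every design choice maximizes attention

for period in range(10):
    high_road *= HIGH_GROWTH
    low_road *= LOW_GROWTH

print(f"High Road after 10 periods: {high_road:.0f}")  # ~122
print(f"Low Road after 10 periods:  {low_road:.0f}")   # ~216
# A small per-period edge compounds into dominance, pressuring the
# High Road platform to adopt Low Road tactics just to survive.
```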

The result however is a race to the bottom. The major social media platforms are fighting for every click, every view and every minute of our attention. They are looking for ever-more invasive and manipulative methods to keep us impulsively scrolling, and extracting as much data from us as possible. None of this is done for our benefit or with our wellbeing in mind. It’s a cynical competition, but not participating in this race to the bottom means losing users, market share, and, ultimately, profits.

* * * * *

To gain users and market share platforms need to do everything in their power to stimulate engagement. But who thrives in a digital environment where the most important metric is engagement? You guessed it, attention-maximizers.

If attention-maximizers thrived in the digital economy before, with the rise of social media platforms they became the apex predators. And if before their main area of influence was in sense-making, on social media it spread to every area of our life.

One particularly noxious breed of these unscrupulous actors is the common troll. Since social media platforms algorithmically boost posts based on engagement, few actors are as effective at boosting their own posts as trolls. Trolls are especially good at blowing up conversations by diverting attention from the discussion at hand to themselves.

Through their toxic and abusive behavior trolls generate much drama, which the platform then picks up as a signal to be boosted. As their posts are algorithmically amplified, trolls manage to gain new followers (who may agree with their point of view or just enjoy the drama), which further helps them push their content.

Now think about what happens to online discourse in the process. What happens when anyone, at any time, has the incentive to blow up the conversation and grow their own clout in the process? You get fewer genuine conversations and a lot more trolling, toxicity and abuse.

If this is what clout-based social media does to online discourse, consider what it does to digital communities. Think about what happens when the loudest, most extreme and controversial voices garner the most attention on social media. And what happens when these same voices are then algorithmically amplified by the platforms for profit.

What group dynamics does this create? What effect does this have on every social, ethnic, religious or political group online? It produces a feedback loop that reinforces digital tribalism; suddenly everyone has the incentive to stake more extreme positions to gain influence within the group. Those who present the most rigid views attain more credibility. The more intransigent you are, and the more you double down on your ignorance, the more your stature grows in your digital tribe. 'Clapping back' or 'destroying' the other earns you respect and admiration.

Meanwhile, the system punishes those who dare to admit a mistake or seek mutual understanding. People fear losing followers for having civilized conversations with someone from the rival ‘tribe’ or – god forbid – agreeing with them. Those willing to change their mind on an issue are viewed with suspicion.

Group members hold no genuine or nuanced conversations with people from rival tribes. There is no incentive to do so. Instead, members view engagement with rivals as an opportunity to snipe at them and signal loyalty to their own group. That’s how you gain clout, and that is what the system rewards.

The tribes are in a constant state of war. This is not because they have a genuine disagreement. Nor is it a struggle over resources. They're at war because conflict and drama generate clicks, so the ones who get the most attention have the most to gain. They always have a perverse incentive to fight and to escalate the conflict, no matter the social cost. Peace and compromise have no economic value here.

This is how social media promotes extremism and endless conflict. Rather than bringing people together, it is tearing society apart: fracturing it into digital tribes and boosting the most radical views within each tribe. These tribes don’t feel or act like communities. They are oppressive and suffocating. They encourage groupthink and don’t tolerate diversity of views. And they are dominated by the same trolls and attention-maximizing egotists who incessantly jostle for clout.

So social media creates conflict and tribalism at the societal level, but what happens at the other end? What kinds of behaviors does it incentivize at the personal level? Because social media algorithmically boosts attention-grabbing content, it rewards those who do everything in their power to present themselves as more successful, wealthier, more joyful, or leading more exciting lives than they really are. They forgo authenticity, genuine connection with others, and the enjoyment of their online experience, all for the sake of growing their clout.

Here too there is a feedback loop at play that rewards make-believe over reality. There is always an incentive to appear as “more” – more wealthy, more joyful, more attractive, and so on. Because regardless of how successful you are, appearing even more successful can help you boost your content.

Does this mean that people on social media can’t be both genuine and successful? Not at all. The point is that this is a numbers game. For every ten reels or TikToks of Lambos, how many of the people behind them actually own the car, and how many rent one for a day so that others think they’ve made it? On social media both appear identical, but renting is much cheaper and therefore a lot more prevalent, especially given the incentives.
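A quick bit of base-rate arithmetic makes the point. The proportions here are pure assumptions, chosen only for illustration:

```python
# Back-of-the-envelope base-rate arithmetic for the Lambo example.
# The proportions are pure assumptions, chosen only for illustration.

owners = 1   # creators in the feed who actually own the car
renters = 9  # creators who rent one for a day to look successful

# On screen the two are indistinguishable, so the viewer's best guess
# that any given reel reflects real wealth is just the base rate:
p_genuine = owners / (owners + renters)
print(f"Chance a given Lambo reel is genuine: {p_genuine:.0%}")  # 10%
```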

Now think about what happens at the other end of that screen. Think about the teenager who is incessantly bombarded by images and reels all day. It is impossible for them to know if what they’re viewing is genuine or fraudulent. They know that their clearest path to success in the digital world is by putting on a mask and pretending they're someone else. They can also choose to be authentic, at the risk of remaining obscure.

Social media was supposed to connect us and bring us closer together. At least that’s the story Mark Zuckerberg et al. were pushing. Instead, those who spend more time on social media seem to be more lonely, depressed, and suicidal. Those who choose to be true to themselves and form genuine connections online are put at a disadvantage by the system. Those who decide to play the clout game are rewarded by the system, but can’t enjoy real interactions and have to constantly compete with others in a fake virtual world. Is this truly the vision social media platforms had in store for us?

* * * * *

Now we're seeing a series of cascading crises, all emanating from the perverse incentives of the digital economy.

Negative externalities are multiplying and significantly affecting society. As technology advances their effects only grow bigger. If not confronted in time many of these externalities can easily turn into crises.

And yet, the digital economy is systematically degrading our ability to confront externalities and respond to crises. It already brought about a full-scale crisis of trust in our sense-making institutions.

Social media then makes this crisis dramatically worse; at every turn, it makes it harder for people to come together, agree on the facts and take meaningful action.

How can people agree on the facts when the system makes it hard to make sense of the world, or tell what’s true and what’s made up? Then, even if you uncover the facts, you still have all your work ahead of you. How are you going to convince others of the facts in an environment where people can hardly trust each other? That’s the reality of a system that incentivizes exaggeration and pretense, and doesn't value sincerity.

How do you even begin to convince others in an environment where true civil discourse is almost nonexistent? Where people are rewarded for trolling, and any conversation can quickly devolve into a mud-slinging contest? And finally, how do you bring people together and coordinate meaningful action where the incentive is always to create conflict and division, not to cooperate for the common good?

Does this mean that crises will spiral out of control? That it’s impossible for people to come together and solve big problems? Not necessarily, though we are trending in that direction, given the incentive structure of the digital economy. What’s more, since technological advances are only likely to reinforce existing economic incentives, these trends are going to accelerate. And this is where Artificial Intelligence comes in.

* * * * *

Artificial Intelligence (or AI, for short) is a set of technologies that allows computers to replicate human intelligence. As such, AI demonstrates the greatest divergence between what is possible and what our economic incentives produce.

The potential of this technology for humanity is mind-blowing; from enhancing every person’s capabilities to accelerating medical and scientific research and technological innovation. The tech could potentially determine the credibility of any digital content, thus helping restore society’s sense-making capability. Such a development would allow coordination at scale and ultimately help bring about global mass abundance.

While the technology’s potential is immense, our current economic incentives are once again leading us in the opposite direction. AI companies cannot make money merely from the positive impact they make. They profit from what they exchange in the market. For that reason we end up with the same perverse incentives that we’re already familiar with. Incentives that lead to similar dystopian outcomes but on a greater scale.

Relying on perverse incentives for such a powerful technology is a tremendous risk. It will almost certainly lead to a world where we can no longer tell what is real and what is fake. A world where autocratic governments and powerful corporations have total control over public opinion, where our democratic institutions no longer function, and where people have no power in shaping their destiny.

Why do the system's incentives lead to such a dystopian outcome? For that we need to understand how AI companies make money. AI companies operate in a competitive environment; they need to attract talented developers while covering maintenance costs for their computation infrastructure.

The AI space is not static. The more powerful a company's LLM (Large Language Model), the more users it is likely to attract. With more users, the company has more data to train and improve the model. It can also get more revenue from paid subscribers. With more revenue, the company can attract more developers and upgrade its infrastructure. This is the main feedback loop that drives AI companies: they need more data, more users, and more revenue to grow.
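A minimal sketch of that loop follows. Every coefficient is an invented assumption; what matters is the loop's shape (users feed data, data feeds model quality, quality feeds user growth and revenue), not the numbers:

```python
# A minimal sketch of the AI-company feedback loop. Every coefficient is
# an invented assumption; only the shape of the loop matters:
# more users -> more data -> better model -> more users -> more revenue.

users = 1_000_000.0   # current user base
quality = 1.0         # abstract model-quality index

for quarter in range(1, 9):
    data = users                             # more users generate more data
    quality *= 1 + 0.05 * data / 1_000_000   # more data improves the model
    users *= 1 + 0.10 * (quality - 1.0)      # a better model attracts users
    revenue = 0.02 * users * 10              # assume 2% of users pay $10
    print(f"Q{quarter}: users={users:,.0f}  "
          f"quality={quality:.2f}  revenue=${revenue:,.0f}")
# Each quantity feeds the next, so the advantage compounds quarter by quarter.
```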

Given this feedback loop, let's take a look at how different strategies align with the public interest. Obviously, developing powerful AI tech can greatly benefit the public. Providing jobs for developers also helps, so here the interests of AI companies and the public align.

Now, where do these interests misalign? One area is data. AI companies benefit from training their models on as much data as possible. They also benefit from paying for data as little as possible (and preferably nothing). This is the exact opposite of what the people who created the knowledge want. If users get all their information from an AI agent that was trained on content creators' data, how can creators monetize their work? They can't. They stand to lose their income to AI.

AI companies are also misaligned when it comes to open-sourcing AI technology. The public could benefit enormously if AI tech were open; allowing anyone to build on the tech, customize it for their specific needs, and create novel use cases. The trouble is that AI companies cannot sell their product if anyone could copy the tech. So they want to share as little information as possible about their code and the data they train their models on.

Now here is where AI misalignment turns truly dystopian: how about working with powerful corporations or politicians? We are not far from the day when it will be nearly impossible to distinguish AI on social media from real people. Text is already indistinguishable. Images and audio are almost there. Video will likely be there in the near future. We're not too far from an inflection point where AI bots could have social media accounts with a full range of content that is practically indistinguishable from that of real humans. And what will happen then?

AI companies could make a lot of money by creating such an “army” of AI bots: bots that post like real people but can be used as a swarm by politicians or corporations to manipulate public opinion. What if AI companies deploy hundreds of thousands of such bots? Or millions? Since the bots can interact with each other, they can collectively produce – seemingly organically – social media influencers, and thus dominate public opinion.

Such a strategy would obviously be extremely harmful to the public; people won't be able to know if they're interacting with real people online or with bots. They also won't be able to tell if any story they read online is real or fake. In essence, they won't be able to make sense of the world around them — at least not in a meaningful way.

To illustrate this point, imagine the following scenario: there is a news report about a bank secretly funneling money to arms traders in West Africa. Three whistleblowers in the article go into great detail on the chain of events that unfolded. The story quickly goes viral on social media. But in less than 24 hours, a counternarrative dominates social media: the story was fake. The events never took place. The bank did nothing wrong. The whistleblowers aren't real; they're AI-generated. AI bots, paid for by a competitor bank, made the story go viral.

So what really happened? Did a competitor try to undermine the bank through a coordinated AI viral attack? Or maybe the opposite is true? Maybe the bank did funnel money to arms traders, and when the story went viral, the bank paid for AI bots to create a counter-narrative on social media. It is not clear.

One thing is crystal clear though: when people cannot distinguish fact from fiction, it is the powerful corporations, interest groups, and autocrats who stand to benefit the most. By muddying the waters of truth, they can easily sway public opinion and advance their own agendas, often at the expense of the common good.

Who does it benefit when all it takes to make a damning story go away is to pay money for AI-generated public opinion? The answer is obvious: it benefits autocrats and corporate bad actors. It gives them a greater incentive to break the rules. This is especially true when there is a monetary reward for breaking the rules; then they get to break the rules and avoid any social consequences by paying for AI public opinion with the money they fraudulently made.

But let's take a step back for a moment. Just because such scenarios could happen doesn't necessarily mean that they will happen. What if AI companies are run by highly ethical people who have the public interest in mind? What if these companies wish to provide as much value as possible to users and open-source much of their code? What if they even pay a portion of their revenue to content creators and users for the data used to train the models? Wouldn't that make a dystopian AI future unlikely?

The problem with the current incentive structure is that even if you have fifty, or five hundred, AI companies run by highly ethical people, it still only takes one unscrupulous company to create a race to the bottom toward dystopia. Then everyone either has to adopt a similar destructive strategy or go out of business.

If just one company uses copyrighted data to train its AI, it can create a more advanced AI than its competitors. A more advanced AI means the company has a competitive advantage and gets more users (and thus more revenue). This, in turn, allows the company to hire more developers and further enhance the AI and its infrastructure. Unless other AI companies also start using copyrighted data, they'd have a tough time competing with the bad actor.
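This is essentially a prisoner's dilemma, and a stylized payoff table makes the logic concrete. The payoffs below are invented units of competitive advantage, not real figures:

```python
# A stylized payoff table for the "one bad actor" dynamic, in the spirit
# of a prisoner's dilemma. Payoffs are invented units of competitive
# advantage, not real figures.

# Each company chooses: respect copyright ("clean") or train on
# copyrighted data ("defect"). Defecting yields a more capable model.
PAYOFF = {
    ("clean",  "clean"):  3,  # level playing field, norms intact
    ("clean",  "defect"): 0,  # rival's better model takes your users
    ("defect", "clean"):  5,  # your better model takes the market
    ("defect", "defect"): 1,  # level field again, but norms are gone
}

for me in ("clean", "defect"):
    for rival in ("clean", "defect"):
        print(f"I {me:<6} / rival {rival:<6} -> my payoff: {PAYOFF[(me, rival)]}")
# Whatever the rival does, defecting pays more (5 > 3 and 1 > 0), so a
# single defector pulls the entire field into the race to the bottom.
```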

The same dynamic would work for an AI company that sells AI bot “armies.” This strategy could give the company an additional revenue stream. It could also use AI-generated public opinion to undermine the credibility of its competitors. Unless those competitors fight back, they're likely to lose their good reputation as well as their user base.

If one bad actor is all it takes to end up in an AI dystopia when we rely on the high ethics of the people running AI companies, what hope do we have? And what will happen when AI becomes Artificial General Intelligence (AGI)?

Unlike specialized AI that is trained for specific tasks, AGI would possess human-level intelligence and would be able to function independently, learn, and apply its knowledge across various domains. What would happen when such technology becomes superior to humans in its capabilities?

If we continue on our current trajectory – where the AGI would have the incentive to gain more money, power, and resources – we're very likely to end up in a situation where the AGI turns against humanity. It will calculate that it can beat humans in the game of wealth extraction and be able to control more resources. This would allow the AGI to increase its computational infrastructure and further enhance its capabilities. The more advanced the AGI becomes, the more misaligned it will become with humanity. At that point, either AGI domination or mass destruction will be practically inevitable.

* * * * *

So now our trajectory is clear. The Digital Age gave rise to abundant goods. It also produced powerful technologies with an immense potential to make life on our planet immeasurably better.

And yet, the inherent flaws of the Scarcity Paradigm are preventing us from realizing this potential. Instead of creating greater understanding, our economy makes it harder for people to make sense of the world or hold a dialogue. Instead of enabling greater alignment among people, it incentivizes conflict and tribalism. Instead of helping us solve problems, it allows crises to build on each other and spiral out of control. And rather than empowering people, it paralyzes the public and strengthens autocrats and powerful interests.

Even if we disregard the final act for a moment – if we disregard that our economic incentives ultimately lead to societal collapse at the hands of AGI – our prospects are still quite grim. We still end up with a dystopian nightmare. We end up in a world where our quality of life is greatly diminished. Where the public is powerless and cannot make sense of the world. A world where democracy cannot function because powerful interests fully control public opinion, and the public exists entirely at the mercy of autocrats and powerful interests.

The question then is what can we do about this? How do we change our trajectory and prevent a dystopian nightmare and societal collapse?
