Internet Algorithms and Cults

The following is adapted from a speech I gave at a conference recently, on the relationship between cognitive biases, internet algorithms, and the increasing polarization of the political spectrum that we can witness today.


Francis Fukuyama argued in his 1989 article “The End of History?” that, with the Cold War ending and Western liberal democracy ascendant, humanity was reaching “the end of history as such: that is, the end point of mankind’s ideological evolution and the universalization of Western liberal democracy as the final form of human government”.¹

Without arguing in detail against Fukuyama’s complete thesis – despite agreeing with one of the conference attendees who described him as a ‘tragic fool’ – I will instead focus on one key assumption where I think he went wrong, and from which I believe all his other mistakes derived. In my opinion, Fukuyama’s original mistake was thinking that ideas could be defeated even semi-permanently.

Recently, there has been a clear rise in identity politics – the tendency of people who share an ethnicity, religion, sexual orientation, or some other superficially non-ideological trait to group together and advocate causes from their in-group’s perspective. At the same time, old favourites like Marxism and fascism refuse to go away, with thriving groups still advocating them. The reason for this, I believe, is also one of the reasons why Fukuyama’s predictions about the future of ideologies went so wrong: as far as I know, ideas can only be defeated by other ideas – not by the collapse of regimes – and never permanently, as long as the idea retains some utility for individuals. As long as an ideology cannot answer everyone’s every material and spiritual need, other ideas will be there to compensate for the lack. Whatever you think of Western liberal democracy, you must surely agree that it cannot fulfill everyone’s every need, material and spiritual.

When thinking of the resilience of ideas, consider how astrology thrives despite having no factual basis. It fulfills people in some valuable way, so it persists. As long as Western liberalism cannot satisfy everyone on every facet of their material and spiritual lives, other ideologies will live on, even if their flames are muted for a time. Western liberal democracy has not proven able to answer everyone’s every need – at least not quickly enough to have made other ideologies obsolete.

The role of the internet in this is that it has enabled people to group together and polarize around ideas catering to their own needs faster and more easily than was previously the case. Nowadays, most of the information transferred within Western society passes through the internet and its many algorithms. From social relationships to politics, our information about other people and their ideas gets filtered through the internet.

As such, it matters how internet algorithms pander to our biases.

Big search-engine and social media companies make revenue based on how long they can keep us browsing on any given page, and so they employ algorithms designed to serve you more of what you engaged with before, or what people with similar browsing habits have looked at before. This is a successful tactic both for making the browsing experience more pleasant for the consumer and for generating revenue by keeping you browsing.

Ideologically, it is also a recipe for:

  • Regression in tolerance and an increased in-group/out-group dichotomy – i.e. tribalism.
  • Ideas becoming insulated from criticism.
  • The distillation of ideologies into their more extreme forms.
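
To make the feedback loop concrete, here is a minimal toy sketch in Python. It is emphatically not any real platform’s algorithm – the two ‘camps’, the click probabilities, and the exploration rate are all invented for illustration – but it shows how ‘serve more of what was engaged with before’, combined with a mild preference for agreeable content, pushes a feed towards one camp:

    import random

    random.seed(0)

    # Toy model, not any real platform's system: items belong to one of
    # two "camps" (0 or 1). The feed serves more of whatever camp the
    # user has engaged with most, plus a little random exploration.

    def recommend(history, exploration=0.1):
        """Serve an item, usually from the user's dominant camp so far."""
        if not history or random.random() < exploration:
            return random.choice([0, 1])
        return max(set(history), key=history.count)

    def simulate(steps=1000):
        clicks = []
        for _ in range(steps):
            item = recommend(clicks)
            # Crude stand-in for confirmation bias: the more the feed
            # already matches an item's camp, the likelier the click.
            match = clicks.count(item) / (len(clicks) or 1)
            if random.random() < 0.5 + 0.4 * match:
                clicks.append(item)
        return clicks

    clicks = simulate()
    dominant = max(clicks.count(0), clicks.count(1))
    print(f"share of dominant camp: {dominant / len(clicks):.2f}")

A small initial imbalance is all it takes: whichever camp gets clicked first is served more, which produces more clicks, which is served more still.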

The human mind is a machine that did not evolve to deal with the current intellectual and technological environment. It has not had time to adapt to the information age, and it certainly has no mechanisms to counter internet algorithms. Cognitive biases are systematic reasoning errors our minds tend to make because of the shape of our brain, and when they combine with ideologies and group identities, they breed cultishness. Like cognitive biases, cultishness is a human phenomenon that emerges when we group together, because it served us well for a period of our evolutionary history.

“Every cause wants to be a cult.”²

What I mean by cultishness in this context is high conformity to one’s in-group, hostility towards out-groups, and the polarization and distillation of the group’s beliefs over time. Conformity, in-group bias, and reluctance to change one’s opinion are all ubiquitous among humans regardless of other factors. Our mind is a machine that enjoys being in a cult. It is not a matter of whether an ideology or its supporters are innately cultish; they all have that potential.

This tendency of ideological groups to decay into cultishness has been around for a long time, but the internet and its algorithms have begun to act as a catalyst, accelerating the process. The algorithmic nature of social media and search engines has enabled the formation of countless in-groups that are effectively insulated from opposing ideas, unless their members go out of their way to seek countering voices – which we are unlikely to do, given our innate biases and insecurities.

Think of this: a person has an issue to which she has not found a satisfying answer. She goes to the internet for answers and finds a group with a cause that seems to provide one. She is relieved, and begins interacting with the people of this cause, reading more of what they have to say and forming meaningful relationships within it. Internet algorithms make sure she is served more links to associated ideas and causes, and she goes deeper down the rabbit hole, euphoric at having her eyes opened to so many things she had never thought of before.

Eventually she will encounter a ’normie’ who has never even heard of the answer to her original issue, let alone the other associated ideas she has since adopted. To the convert, this normie and everyone else will seem like sheep with their eyes closed, for she has seen the light. At this point, unless she goes out of her way to challenge herself, nobody will manage to make her rethink her stance: everyone else thinks she is the lunatic, and discussions between people who underestimate each other’s mental capabilities are not going to convince either side.

Additionally, within these groups the first ones to leave or be ostracized are the moderates on the margins. When the most sceptical members are inclined to leave or be excommunicated by the group, the average opinion naturally shifts towards the more extreme.

Rinse and repeat, and causes can become quite ’extreme’ in no time.
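
To see the arithmetic behind this drift, consider a toy model in the spirit of what Yudkowsky calls the evaporative cooling of group beliefs. The numbers below are invented for illustration: opinions sit on a −1..1 axis where larger magnitude means more extreme, and each round the most moderate member (the one closest to 0) exits:

    import statistics

    # Invented opinions on a -1..1 axis; larger magnitude = more extreme.
    opinions = [-0.1, 0.2, 0.3, 0.5, 0.6, 0.7, 0.8, 0.9]

    for round_no in range(4):
        print(f"round {round_no}: n={len(opinions)}, "
              f"mean opinion = {statistics.fmean(opinions):+.2f}")
        opinions.remove(min(opinions, key=abs))  # most moderate member exits

    print(f"after exits:  n={len(opinions)}, "
          f"mean opinion = {statistics.fmean(opinions):+.2f}")

The mean climbs from about +0.49 to +0.75 without a single member changing their mind; the group’s centre of gravity moves outward purely through who leaves.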

An ideology does not need a deep hidden flaw for its adherents to form a cultish in-group. It is sufficient that the adherents be human; everything else follows naturally. Decay into cultishness is the default state. Internet algorithms are unwittingly complicit in increasing cultishness by pandering to our biases. However, the internet is just a catalyst for a fundamentally human phenomenon. Even so, because the internet facilitates and accelerates this process, we need more vigilant metacognition and more advocates for better ways of thinking about ideological issues.

Ideas cannot be defeated, at least not permanently, because of the shape of our brain. People will keep returning to ideas that appeal to them as long as alternative ideas do not answer all their material and spiritual needs. When they form groups around these ideas, all the mechanisms I have discussed kick in, enhanced by the algorithmic nature of the internet, which filters almost all of the information we receive today.

This is why I think we are where we are right now, and why Western liberal democracy is not the sole surviving player on the field.



¹ Fukuyama, Francis. “The End of History?” The National Interest, no. 16 (1989): 3–18.

² Yudkowsky, Eliezer. “Every Cause Wants to Be a Cult.” In Rationality: From AI to Zombies, 458–460. Berkeley: MIRI, 2015.

This blog post and the speech it is based on were heavily influenced by Yudkowsky, Eliezer. Rationality: From AI to Zombies. Berkeley: MIRI, 2015.

Mundane Intellectual Honesty

In this essay series, I will write down my own thoughts about Eliezer Yudkowsky’s essays in the Rationality: From AI to Zombies series from the point of view of a historian. My reason for writing these is primarily to organize my own thoughts on the use of rationality as a mental tool in my profession, and as such, I do not presume even to attempt to appeal to a very wide audience. However, if you are reading this disclaimer and find my essays insightful or entertaining, the more power to you, and I implore you to go and read the original essays if you have not already.

Historians have the liberty to interpret their sources with scarcely any limitations. Having an established theoretical framework helps, since the reader can then draw on previous knowledge of the type of reasoning used as they proceed with a book or an article. Nevertheless, as long as we can explain our thought process to whoever might take issue with our assessments, our interpretations are considered valid, even if people may disagree with us. Consequently, it is quite beneficial to keep track of your thoughts as you do research, preferably by writing down the path your thoughts took to reach a conclusion – including the possible leaps of faith along the way. You might be surprised to find how many of your assumptions are actually based on cached thoughts rather than actual evidence.

When pointed out, these reasoning mishaps can cause embarrassment or, worse, a resistance to giving up your unfounded ideas (because you have already become attached to them, and admitting to being wrong is hard). Wherever you find a mushy step in your reasoning that amounts to ‘…it’s complex’ or ‘Step X emerges from Step Y’ without Y giving any concrete hint of why X would emerge from it, get brutal with yourself and replace the step with “I don’t actually know what happens here”. You can then return to it later to try to find out what happens there, or be honest and admit to not knowing everything about the phenomenon you are studying. On the bright side, this self-scrutiny may give you novel ideas and perspectives on several points of interest as you review your thought process.

Thinking is such a natural process that society does not put enough emphasis on it, or give enough credit to those who do it exceptionally well. To think well, you need to meta-think, and through meta-thinking you will be able to find the blind spots in your reasoning and even predict some mental processes before your brain subjects you to them. As Yudkowsky says in his essay “The Lens That Sees Its Own Flaws”:

“The whole idea of Science is, simply, reflective reasoning about a more reliable process for making the contents of your mind mirror the contents of the world.”¹

We want our assessments to be our best possible estimates of the state of reality – everything else is a lie. It is easy to get carried away with ideas that we like, either because it feels like we are offering a novel perspective that will get us attention, or because our assessment falls in line with our previous expectations about how the world is. When this happens, we are tempted not to think too hard about the process our minds went through to reach our conclusions. It is natural to want to be proven right, but a historian, like any other scientist, should be most pleased when looking at the world through as few filters as possible.

On a related note, the quest for accuracy can unfortunately take a banal turn at times. Sometimes (quite often), discussions within the field about the true nature of things turn into debates about semantics and what we actually mean when we use certain words. To be sure, it is useful to get on the same page about terms like ‘nationalism’ or ‘commerce’, not least because of the risk of anachronism. We also want to avoid participants each having a different idea of what phenomenon is being discussed, as such conversations lead nowhere. However, when we delve into discourse about ‘truths’ in history, I am often reminded of this debate example by Yudkowsky:

“Suppose that, after the tree falls, the two walk into the forest together. Will one expect to see the tree fallen to the right, and the other expect to see the tree fallen to the left? Suppose that before the tree falls, the two leave a sound recorder next to the tree. Would one, playing back the recorder, expect to hear something different from the other? Suppose they attach an electroencephalograph to any brain in the world; would one expect to see a different trace than the other? Though the two argue, one saying “No,” and the other saying “Yes,” they do not anticipate any different experiences. The two think they have different models of the world, but they have no difference with respect to what they expect will happen to them.”²

More often than not, semantic discourse between historians ends up in this territory of preferred terms, even if it began as a genuine attempt to understand what is actually being talked about. If the expected end result of a discussion is, at best, the victory of one term over another without either debating party actually changing their mind about the contents of the phenomenon being discussed, I struggle to find a point in these interactions.

Our beliefs should be our best possible estimates of the nature of reality, and we should avoid using muddled language whenever we can. Be that as it may, getting too wrapped up in semantics over substance only makes discourse within the field harder, and makes us look petty to any outside listener who might be interested in what we have to say about the phenomena themselves.



¹ Yudkowsky, Eliezer. “The Lens That Sees Its Own Flaws.” In Rationality: From AI to Zombies, 40–42. Berkeley: MIRI, 2015.

² Yudkowsky, Eliezer. Rationality: From AI to Zombies, 45–48. Berkeley: MIRI, 2015.