Not Agreeing to Disagree

I have always had an instinctive problem with the concept of people agreeing to disagree. As such, I was delighted to discover Aumann’s Agreement Theorem, which states that no two perfectly rational agents with common priors and common knowledge of each other’s opinions can agree to disagree. From this theorem it follows that if two people disagree with each other, at least one of them must be doing something wrong, or be working from limited data on the subject.
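For the mathematically inclined, the content of the theorem fits on one line. The following is an informal rendering (the notation is mine, not Aumann’s), with E standing for the event under dispute and each agent’s posterior conditioned on their private information:

```latex
% Aumann (1976), informally: two agents share a common prior and
% update on private information I_1 and I_2. If both posteriors are
% common knowledge between them, they must coincide:
q_i = P(E \mid \mathcal{I}_i), \quad i \in \{1, 2\};
\qquad q_1, q_2 \ \text{common knowledge} \;\Longrightarrow\; q_1 = q_2 .
```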

Incidentally, this theorem also stems from Robert Aumann’s 1976 (1976!) discovery that a sufficiently respected game theorist can get just about anything into a peer-reviewed journal. Considering the origins of the theorem and how long it has been around, you would think we would have fixed this issue by now. Yet many of you probably remember the Grievance Studies affair of 2017–18, in which three authors (James A. Lindsay, Peter Boghossian, and Helen Pluckrose) created bogus academic papers and submitted them to academic journals in the areas of cultural, queer, race, gender, fat, and sexuality studies. Their motive was to expose poor scholarship in these fields of study. From Wikipedia: ‘By the time of the reveal, four of their 20 papers had been published, three had been accepted but not yet published, six had been rejected, and seven were still under review. One of the published papers had won special recognition.’

It seems that for the editors who accepted the articles, as long as a paper appeared to take a social constructionist point of view, it did not matter what was actually written, because all interpretations coming from that angle can be valid. The disciplines targeted in the Grievance Studies affair are particularly vulnerable to this, as they are very theory-heavy subjects structured around social constructivism. However, historians and history as a field of study are not immune to this either.

From my own observations, I would say that historians tread too carefully around other people’s interpretations, especially those of people they personally know. Nobody wants to tell another person that their work was for naught and their ideas silly – unless, of course, their conclusions imply this about one’s own research. The social constructivist and post-modern practice of deconstructing ideas and reinterpreting them from a fresh perspective has created an environment where any explanation goes, as long as the person can explain themselves sufficiently within the constraints of their theoretical framework. The problem is that the theoretical frameworks themselves are usually based on nothing but further constructivist ideas. This of course presupposes that the historian in question has handled the sources and methodology soundly, since otherwise historians as a community luckily have no trouble sinking their teeth into the gaps in a given study. But when the source work has been diligent enough and there exists a theoretical framework aligned with the interpretations, we fall silent and start nodding our heads at theories we do not quite agree with and interpretations we do not quite understand.

This topic ties into why I began this blog by writing about what I think truth should mean to historians, and how we should at least be able to acknowledge that there are truths in the strong sense (‘that which is true or in accordance with fact or reality’) and not just truths in the weak sense (‘a fact or belief that is accepted as true’). Even though historians can only ever aspire to the latter kind of truth, I believe our discipline is still about truth-seeking – the making of maps that are our best estimates of the territory – and as such it becomes frustrating when differences in interpretation of the same sources are so readily accepted without any attempt to find a synthesis through discussion. The trend is social constructivism, and if you can paint a picture according to the rules of this ‘style’, then the actual contents of the picture seem to become somewhat immune to criticism.

I do not think all interpretations are equal, and when I come across a disagreement between historians, I automatically think that at least one of them is either wrong or ignorant of some relevant source material. As for myself, I would rather have someone challenge all my presumptions and interpretations about a given topic than nod vacantly despite not quite understanding where I am coming from. I would hope that other historians could find this bit of fight in themselves as well, and leave tolerance to some other playing field.

Internet Algorithms and Cults

The following is an adaptation of a speech I gave at a conference recently, concerning the relationship between cognitive biases, internet algorithms, and the increased polarization of the political spectrum that can be witnessed today.

 

Francis Fukuyama argued in his 1989 article “The End of History?” that, with the ascendancy of Western liberal democracy at the end of the Cold War, humanity was reaching

“the end of history as such: that is, the end point of mankind’s ideological evolution and the universalization of Western liberal democracy as the final form of human government”¹

Without arguing in detail against Fukuyama’s complete thesis – though I agree with one of the conference attendees who described him as a ‘tragic fool’ – I will instead focus on one key assumption where I think he went wrong, and from which I believe all his other mistakes derived. In my opinion, Fukuyama’s original mistake was thinking that ideas could be defeated even semi-permanently.

Recently, there has been a clear rise in identity politics – the tendency of people of shared ethnicity, religion, sexual orientation, or any other ostensibly non-ideological feature to group together and advocate causes from their in-group’s perspective. At the same time, old favourites like Marxism and fascism refuse to go away, with thriving groups still advocating them. The reason for this, I believe, is also one of the reasons why Fukuyama’s predictions about the future of ideologies went so wrong: as far as I know, ideas can only be defeated by other ideas – not by the collapse of regimes – and never permanently, as long as the idea retains some utility for individuals. For as long as an ideology cannot answer everyone’s every material and spiritual need, other ideas will be there to compensate for the lack. Whatever you think of Western liberal democracy, you must surely agree that it cannot fulfill everyone’s every need, material and spiritual.

When thinking of the resilience of ideas, consider how astrology thrives despite having no factual basis. It fulfills people in some valuable way, so it persists. As long as Western liberalism cannot appease everyone on every single facet of their material and spiritual lives, other ideologies will live on, even if their flames may be muted for a time. Western liberal democracy has not proven able to answer everyone’s every need – at least not quickly enough to have made other ideologies obsolete.

The role of the internet in this is that it has enabled people to group together and polarize around ideas catering to their own needs faster and more easily than was previously the case. Nowadays, most of the information transferred within Western society passes through the internet and its many algorithms. From social relationships to politics, our information about other people and their ideas gets filtered through the internet.

As such, it matters how internet algorithms pander to our biases.

Big browser and social media companies make revenue based on how long they can keep us browsing on any given page, and to that end they employ algorithms designed to give you more of what you consumed before, or what people with browsing habits similar to yours have looked at before. This is a successful tactic both for making the browsing experience more pleasant for the consumer and for creating revenue by keeping you browsing (a naive sketch of such a filter follows the list below).

Ideologically, it is also a recipe for:

  • Regression in tolerance and an increased in-group/out-group dichotomy – i.e. tribalism.
  • Ideas becoming insulated from criticism.
  • The distillation of ideologies into their more extreme forms.
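None of these companies publishes its ranking code, but the underlying logic can be caricatured in a few lines. Here is the naive sketch promised above – entirely my own invention, not any platform’s actual algorithm – in which candidate items are ranked purely by how much their topic resembles what the user has already consumed:

```python
from collections import Counter

def recommend(history, catalog, k=3):
    """Rank candidate items by how closely their topic matches
    what the user has already consumed."""
    seen_topics = Counter(item["topic"] for item in history)
    candidates = [item for item in catalog if item not in history]
    # Score purely by how often the item's topic already appears in
    # the user's history -- more of the same wins.
    return sorted(candidates,
                  key=lambda item: seen_topics[item["topic"]],
                  reverse=True)[:k]

history = [{"id": 1, "topic": "politics"},
           {"id": 2, "topic": "politics"},
           {"id": 3, "topic": "sports"}]
catalog = history + [{"id": 4, "topic": "politics"},
                     {"id": 5, "topic": "gardening"},
                     {"id": 6, "topic": "sports"}]

print(recommend(history, catalog))
# -> the politics item first, sports next, gardening last
```

Even this toy filter never surfaces the gardening item to a politics-heavy reader until everything else is exhausted – precisely the insulation from unfamiliar ideas that the list above describes.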

The human mind is a machine that did not evolve to deal with the current intellectual and technological environment. It has not had time to adapt to the information age, and it certainly has no mechanisms to counter internet algorithms. Cognitive biases are systematic reasoning errors our minds make because of the shape of our mental machinery, and when combined with ideologies and group identities, they breed cultishness. Like cognitive biases, cultishness is a human phenomenon that emerges when we group together, because grouping together served us well for a long period of our evolutionary history.

”Every cause wants to be a cult.”²

What I mean by cultishness in this context is high conformity to one’s in-group, hostility towards out-groups, and the polarization and distillation of the group’s beliefs as time goes on. Conformity, in-group bias, and reluctance to change one’s opinion are all ubiquitous among humans regardless of other factors. Our mind is a machine that enjoys being in a cult. It is not a matter of whether an ideology or the people who support it are innately cultish: they all have that potential.

This tendency of ideological groups to decay into cultishness has been around for a long time, but the internet and its algorithms have begun to act as a catalyst, accelerating the process. The algorithmic nature of social media and search engines has enabled the formation of countless in-groups that are effectively insulated from opposing ideas, unless their members go out of their way to seek out countering voices – which, given our innate biases and insecurities, we are unlikely to do.

Think of this: a person has an issue to which she has not found a satisfying answer. She goes to the internet to find answers and finds a group with a cause that seems to provide one. Relieved, she begins interacting with the people of this cause, reading more of what they have to say and forming meaningful relationships within it. Internet algorithms make sure she gets served more links to associated ideas and causes, and she goes deeper down the rabbit hole, euphoric at having her eyes opened to so many things she had never thought of before.

Eventually she will encounter a ’normie’ who has never even heard of the answer to her original issue, let alone the other associated ideas she has now adopted. To the convert, this normie and everyone else will seem like sheep with their eyes closed, for she has seen the light. At this point, unless she goes out of her way to challenge herself, nobody will manage to make her rethink her stance: everyone else will think she is the lunatic, and discussions between people who underestimate each other’s mental capabilities are not going to convince either side.

Additionally, within these groups the first to leave or be ostracized are the moderates on the margins. When the most sceptical members are the ones inclined to leave or be excommunicated by the group, the average opinion naturally shifts towards the more extreme.
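The arithmetic behind that shift is trivial, but worth seeing once. A toy illustration in Python, with invented numbers:

```python
from statistics import mean

# Toy model: opinions on some axis, 0 = arch-sceptic, 10 = true believer.
group = [2, 3, 4, 5, 5, 6, 7, 8, 9]
print(f"average before: {mean(group):.1f}")  # 5.4

# The two most sceptical members leave or are pushed out...
group = sorted(group)[2:]
print(f"average after:  {mean(group):.1f}")  # 6.3

# ...and the group's centre of gravity drifts outward even though
# no remaining member changed their individual opinion at all.
```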

Rinse and repeat, and causes can become quite ’extreme’ in no time.

An ideology does not need a deep hidden flaw for its adherents to form a cultish in-group. It is sufficient that the adherents be human. Everything else follows naturally. Decay into cultishness is the default state. Internet algorithms are unwittingly complicit in increasing cultishness by pandering to our biases. However, the internet is just a catalyst to a fundamentally human phenomenon. Even so, because the internet facilitates and accelerates this process, we need more vigilant meta-cognition and advocates for a better way to think about ideological issues.

Ideas cannot be defeated, at least not permanently, because of the shape of our brain. People will keep returning to ideas that appeal to them as long as alternative ideas do not answer to all their material and spiritual needs. When they form groups around these ideas, all the mechanisms I have talked about kick in, enhanced by the algorithmic nature of the internet, which filters almost all of the information we receive today.

This is why I think we are where we are right now, and why western liberal democracy isn’t the sole surviving player on the field.

 


¹ Fukuyama, Francis. ”The End of History?” The National Interest, no. 16 (1989): 3–18.

² Yudkowsky, Eliezer. ”Every Cause Wants to Be a Cult.” In Rationality: From AI to Zombies. Berkeley: MIRI (2015), 458–460.

This blog post and the speech it is based on were heavily influenced by Yudkowsky, Eliezer. Rationality: From AI to Zombies. Berkeley: MIRI (2015).

Mundane Intellectual Honesty

In this essay series, I write down my own thoughts on Eliezer Yudkowsky’s essays in the Rationality: From AI to Zombies series from the point of view of a historian. My reason for writing these is primarily to organize my own thoughts regarding the use of rationality as a mental tool in my own profession, and as such, I do not presume to appeal to a very wide audience. However, if you are reading this disclaimer and find my essays insightful or entertaining, the more power to you, and I implore you to go and read the original essays, if you have not already.

Historians have the liberty to interpret their sources with scarcely any limitations. Having an established theoretical framework helps, since the reader can then draw on previous knowledge of the type of reasoning used as they proceed with a book or an article. Nevertheless, as long as we can explain our thought process to whoever might take issue with our assessments, our interpretations are considered valid, even if people may disagree with us. Consequently, it is quite beneficial to keep track of your thoughts as you do research, preferably by writing down the path your thoughts took to reach a conclusion – including the possible leaps of faith along the way. You might be surprised to find how many of your assumptions are actually based on cached thoughts rather than actual evidence. When pointed out, these reasoning mishaps can cause embarrassment or, worse, resistance to giving up your unfounded ideas (because you have already become attached to them, and admitting to being wrong is hard). Wherever you find a mushy step in your reasoning that amounts to ‘…it’s complex’ or ‘Step X emerges from Step Y’ without Y giving any concrete hint of why X would emerge from it, get brutal with yourself and replace the step with “I don’t actually know what happens here”. You can then return to it later to try to find out what happens there, or be honest and admit to not knowing everything about the phenomenon you are studying. On the bright side, this self-scrutiny may yield novel ideas and perspectives on several points of interest as you review your thought process.

Thinking is such a natural process that society neither puts enough emphasis on it nor gives enough credit to those who do it exceptionally well. To think well, you need to meta-think, and through meta-thinking you will be able to find the blind spots in your reasoning and even predict some mental processes before your brain subjects you to them. As Yudkowsky says in his essay “The Lens That Sees Its Own Flaws”:

“The whole idea of Science is, simply, reflective reasoning about a more reliable process for making the contents of your mind mirror the contents of the world.”¹

We want our assessments to be our best possible estimates of the state of reality – everything else is a lie. It is easy to get carried away with ideas that we like, either because it feels like we are offering a novel perspective that will get us attention, or because our assessment falls in line with our previous expectations about how the world is. When this happens, we are tempted not to think too hard about the process our minds went through to reach our conclusions. It is natural to want to be proven right, but a historian, like any other scientist, should be most pleased when looking at the world through as few filters as possible.

On a related note, the quest for accuracy can unfortunately take a banal turn at times. Sometimes (quite often), discussions within the field about the true nature of things transform into debates about semantics and what we actually mean when we use certain words. To be sure, it is useful to get on the same page about terms like ‘nationalism’ or ‘commerce’ because of the risk of anachronism. We also want to avoid discussion participants each having a different idea of what kind of phenomenon is being discussed, as conversations like that lead nowhere. However, when we delve into discourse about ‘truths’ in history, I am often reminded of this debate example by Yudkowsky:

“Suppose that, after the tree falls, the two walk into the forest together. Will one expect to see the tree fallen to the right, and the other expect to see the tree fallen to the left? Suppose that before the tree falls, the two leave a sound recorder next to the tree. Would one, playing back the recorder, expect to hear something different from the other? Suppose they attach an electroencephalograph to any brain in the world; would one expect to see a different trace than the other? Though the two argue, one saying “No,” and the other saying “Yes,” they do not anticipate any different experiences. The two think they have different models of the world, but they have no difference with respect to what they expect will happen to them.”²

More often than not, semantic discourse between historians ends up in this territory of preferred terms, even if it began with a genuine attempt to understand what is actually being talked about. If the expected end result of a discussion is at best the victory of one term over another, without either of the debating parties having actually changed their minds about the contents of the phenomenon being discussed, I struggle to find a point in these interactions.

Our beliefs should be our best possible estimates of the nature of reality, and we should avoid using muddled language whenever we can. Be that as it may, getting too wrapped up in semantics over substance only makes discourse within the field harder, as well as making us look petty to any outside listener who might be interested in what we have to say about the phenomena themselves.

 


¹ Yudkowsky, Eliezer. Rationality: From AI to Zombies. Berkeley: MIRI (2015), 40–42.

² Yudkowsky, Eliezer. Rationality: From AI to Zombies. Berkeley: MIRI (2015), 45–48.

On talking about history to a lay audience

In this essay series, I write down my own thoughts on Eliezer Yudkowsky’s essays in the Rationality: From AI to Zombies series from the point of view of a historian. My reason for writing these is primarily to organize my own thoughts regarding the use of rationality as a mental tool in my own profession, and as such, I do not presume to appeal to a very wide audience. However, if you are reading this disclaimer and find my essays insightful or entertaining, the more power to you, and I implore you to go and read the original essays, if you have not already.

Compared with most scholars and scientists, historians are in quite an easy position when faced with having to explain our research to a lay audience. Apart from some specialized brands of history – usually interdisciplinary explorations – even articles published in influential historical journals tend to limit professional jargon. In fact, many publications instruct prospective submitters in their author guidelines to avoid jargon as much as they can in favour of clarity. As a proponent and defender of the popularization of history, I find this to be a good thing. I want people to be able to understand what we are talking about, and despite the benefits of a professional language that allows professionals to discuss topics with useful shortcuts, we should take a few steps back and translate our thoughts into more common language when we address a wider audience.

When you present jargon to a lay audience, you are not only being unkind and unprofessional in your duty as an educator (a duty I think all scientists and scholars share to some extent); I am also inclined to think you are intentionally trying to smuggle your agenda through by masking it in confusing words. Alternatively, you are trying to save face and hide the fact that in actuality you have nothing substantial to say. We rely on the audience to give us the benefit of the doubt and find an agreeable way to interpret what we say, regardless of what we actually say. Usually this works, too, especially within the narrow confines of academia, because people want to listen in good faith. They may even think they are too stupid to understand, and let you off the hook. This way, no matter what is said, the façade of professionalism remains.

Yudkowsky considers this issue in his essay “Rationality and the English Language”¹ and includes a highly relevant quote by George Orwell:

”When one watches some tired hack on the platform mechanically repeating the familiar phrases—bestial, atrocities, iron heel, bloodstained tyranny, free peoples of the world, stand shoulder to shoulder—one often has a curious feeling that one is not watching a live human being but some kind of dummy . . . A speaker who uses that kind of phraseology has gone some distance toward turning himself into a machine. The appropriate noises are coming out of his larynx, but his brain is not involved, as it would be if he were choosing his words for himself . . . What is above all needed is to let the meaning choose the word, and not the other way around. In prose, the worst thing one can do with words is surrender to them. When you think of a concrete object, you think wordlessly, and then, if you want to describe the thing you have been visualising you probably hunt about until you find the exact words that seem to fit it. When you think of something abstract you are more inclined to use words from the start, and unless you make a conscious effort to prevent it, the existing dialect will come rushing in and do the job for you, at the expense of blurring or even changing your meaning. Probably it is better to put off using words as long as possible and get one’s meaning as clear as one can through pictures and sensations.”²

Using jargon, stock phrases, and vague question-begging statements invites multiple interpretations, when we should strive for our words to be understood as we intended. It is better to be literal and simplistic than to sound authoritative or deep, even if we wish to retain our professionalism or fear that plain speech will patronize the audience. Rather than constructing convoluted sentences that take time to unpack, or hiding the things we do not know behind ‘complex’ or ‘emergent phenomenon’, we should strive for clarity and be ready to admit that we do not know all the details. Self-aggrandizement and trying to hoodwink an audience are unflattering.

 


¹ Yudkowsky, Eliezer. Rationality: From AI to Zombies. Berkeley: MIRI (2015), 282–285.

² George Orwell, “Politics and the English Language,” Horizon (April 1946)

Obviously they should have seen it coming

In this essay series, I write down my own thoughts on Eliezer Yudkowsky’s essays in the Rationality: From AI to Zombies series from the point of view of a historian. My reason for writing these is primarily to organize my own thoughts regarding the use of rationality as a mental tool in my own profession, and as such, I do not presume to appeal to a very wide audience. However, if you are reading this disclaimer and find my essays insightful or entertaining, the more power to you, and I implore you to go and read the original essays, if you have not already.

The reason why historical actors tend to appear to us as either Masterminds or Imbeciles can be attributed both to hindsight bias and to the fact that people – historians especially – are very keen on constructing coherent narratives of the past. While it is considered essential to weigh only what people themselves could know at the time of any particular source, the historian usually has the full narrative in mind already, from start to finish. We know what comes next, and thus sometimes we need to remind ourselves that the people at the time did not. It is notoriously difficult to predict the future, or even the consequences of your own actions; there are simply too many factors to consider. And if in hindsight some particular feature stands out above all else, that may only be because it was the straw that broke the camel’s back.

In his essay concerning hindsight bias, Yudkowsky uses the Challenger disaster as an example, reminding us that preventing the disaster ‘would have required, not attending to the problem with the O-rings, but attending to every warning sign which seemed as severe as the O-ring problem, without benefit of hindsight. It could have been done, but it would have required a general policy much more expensive than just fixing the O-Rings.’

As a result of hindsight bias, we tend to think that successful people succeeded in their endeavours because they could plan their course meticulously. Meanwhile, those who failed ought to have been able to predict that one thing, and in failing to do so, appear to have been idiots. Humans are not well equipped to rigorously separate forward and backward messages, so even mindful historians can fall prey to letting forward messages be contaminated by backward ones.

This kind of thinking is especially rife in political history.

Another thing that causes bafflement in students of history at every level is the assumption that most other people likely share your interpretation of a message’s contents. This is compounded by the historian’s perspective of usually actually knowing what the message was supposed to say, thanks to the consequences of its misinterpretation.

In ”Illusion of Transparency: Why No One Understands You”¹, Yudkowsky cites a Second World War example, used in a heuristics study by Keysar and Barr, to illustrate an overconfident interpretation:

“…two days before Germany’s attack on Poland, Chamberlain sent a letter intended to make it clear that Britain would fight if any invasion occurred. The letter, phrased in polite diplomatese, was heard by Hitler as conciliatory—and the tanks rolled.”

It is an instinctive reaction to tear at one’s figurative beard at the stupidity of both parties involved: how could Chamberlain have left any room for interpretation, and what possessed Hitler to think that, in the absence of a direct threat, Britain would stall military action? However, Chamberlain’s style was to be very cautious and mild-mannered in his communication, and it had never resulted in a war before. Similarly, Hitler may have decided to act regardless of the word choices in Chamberlain’s message. We may never know, but knowing how the war ended and what it cost, this exchange makes both men appear as Imbeciles.

Hindsight bias is one of those mechanisms of the mind that historians are well aware of and actively work to counteract, yet end up submitting to too often. Be it hubris, attachment to one’s own narrative, or just laziness of meta-cognition, we all make this mistake sometimes. Still, it should be considered a required professional skill to be able to go backwards with one’s thinking and separate one’s own knowledge from the information that motivated a particular source.


¹ Yudkowsky, Eliezer. Rationality: From AI to Zombies. Berkeley: MIRI (2015), 34–36.
The study he refers to in the essay: Boaz Keysar and Dale J. Barr, “Self-Anchoring in Conversation: Why Language Users Do Not Do What They ‘Should’,” in Heuristics and Biases: The Psychology of Intuitive Judgment, ed. Thomas Gilovich, Dale Griffin, and Daniel Kahneman (New York: Cambridge University Press, 2002), 150–166, doi:10.2277/0521796792.

Why it always takes longer than expected

In this essay series, I write down my own thoughts on Eliezer Yudkowsky’s essays in the Rationality: From AI to Zombies series from the point of view of a historian. My reason for writing these is primarily to organize my own thoughts regarding the use of rationality as a mental tool in my own profession, and as such, I do not presume to appeal to a very wide audience. However, if you are reading this disclaimer and find my essays insightful or entertaining, the more power to you, and I implore you to go and read the original essays, if you have not already.

The planning fallacy is one that is guaranteed to hit most early-career academics below the belt. To illustrate it, Yudkowsky gives a few sample results from studies exploring this bias.

These are direct quotes from his essay “Planning Fallacy”¹, where he summarizes the findings. I am including them because the point deserves to be driven home with an anvil.

Buehler et al. asked their students for estimates of when they (the students) thought they would complete their personal academic projects. Specifically, the researchers asked for estimated times by which the students thought it was 50%, 75%, and 99% probable their personal projects would be done. Would you care to guess how many students finished on or before their estimated 50%, 75%, and 99% probability levels?

13% of subjects finished their project by the time they had assigned a 50% probability level;

19% finished by the time assigned a 75% probability level;

and only 45% (less than half!) finished by the time of their 99% probability level.


Newby-Clark et al. found that

  • Asking subjects for their predictions based on realistic “best guess” scenarios; and
  • Asking subjects for their hoped-for “best case” scenarios . . .

. . . produced indistinguishable results.


Likewise, Buehler et al., reporting on a cross-cultural study, found that Japanese students expected to finish their essays ten days before deadline. They actually finished one day before deadline. Asked when they had previously completed similar tasks, they responded, “one day before deadline.” This is the power of the outside view over the inside view.

The planning fallacy has the most impact on the practical side of academia, and its lessons ought to be heeded especially by PhD researchers and others taking on an expansive research and writing project, perhaps for the first time in their lives. Without prior experience of such projects, we tend to over-analyze the project, and counter-intuitively this leads to overly optimistic estimates of how long it will take us to complete it. Yudkowsky calls this thinking in terms of the unique features of the project the ‘inside view’, and recommends switching to the ‘outside view’ instead when organizing projects for oneself.

The outside view is, in all its simplicity, deliberately avoiding thinking about the unique features of your current project and just asking how long it took others to finish broadly similar projects in the past.
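This is easy to mechanize. A minimal sketch in Python, with made-up numbers standing in for the completion times of comparable past projects:

```python
from statistics import median, quantiles

# Hypothetical reference class: how long broadly similar dissertation
# projects actually took, in years (invented numbers).
past_durations = [4.5, 5.0, 5.5, 6.0, 6.0, 7.0, 8.5]

# The outside view ignores everything unique about your own project
# and simply summarizes what happened to projects like it.
print(f"typical duration: {median(past_durations):.1f} years")

# Quartiles give a rough optimistic-to-pessimistic range.
q1, _, q3 = quantiles(past_durations, n=4)
print(f"middle half of outcomes: {q1:.1f} to {q3:.1f} years")
```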

This should be good news especially for PhD candidates working on their dissertations, as they have a multitude of peer examples to draw from. Not only that, they also have their advisors, who have not only completed a dissertation themselves but have likely also supervised a few to completion. Their estimates should not be brushed off, and one ought not to underestimate other PhD candidates either – they likely had their reasons for their projects extending beyond what was initially planned. The “inside view” does not take into account unexpected delays and unforeseen catastrophes.

… And still, I expect my own dissertation project to be finished in the year 2023, maternity leaves in between and all. In my defense, during my BA and MA I was faster (around 25% faster) than the average student, and I am in a particularly favourable position because I have steady funding until the end of 2022.

Let this blog entry stand as a lesson in humility and the perils of hubris, should 2024 arrive without me having a PhD.


¹ Yudkowsky, Eliezer. Rationality: From AI to Zombies. Berkeley: MIRI (2015), 30–33.

The studies in the quotes are, in the order of appearance:

  1. Roger Buehler, Dale Griffin, and Michael Ross, “Exploring the ‘Planning Fallacy’: Why People Underestimate Their Task Completion Times,” Journal of Personality and Social Psychology 67, no. 3 (1994): 366–381, doi:10.1037/0022-3514.67.3.366; Roger Buehler, Dale Griffin, and Michael Ross, “It’s About Time: Optimistic Predictions in Work and Love,” European Review of Social Psychology 6, no. 1 (1995): 1–32, doi:10.1080/14792779343000112.
  2. Ian R. Newby-Clark et al., “People Focus on Optimistic Scenarios and Disregard Pessimistic Scenarios While Predicting Task Completion Times,” Journal of Experimental Psychology: Applied 6, no. 3 (2000): 171–182, doi:10.1037/1076-898X.6.3.171.
  3. Roger Buehler, Dale Griffin, and Michael Ross, “Inside the Planning Fallacy: The Causes and Consequences of Optimistic Time Predictions,” in Gilovich, Griffin, and Kahneman, Heuristics and Biases, 250–270.

Just because it is Plausible does not make it Probable

In this essay series, I write down my own thoughts on Eliezer Yudkowsky’s essays in the Rationality: From AI to Zombies series from the point of view of a historian. My reason for writing these is primarily to organize my own thoughts regarding the use of rationality as a mental tool in my own profession, and as such, I do not presume to appeal to a very wide audience. However, if you are reading this disclaimer and find my essays insightful or entertaining, the more power to you, and I implore you to go and read the original essays, if you have not already.

 

I said, “It is more probable that universes replicate for any reason, than that they replicate via black holes because advanced civilizations manufacture black holes because universes evolve to make them do it.”

And he said, “Oh.”

The following is based on Yudkowsky’s essay Burdensome Details¹.

The conjunction fallacy is when humans rate the probability P(A∧B) higher than the probability P(B), even though it is a theorem that P(A∧B) ≤ P(B).
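Spelled out, the theorem is a one-liner: whatever event the conjunction A ∧ B picks out is contained in the event B, so its probability cannot exceed that of B alone.

```latex
P(A \wedge B) \;=\; P(B)\,P(A \mid B) \;\le\; P(B),
\qquad \text{since } 0 \le P(A \mid B) \le 1 .
```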

In a classic experiment², Tversky and Kahneman (1982) asked test subjects to rate the probability of statements regarding an imaginary person, Linda. Before giving the statements, they introduced her with this description:

”Linda is 31 years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in anti-nuclear demonstrations.”

Among the statements were the following three:

    • X) Linda is active in the feminist movement.
    • Y) Linda is a bank teller.
    • Z) Linda is a bank teller and is active in the feminist movement.

The test subjects rated statement (Z) as more probable than (Y), with (X) rated most probable of all. This result has been replicated many times, and you can read Yudkowsky’s essay Conjunction Controversy (Or, How They Nail It Down) for more examples of studies into this bias.

The interpretation is that subjects substitute judgment of representativeness for judgment of probability. Because statement (Z) feels more right than statement (Y), they assign it a higher probability, even though a little thought would make it clear that P(A∧B) ≤ P(B). The description activates our heuristics, and because Linda more closely resembles a feminist than a bank teller, the test subjects presumed it more likely that she was a feminist bank teller than just a bank teller. The implausibility of one claim is ‘averaged out’ by the plausibility of the other.

By adding extra details, you can make an outcome seem more characteristic of the process that generates it. We are susceptible to weaving contrived narratives in our heads that sound more plausible the more threads we weave into them. We have to look back and remind ourselves of the difference between the sources and our own additions. We have to hold up every detail of our intricately woven accounts independently and ask, “How do I know this detail?”
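Put numerically, every ‘and’ multiplies in another factor of at most one, so each added detail can only lower the probability of the whole account. A toy Python illustration of the black-hole example above, with invented probabilities:

```python
# Each conjunct multiplies the story's probability by a factor <= 1,
# so the detailed story is never more probable than its bare version.
# All probabilities below are invented for illustration.
story_probability = 1.0
details = [
    ("universes replicate", 0.10),
    ("... via black holes", 0.40),
    ("... manufactured by advanced civilizations", 0.20),
    ("... because universes evolved to make them do it", 0.30),
]
for claim, p in details:
    story_probability *= p
    print(f"{claim:55} cumulative probability: {story_probability:.4f}")
```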

Yudkowsky refers to futurologists and their tendency to weave intricate details into their future projections, but the same applies to historians. The ‘neater’ and more detailed an account of history sounds, the less probable it likely is. A picture of a garden with a garden gnome may be more interesting to look at than one without, but if the map does not correspond to the territory, can we claim to be scientific even to the limited extent historians usually can?

To avoid this bias – which seems stupidly obvious yet keeps tripping our minds up whenever we are not mindful of it – Yudkowsky recommends noticing the word “and” and being wary of it. It is easy to get carried away with narratives that sound plausible and neat, and to pat ourselves on the back in the process for spotting the connection. But if there is no evidence of a connection in the first place, our heuristics are just a burden on our quest for the truth.

To win the game of heuristics, we need to begin with the shortest, least detailed answer and assign it the highest probability among potential answers. Only then can we turn on our plausibility radar and start guessing what other factors may have been present, as long as we remain mindful of the difference between plausibility and probability.


¹ Yudkowsky, Eliezer. Rationality: From AI to Zombies. Berkeley: MIRI (2015), 26–29.

² Tversky, A., and Kahneman, D. 1982. “Judgments of and by Representativeness.” Pp. 84–98 in Kahneman, D., Slovic, P., and Tversky, A., eds., Judgment under Uncertainty: Heuristics and Biases. New York: Cambridge University Press.

The pitfalls of available sources

In this essay series, I write down my own thoughts on Eliezer Yudkowsky’s essays in the Rationality: From AI to Zombies series from the point of view of a historian. My reason for writing these is primarily to organize my own thoughts regarding the use of rationality as a mental tool in my own profession, and as such, I do not presume to appeal to a very wide audience. However, if you are reading this disclaimer and find my essays insightful or entertaining, the more power to you, and I implore you to go and read the original essays, if you have not already.

 

In his essay Availability¹, Yudkowsky defines the availability heuristic as the “judging [of] the frequency or probability of an event by the ease with which examples of the event come to mind.”

This means both that we think deaths by accident are more frequent relative to deaths by disease than they actually are, and that we compare our own lot to the rich and famous, because they are the ones everyone is talking about.

“The objective frequency of Bill Gates is 0.00000000015, but you hear about him much more often. Conversely, 19% of the planet lives on less than $1/day, and I doubt that one fifth of the blog posts you read are written by them.”

This heuristic bias not only makes us more likely to be anxious and jealous in the present; it also affects the work of everyone who studies people, past or present. The modern man receives information through several selective filters (how likely people were to share the news, whether an algorithm considers the news potentially interesting to him, and so on), and most people are not very mindful of this fact. Those who are, however, can try to work around forming too heavily biased heuristics by diving into the cornucopia of information available to us at all times and formulating a more balanced view of any given issue.

Once again, this is a luxury that historians have in at best a very limited capacity. Some periods of history are a regular desert of information, and whatever new research gets published tends to be about looking at the few available sources from a novel angle. That is not a problem in itself, but it becomes precarious when one tries to draw too far-reaching conclusions from them in one’s thirst for answers about the society from which the sources originated. For a hypothetical example, imagine trying to answer questions about how a regular farmstead wife experienced her daily life based on a single source written by a monk in a monastery a few villages over. Mind you, I am not saying we should avoid topics about which we cannot make very informed analyses, but it is good to remain mindful of just how poorly informed we are.

This is an issue that anyone who studies illiterate populations has to face, and it is likely the biggest contributor to why historians did not concern themselves much with peasants before the 20th century. The written word is a potent filter in and of itself, and as time passes, further filters get added as each era decides what is worth preserving for the future. Small wonder that most historical sources concern the highborn and educated – and so do most histories. Why study the silent poor, when you could say so much more with confidence about Bill Gates?

When it is not a question of having only a handful of sources available, the issue begins to much more closely resemble the predicament of our modern lives: we are more likely to recall dramatic or interesting events and thus presume they were more frequent than they actually were. It also relates to the temptation I have mentioned before, of finding garden gnomes where there are none. In my opinion, this is a much bigger issue than making broad generalizations based on the writings of a couple of monks. At least when the problem is a dearth of sources, the scientific community tends to be pretty good at taking it into account when assessing new research.

When one studies a period whose sources are too numerous for any single person to comb through, the intrinsic human tendency to give more weight to the shocking and dramatic poses a bigger threat to how that period is perceived. Consider any common-sense idea about how violent and intolerant people used to be in any given era, and it becomes easy to see why such ideas have become stuck in the cultural consciousness.

Most likely the reality was more boring than our ideas of it, and as professionals it would be prudent of historians not to become yet another filter between reality and the broader audience, propagating images of our ancestors as more ludicrous and wild than they were just because we like to remember the juicy bits.

 


¹ Yudkowsky, Eliezer. Rationality: From AI to Zombies. Berkeley: MIRI (2015), 23–25.

Pasta and Bias

In this essay series, I write down my own thoughts on Eliezer Yudkowsky’s essays in the Rationality: From AI to Zombies series from the point of view of a historian. My reason for writing these is primarily to organize my own thoughts regarding the use of rationality as a mental tool in my own profession, and as such, I do not presume to appeal to a very wide audience. However, if you are reading this disclaimer and find my essays insightful or entertaining, the more power to you, and I implore you to go and read the original essays, if you have not already.

 

“What is true of one apple may not be true of another apple; thus more can be said about a single apple than about all the apples in the world.”

This quote can be found in Yudkowsky’s essay …What’s a Bias, Again?¹, in which he loosely sketches the category of cognitive biases and why they can – to simplify – be viewed as an obstacle in the quest to find truth. It is also an excellent quote for historians – myself especially – to remember. I tend towards easy generalizations and intuitively prefer to paint with broad strokes, and I know I am not the only one tempted by the faulty-generalization bias.

If you go to Wikipedia and look up the list of cognitive biases, the wide variety of packages they come in becomes apparent. As such, it is not useful to try to distill one unifying feature out of all of them; in reality, each needs to be acknowledged separately and deliberately if one wishes to overcome its effects when checking the results of one’s reasoning. The best way to describe cognitive biases, as Yudkowsky does, is as errors in our reasoning arising from the shape of our own mental machinery. It is not that the machine is broken or lacks energy; it is just that it was built to make spaghetti when what we would like now is tagliatelle.

Improving one’s reasoning capabilities helps us avoid biases by giving us a kind of checklist to go over when we re-check our reasoning and conclusions to see if they actually make sense. Do we have tagliatelle, or did we just make spaghetti again, because that is what tends to happen when you crank the lever? The first draft of our thinking cannot rid itself of bias, but when we look at the pile of spaghetti mindfully, we can remind ourselves of the differences between the various types of pasta. Only then can we grab a roller and begin to reshape the outcomes of our reasoning into something that serves our original purpose. As with spaghetti and tagliatelle, the results of our thinking are usually not so far off from what we were trying to achieve as to be unsalvageable. They just need a bit of work.

There is value in trying to obtain the most truthful answers to our questions about history, but historians should never forget that we are most likely going to be wrong in one way or another. What always strikes me when I read news reports of events where a reporter was actually present and several witness accounts were heard is just how often they still manage to botch the representation of the events. Historians are at this same task, only we were never there and did not even directly talk to the people who were. How likely is it that our reports on past events would leave the actual people involved aghast at their misrepresentation?

The number of possible untruths is infinite, and truth itself is a difficult target to hit even when you have the chance to go back and empirically test whether your reasoning was sound. We historians have to do without this luxury. All we have are the garden views through our select and smudged windows – and often not even that, just pictures of the views drawn by someone else. Nothing short of a new window, or the removal of a stain from an existing one, can give us a chance to check someone’s reasoning in filling in the blanks of their drawing. So our chance of being mistaken is already far greater, without even accounting for the fact that testing for bias is trickier for us than for most other scientists.

But that’s alright; after all, what’s the worst that could happen? Luckily for us, nobody’s life hangs (at least directly) on representations of history, and there are at least a dozen of us who can give a second opinion if someone makes a particularly unfounded and egregious claim. So we should be bold and try.

Scientific truth-seeking is a recent endeavor, set against the hundreds of thousands of years during which cognitive biases served humans well in everyday life. Therefore, we will have to make do with the spaghetti-making machine that is our brain, and see to it that we do not just leave it at that if what we actually seek is tagliatelle.

 


¹ Yudkowsky, Eliezer. Rationality: From AI to Zombies. Berkeley: MIRI (2015), 19–22.

What motivates historians

In this essay series, I write down my own thoughts on Eliezer Yudkowsky’s essays in the Rationality: From AI to Zombies series from the point of view of a historian. My reason for writing these is primarily to organize my own thoughts regarding the use of rationality as a mental tool in my own profession, and as such, I do not presume to appeal to a very wide audience. However, if you are reading this disclaimer and find my essays insightful or entertaining, the more power to you, and I implore you to go and read the original essays, if you have not already.

 

Truth: 1. The quality or state of being true.

1.1 That which is true or in accordance with fact or reality.

1.2 A fact or belief that is accepted as true.

 

Accurate knowledge of scientific truths of the (1.1) kind lets us manipulate the world, but no space shuttle is going to crash based on whatever we think is historically true, as historical truths all fall neatly into the (1.2) category of truths. What, then, drives historians to seek the truth, if it is only ever going to be our best guess, with no significance in the grand scheme of things?

Again I return to the analogy of gardens, windows, and artists, as I probably will in the future on this blog as well. To reiterate: gardens are temporal realities in history, windows are the sources through which we peer at historical events and contexts, and historians are the artists who draw what they see through the window, filling in the blanks where the view is smudged or obscured. These pictures are both for colleagues and for wider audiences.

In his essay ’Why Truth? And…’¹, Yudkowsky proposes three separate motivations for truth-seeking: curiosity, pragmatism, and morality. All three come into play when historians decide which window to peer through and start sketching to share the view with others. However, I would argue that curiosity plays the most significant role in this process for historians.

The pragmatic motivation drives historians to focus on topics that interest not just ourselves but others as well (especially if they are willing to pay us), so that we may keep food on our tables. On a wider scope, we cannot boast much pragmatic utility: nothing we may discover has the kind of practical use that discoveries in the natural sciences yield. Even if historians discovered irrefutable evidence that Hitler was secretly a Finnish man in cahoots with Mannerheim and the rest to build a Greater Finland, it would not necessarily motivate society to change anything about where it is heading. Instead, the pragmatic concerns of agenda-driven agents would often best be served if history were shrouded in mystery, as this gives more leeway for lay interpretations that can be exploited for political gain.

In countering these agents lies the morality-driven motivation of historians. We should be the ones to discover and bring to light the truth of historical events and ideas, so that they cannot be twisted to serve whatever narrative is trending at any given moment. The problem with morality-driven research is that the historians with a very clear sense of moral duty often also have specific expectations of what they will discover. As such, they need all the more integrity not to become the monster and twist their findings to suit their own agenda when what they find is not what they expected. You should never set out to do research to prove someone wrong, and serious self-reflection should always be practiced when a historian analyzes what draws them to a specific topic.

Curiosity is the one motivation that cannot be removed from the study of history: the gardens are innumerable, and the choice of which one to focus on almost always rests on our curiosity. Sometimes convenience overrides curiosity, but more often than not, historians tackle vistas that are trickier to interpret simply because we are curious about what we will discover. The pitfall of curiosity is finding that there was nothing of significance there after all. As the picture unfolds, a historian may realize that it contains nothing of interest even to themselves, and will certainly not catch the attention of anyone else. Finishing the picture can become tedious in this case, and the temptation to add in garden gnomes may become overpowering.

In this case, the curious historian does well to borrow a page from the morality-driven historian’s playbook and remember that history still attempts to be science rather than literature. We are here to discover the truth, even if it is only the soft (1.2) kind.

 


¹ Yudkowsky, Eliezer. Rationality: From AI to Zombies. Berkeley: MIRI (2015), 15–18.