What Constitutes X to You?

After reading Yudkowsky’s essay on tabooing your words to figure out whether you and another person are actually disagreeing about something, I felt I needed to update on that topic. I highly recommend reading Yudkowsky’s original essay, as the exercise is well worth adding to anyone’s toolbox.

While I still do not know how to efficiently battle ideas built upon a network of air, Yudkowsky’s Taboo Game provides a systematic tool for getting out of some of the disagreements one encounters in academic circles. It takes a few minutes longer than agreeing to disagree, but it also has the potential to resolve many pending or long-standing disagreements between two people who would like to retain mutual respect for each other without always having to avoid that one subject.

Here is an example of the game:

“Albert: ‘A tree falling in a deserted forest makes a sound.’

Barry: ‘A tree falling in a deserted forest does not make a sound.’

Clearly, since one says ‘sound’ and one says ‘not sound’, we must have a contradiction, right? But suppose that they both dereference their pointers before speaking:

Albert: ‘A tree falling in a deserted forest matches [membership test: this event generates acoustic vibrations].’

Barry: ‘A tree falling in a deserted forest does not match [membership test: this event generates auditory experiences].’

Now there is no longer an apparent collision—all they had to do was prohibit themselves from using the word sound.”¹

The point is that focusing on labels (sound vs. not-sound) leaves you thinking there is a disagreement, while proposing a test (vibrations vs. hearing) cuts straight to the difference in definitions, without having to step outside the original question each time a label is encountered. I recently resolved a surface-level disagreement with a friend over the word ‘murderer’ by telling her my own membership test for it, after which she conceded that ‘killer’ worked just as well for her in the case at hand – at which point we agreed again.
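For readers who think in code, the membership-test move can be sketched programmatically: instead of arguing over one ambiguous label, each party supplies an explicit predicate, and the apparent contradiction dissolves into two compatible claims. A minimal sketch in Python (the function names and event fields are my own illustrative assumptions, not anything from Yudkowsky’s essay):

```python
# Two "membership tests" for the tabooed word "sound".
# Each predicate makes its definition explicit instead of
# hiding it behind the shared label.

def generates_acoustic_vibrations(event):
    # Albert's test: pressure waves exist regardless of listeners.
    return event["acoustic_vibrations"]

def generates_auditory_experience(event):
    # Barry's test: someone must actually hear the vibrations.
    return event["acoustic_vibrations"] and event["listener_present"]

# The disputed event: a tree falls, nobody is around.
falling_tree = {"acoustic_vibrations": True, "listener_present": False}

# With the label tabooed, both claims are simply true at once:
albert_claim = generates_acoustic_vibrations(falling_tree)        # True
barry_claim = not generates_auditory_experience(falling_tree)     # True
print(albert_claim and barry_claim)  # no contradiction left
```

The point of the sketch is only that the "contradiction" lived in the shared word, not in the two predicates, which never disagreed about any fact of the matter.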

If Albert and Barry agreed to disagree rather than pursuing the root of the disagreement, each would think the other ultimately wrong, which might subsequently lessen their respect for any other judgements the other makes. Focusing instead on what either party thinks it takes for a phenomenon to pass into the category of ‘sound’ leads to understanding and preserved mutual respect.

History has no questions that are closed to interpretation, and as such this tool would be especially useful for anyone in our field. Consider the following questions and imagine how much headache would be saved if, instead of arguing about the question, the participants in the debate substituted the key word with a membership test:

When did the Roman Empire collapse?

Is Christian religion the foundation of Western Culture?

Where did industrialization begin?

¹ https://www.lesswrong.com/posts/WBdvyyHLdxZSAMmoz/taboo-your-words

Voice Your Disagreement

In this essay series, I write down my own thoughts about Eliezer Yudkowsky’s essays in the Rationality: AI to Zombies series from the point of view of a historian. My reason for writing these is primarily to organize my own thoughts regarding the use of rationality as a mental tool in my profession, and as such, I do not presume to appeal to a very wide audience. However, if you are reading this disclaimer and find my essays insightful or entertaining, the more power to you, and I implore you to go read the original essays, if you have not already.

In Not Agreeing to Disagree, I mentioned Aumann’s Agreement Theorem, which proposes that no two perfectly rational agents can agree to disagree. From this theorem it follows that if two people disagree with each other, at least one of them must be doing something wrong or have limited data on the subject. I also brought up the Grievance Studies affair and how it relates to the primary issue I see with the social constructionist point of view as it is currently practiced in the humanities. Namely, my problem with this approach is that you can make any interpretation valid as long as you have gone through the effort of constructing a theoretical base upon which you can build entangled interpretations of the world. What gets forgotten here is that the original theory upon which all subsequent assertions are made is often built on nothing but air. It is exceedingly difficult to figure out anything factual about the world, and complicated theories built on the internal logic of inevitably biased researchers are just as inevitably bound to be, if not a straight lie, at least a perversion of reality.

“Once you tell a lie, the truth is your enemy; and every truth connected to that truth, and every ally of truth in general; all of these you must oppose, to protect the lie. Whether you’re lying to others, or to yourself.”¹

Research built on flaky theoretical bases is guaranteed to yield flaky results. In the empirical sciences there comes a watershed moment when theories get tested experimentally, and the flaky ones get washed away for failing to hold up to scientific testing. In many fields of the humanities, however, actual experiments are never conducted, and the ultimate watershed moment is the peer review process, where it is enough to make your reviewers think “this seems legit”. Logical consistency within the work itself, results that agree with what the reviewers would have liked to see, and all the minute things interfering with a reviewer’s better judgement come into play here when bad research gets passed into publication.

But one can put only so much blame on the reviewers if the entire trend of your field is to pull rabbits out of hats. And I don’t exactly blame the researchers either (at least I don’t assign full blame to individuals), because lying to oneself is easy enough even when you’re not practically encouraged to do it. I mentioned in the previous blog post how, at least from my point of view, there is an epidemic of unnecessary politeness among colleagues in academia. Even if you don’t agree with someone’s interpretation of the world or their research, people are more likely to say that everyone is entitled to their opinion than to boldly challenge those presumptions.

“A single lie you tell yourself may seem plausible enough, when you don’t know any of the rules governing thoughts, or even that there are rules; and the choice seems as arbitrary as choosing a flavor of ice cream, as isolated as a pebble on the shore . . .

. . . but then someone calls you on your belief, using the rules of reasoning that they’ve learned. They say, “Where’s your evidence?””²

Not a week ago, I was criticized by a colleague for my conduct (unrelated to research) long after the opportunity to amend my actions had passed. As I apologized and pointed out that it would have been nice to know there was a problem while I could still do something about it, they told me that one of the reasons for delaying was that giving negative feedback to one’s colleagues is so hard. And while I don’t doubt it is, we should be asking for evidence and rigorous explanations of how our colleagues ended up with the results and beliefs they now hold to be true, even when it’s hard. Otherwise we become enablers of a web of lies, which in turn can divert us and others from the path towards better maps of the territory.

“Think of what it would take to deny evolution or heliocentrism—all the connected truths and governing laws you wouldn’t be allowed to know. Then you can imagine how a single act of self-deception can block off the whole meta-level of truthseeking, once your mind begins to be threatened by seeing the connections.”³

¹ Yudkowsky, Eliezer. ”Dark Side Epistemology” in Rationality: from AI to Zombies. Berkeley, MIRI (2015). 338.

² Yudkowsky, Eliezer. ”Dark Side Epistemology” in Rationality: from AI to Zombies. Berkeley, MIRI (2015). 336.

³ Yudkowsky, Eliezer. ”Dark Side Epistemology” in Rationality: from AI to Zombies. Berkeley, MIRI (2015). 337.

The W, W, W, W, and W

During my most recent PhD candidate peer meeting, and again while interacting with my students, I began to consider the mental process of historians as they settle on a research question. Additionally, and quite crucially, when do they reach the point in the research process where they feel they have gathered enough evidence to confidently analyze it?

The natural sciences are both blessed and cursed by real-world demands and limitations when faced with these tasks. First, they mostly attempt to answer questions that will ultimately (and hopefully) benefit society by pushing our cumulative knowledge that little bit further in the margins of an already marginal sub-field. Second, the research questions of natural science articles often require less mulling over by the eventual author of the article and conductor of the experiments, as new discoveries in these fields tend to open a Pandora’s box of new questions for subsequent scientists to answer. Which of these questions ends up getting picked by any given scientist or lab team is then usually determined by their own limitations when it comes to funding, equipment, staff, etc.

For historians, the process of coming up with a research question is quite different.

Not only is history a vast ocean of unattainable truths, but whatever questions get picked and answered by historians rarely if ever have any tangible impact on our future. One could argue otherwise, but at the very least it seems obvious that the impact of a single historian is minuscule unless their interpretations get a boost of support from the research community at large, and then further clout from mainstream popularity. The baseline expectation for the societal impact of historical articles, relative to that of most other sciences, should be that people outside your own niche circles who read them will go: “Neat.”

The positive side of this insignificance is that we are quite free to do whatever we like. Even actual limitations regarding the availability of source material are usually only as restricting as we perceive them to be, and a creative and/or skilled writer will often make it seem like there never were any obstacles at all.

The downside of this freedom comes, as it often does, in the form of indecision. If there is a historian whose underlying motivation for becoming one wasn’t personal interest and passion, point them out to me, because I’ve never met one. But with passion come a lot of options, and the wider one’s pool of interests, the more difficult it becomes to just go ahead and focus on one thing. Moreover, once you’ve made the choice, there are pitfalls to a historical research question that should be considered well in advance of conducting any research. So many layers exist in any historical phenomenon that not keeping your eyes on the road can and will come back to haunt you later.

And this brings us to the Big Five ‘W’ Questions (Who, What, When, Where, Why), and why it is so essential to consider them carefully before anything else. By clearly defining an answer to each of these questions as they relate to your research topic, you shave off enormous amounts of excess sources, literature, and perspectives from the get-go. Figuring out your scope as soon as possible makes it much less likely that your precious grant-paid time will be wasted on dead ends and fascinating but ultimately pointless detours.

So let’s look at them:

The Who: By narrowing your research down to a very specific group of people (or objects, as is the trendy thing to do these days), you give yourself a free pass on considering the experiences of people who are superficially similar but fundamentally different. Essentially, in a room full of shouting people, you are deciding to focus on that one guy first and foremost. It’s a limited story, but at least it will be cohesive.

Regarding my own dissertation, this was one of the most crucial and useful early eliminations. Focusing on regular officers’ perspective of their identity and role, instead of all officers’, allows me to study a very homogeneous group of people with comparable backgrounds, while also letting me navigate the sea of sources with a very cutthroat mentality about whose accounts of the war I will give the time of day.

The What: I don’t think anyone can really avoid defining the specifics of what they are studying, but the scope of the ‘whats’ I have seen varies greatly. A good rule of thumb is to consider how easy it is to define the terms used in your research question. Terms like ‘meaning’, ‘influence’, and ‘experience’ are particularly precarious in how many ways they can be interpreted. Look at your research question while trying to imagine how it could be read: if two readers can form two completely different ideas of what exactly you are studying, your what needs some work.

The When: Perhaps the most obvious of the five for a historian to establish, lest they be pulled into the void. However, this is never as easy as it sounds, as historical events don’t tend to have a definitive beginning and end, on top of which you can’t get away with saying nothing about what led to the situation where your own study begins. It’s a pain to choose a cut-off point at either end, but it must be done. And it must be done sooner rather than later.

The Where: Along with the When, this limiter is usually one of the most obvious and luckily the easiest to define early on, while also yielding vast amounts of material that can now safely be discarded from the research process.

The Why: Finally, this is the most important question of all, both when deciding on a research question as well as when meta-thinking out your own motivations for embarking on the journey to answer it.

When considering your own motivations, knowing the why behind settling on your research question will help you navigate your own biases along the way. It will also reveal something about yourself, and at this stage, revelations such as “I’m just doing it for the money” or “I’ve just always wanted to find out” will help you keep the research project in the proportions it deserves.

As for the why of your research question, it is what defines historical research. If you answer all the above questions and leave out the why, you are a chronicler rather than a historian. And if you refuse to answer the why as a historian, rest assured someone else, likely with much narrower perspectives and more unhinged ulterior motives, will step in and do it for you.

Not Avoiding Debate

I just had a debate with an old friend about a certain current hot topic, one we vehemently disagree on but both feel quite strongly about. We have clashed over this issue before, but this time we both attempted to hear each other’s arguments out in long text format, in hopes of convincing the other of our own position. At least that is how I proposed it, and I hope that is how she saw the interaction as well.

Well, neither of us convinced the other, and I do not think my position shifted even slightly towards my friend’s point of view. Usually this is not a good result when the argument contains many nuanced points, and I am not exactly proud of being so resolute. Yet I am also tempted to defend myself by presuming I have made myself more thoroughly familiar with the topic, and as such simply know more of the general arguments, research-based facts, and statistics regarding it. I also feel my position has received more critical consideration, even against my own values and beliefs, compared to hers, as I have changed my mind about this topic once already after holding largely the same stance my friend still holds. I would like to think I have considered the topic rationally, and that despite no solution existing that would please everyone, the one I advocate involves the least suffering for all parties.

Anyhow, the point about the debate is that I tried to argue only against the stronger points of my opponent, without dwelling too much on the weaker ones. I also tried to argue in the abstract as much as possible, without pulling in anecdotal evidence or statistics even when I knew they supported my claim, in case they took the spotlight off the underlying points – which I felt stood strongly enough on their own even without concrete research.

”[W]hen it comes to spontaneous self-questioning, one is much more likely to spontaneously self-attack strong points with comforting replies to rehearse, than to spontaneously self-attack the weakest, most vulnerable points.”¹

I was rather hoping my friend would manage to shake me by attacking the weaker points of my position, since people are easily blind to these in themselves, yet I did not feel conflicted during the course of the debate. I felt I had good answers for everything she said. And I felt her arguments were exceedingly weak.

Still, I think it is rather good that we both have a clear position on this, and that neither is willing to just agree to disagree – a cop-out that people who can afford not to get invested often make to maintain harmony in their own social relationships. As long as you are willing to take a stance, you are putting yourself out there to be challenged, and there is the potential of changing your mind. I myself tolerate points of view I think are false, but if the question comes up, I never pretend that I find two contradicting positions equal. This gives rise to debates, and sometimes I do change my mind completely, or at least shift my stance a bit. I find it rewarding to have my beliefs questioned.

Even worse than agreeing to disagree, however, is pretending to look at an issue from a neutral perspective. Very few things in life are really matters of taste, so pretending not to have an opinion and feeling superior for it signals cowardice to me.

”It’s common to put on a show of neutrality or suspended judgment in order to signal that one is mature, wise, impartial, or just has a superior vantage point.”²

This applies more to real-life socio-political issues than to academic research, but in principle it holds there too. Even if the difference between two historical interpretations has few real-life implications, and as such does not demand fervor to inspire positive change in our environment, a person who is well versed in a topic yet hides behind “it’s complicated” just seems like a liar or a coward to me. So even though I found my friend’s arguments weak and unconvincing, she still receives more respect from me for engaging in debate than anyone who pathologically dodges arguments by posturing neutrality. There is a time and place for debates, sure, but at least in the context of academia I think that time and place is – almost always – now.

¹ Yudkowsky, Eliezer. ”Avoiding Your Belief’s Real Weak Points” in Rationality: from AI to Zombies. Berkeley, MIRI (2015). 318.

² Yudkowsky, Eliezer. ”Pretending to be Wise” in Rationality: from AI to Zombies. Berkeley, MIRI (2015). 61.


When truth is not the truth

One thing that I witness in my own faculty’s history researchers more often than I would like is what I would call the pathological crutch of relativism.

I recently gave my first-year history students the task of thinking about the concept of ‘truth’ as it relates to history, and in their answers I could see what this crutch has already done to them. They use the word truth interchangeably with fact, reality, experience, and belief, depending on the context. And while it is ‘sanctioned’ to speak of people’s personal experiences of the world as their own truths, to me it is an abomination. Truth is truth; anything else dilutes the word. There is a clear distinction between ‘past’ and ‘history’ – and somehow that difference is not difficult for historians to acknowledge. An equally great difference exists between ‘truth’ and ‘experience’, yet as historians use the word ‘truth’, its meaning gets muddled in everyone’s minds. For something to be deemed true, it should at least be open to experimentation – and nothing in history is. We cannot even be certain that what someone claims in a written source is their honest belief.

I believe the error of the pathological crutch of relativism, and the subsequent confusion about the word ‘truth’, follows the same cognitive paths as the adoption of relativism by beginning philosophy students, as described by Michael Rooney:

When confronted with reasons to be skeptics, they instead become relativists. That is, when the rational conclusion is to suspend judgment about an issue, all too many people instead conclude that any judgment is as plausible as any other.¹

It is painfully obvious to even beginning historians that we can never really know the truth of the past, and as such, the history we write cannot claim to be truthful in the way the scientific articles of our colleagues in experimental fields can.

I try to practice humility in my day-to-day life, not just with regard to my profession but in everything else as well. It helps me appreciate everything a bit more, while also giving me a reason to strive forward. Admittedly, I have the help of a healthy amount of natural ambition and confidence to fuel this process, but I am quite sure that anyone would benefit from reminding themselves from time to time that no matter how knowledgeable or important they feel, there is a mountain of knowledge still left for them to learn.

I try to maintain a mental picture of myself as someone who is not mature, so that I can go on maturing.²

I believe historians have a bit of an inferiority complex because of the way our discipline is categorized, standing against and among the other sciences. Laymen consider it an academic pursuit requiring a degree of scientific rigour, and historians are expected to bear the burden of evidence and adhere to the truth to the best of their abilities. Yet we are not dealing with truths, not by a long shot. We are trying to figure out what was true by examining sources from unreliable narrators with flawed perception, flawed reasoning, and conscious agendas. We have to make guesses about the truth based on these accounts, and our guesses can never be verified.

This, I think, leads to the epidemic of sage proclamations that “multiple truths” exist in history. Referring to personal beliefs and experiences of the world as “truths”, and further allowing one’s own interpretation of the sources to be its own valid version of “truth”, makes it all seem more credible and soothes one’s doubts about whether what we do makes any sense.

The problem is that beliefs, no matter how fervent, are not ‘truths’ in the sense in which the word is most commonly understood. There is a reason the word ‘belief’ exists, and it irks me to see beginning history students confusing themselves by attempting to understand the relation of truth and history while using the word in multiple different ways.

¹ https://www.lesswrong.com/posts/AdYdLP2sRqPMoe8fb/knowing-about-biases-can-hurt-people?commentId=Z9LacBsgsH7cPAnhu

² https://www.lesswrong.com/posts/rM7hcz67N7WtwGGjq/against-maturity

Remember the Direction of Causality

In his essay Three Fallacies of Teleology, Yudkowsky goes over Aristotle’s four senses of the word aition:

“These were his four senses of aitia: The material aition, the formal aition, the efficient aition, and the final aition.

The material aition of a bronze statue is the substance it is made from, bronze. The formal aition is the substance’s form, its statue-shaped-ness. The efficient aition best translates as the English word ‘cause’; we would think of the artisan carving the statue, though Aristotle referred to the art of bronze-casting the statue, and regarded the individual artisan as a mere instantiation.

The final aition was the goal, or telos, or purpose of the statue, that for the sake of which the statue exists.”¹

Within most natural sciences, telos can be ignored completely in favour of the efficient aition, because the objects of study are unconscious particles or systems. In modern academic circles, it is commonly understood that the final aition is reserved for intelligent agents (humans), who can at least seemingly make decisions about their actions based on an abstract future goal. As such, even if the future does not directly affect present actions, beliefs about a possible future certainly do.

That being said, people’s cognitive motivations should be given very sparing credit for what actually drives their behaviour. Evolution has no foresight; it only takes the next greedy local step. While humans do have a concept of cognitive cause, our evolutionary adaptations have had far longer to wire themselves into our systems than any of our current plots, and as such the next unanticipated greedy step towards feel-good hormones is likelier to throw a wrench in our plans than we think. Not to mention that, in general, people are not very good at planning or at accurately assessing the motives of the people around them.

“Cognitive causes are ontologically distinct from evolutionary causes. They are made out of a different kind of stuff. Cognitive causes are made of neurons. Evolutionary causes are made of ancestors.”²

Because of our cognitive machinery, humans tend to look at an outcome and search for a path that led there, as if the decisions along the way were made with the explicit, unifying purpose of arriving at that ultimate destination. This line of thinking is difficult for us historians to avoid, because our study subjects were intelligent agents with some foresight and sense of purpose when choosing their course of action. However, telos is such a complicated concept that it can be detected even semi-reliably only in individual persons, and even then one should keep in mind how rarely our own plans end up exactly where we intended them to go. Regardless of where we end up, unless we are good diary keepers, we are also undercut by the biases of narrative memory, which make it retrospectively feel like we were more on track all along than we actually were. There are no historical masterminds, and people are much more reactive than they think.

“The third fallacy of teleology is to commit the Mind Projection Fallacy with respect to telos, supposing it to be an inherent property of an object or system. Indeed, one does this every time one speaks of the purpose of an event, rather than speaking of some particular agent desiring the consequences of that event.”³

Historians do this all the time, and really, how could we not? History is constructed and understood in narratives, and a narrative needs to be compressed to be compelling. We are also writing for an audience whose brain machinery is just as biased as our own, so pandering to these biases will make the text resonate better with them. Thus we end up saying that the purpose of the suffragette movement was female emancipation, or that the purpose of the 13th Amendment was to abolish slavery forever – even though these are in truth gross oversimplifications even of the various motives of any single agent advocating for these things.

Using language like this is understandable, but we should seek more accurate ways to express ourselves whenever possible, lest we lead our own thinking astray. The shape of our words influences our thinking, so we should strive to be as clear as possible, even if that means giving up some rhetorical impact.

And if one subscribes to a causal universe without free will, where the future is ’predestined’ by the links of causation, one should always keep in mind that the causal arrow only works in one direction.

¹ https://www.lesswrong.com/posts/2HxAkCG7NWTrrn5R3/three-fallacies-of-teleology

² https://www.lesswrong.com/posts/epZLSoNvjW53tqNj9/evolutionary-psychology

³ https://www.lesswrong.com/posts/2HxAkCG7NWTrrn5R3/three-fallacies-of-teleology

Historians should not try to be Screenwriters

Historians deal with such vast sets of data that it is very difficult to keep oneself from reaching conclusions prematurely, especially if the research interest was motivated by a pet hypothesis in the first place. Our assumptions guide us even before we have decided on a single source to study, and this process tends to feed itself the further we go and the more pieces of evidence we find in favour of either our hypothesis or the first plausible narrative we latch onto.

Yudkowsky calls those who write down the bottom line first clever arguers, as opposed to curious inquirers.¹ The clever arguer begins by writing the conclusion they wish to reach, and then accumulates evidence and constructs arguments in support of their preferred bottom line. This is also called rationalization, which, despite the name, has nothing to do with the process of rationality – Yudkowsky would prefer to call rationalization a “giant sucking cognitive black hole” to avoid confusion between the two. As far as cognitive biases go, it is one of my ‘favourites’, since it is so ubiquitous in everyday life. Curious inquirers, meanwhile, first gather all the evidence and only then write down their conclusions.

As no one actually starts by physically writing down the bottom line, and historians usually do spend quite a large amount of time going through sources both for and against their hypothesis, it is easy to flatter ourselves into thinking we are curious inquirers. However, if we look critically at our own thought and decision processes, it is easy to see that this is often not so. And though that is bad, it is not the end of the world.

“Most legal processes work on the theory that every case has exactly two opposed sides and that it is easier to find two biased humans than one unbiased one. Between the prosecution and the defense, someone has a motive to present any given piece of evidence, so the court will see all the evidence; that is the theory.”²

This seems to be the way science is mostly done, especially in the humanities. Even though it would be better for people to be curious inquirers, it is good enough that we have enough biased arguers, as long as they are on opposing sides, so that the reader may find all the relevant pieces of evidence between them. The problem with this situation is that people are unlikely to seek out information that would disprove their own hypothesis, so instead of academic readers getting many sides of the argument, they usually just choose a camp and read and cite articles from their side accordingly, ignoring the other side’s contributions. And as much as that is partly on them, wouldn’t you rather be one of those curious inquirers, instead of just getting cited by people who already believed your point before reading a word of your research, to prove their own point? Or do you want to be a tool?

In the hard sciences, it is easier to counteract the pull towards becoming a clever arguer, because hypotheses can be tested with novel experiments. It is easier for a scientist to change their mind, or to be driven by curiosity, when there is entirely new evidence to be found. Alas, in history not only can we never test our hypotheses, but new sources significant enough to shake anyone’s convictions are very rarely found. In reality, all historical conclusions are probability estimates with no way of empirically testing whether the estimate is correct. The validity of historical arguments is in practice measured by how convincing a case the historian can make, and most people are not in a position to notice if key pieces of evidence are being ignored by a clever arguer. If you decide which sources to include based on whether they are favourable or unfavourable towards the narrative you are trying to construct, you have become a clever arguer. Yet it is so tantalizing to do this with historical narratives, as it is exactly what one is expected to do with all creative forms of narrative entertainment. When writing compelling fiction, you are supposed to subtract whatever doesn’t drive the plot forward, and to keep your themes and messages as consistently aligned as possible, to make the experience enjoyable and approachable for the reader or viewer.

So why not just succumb to the allure of becoming a clever arguer, like almost everyone else?

Why, because I did not get into academics to become a tool, or the ‘second-best solution’ for conducting academic debates. I’d rather my research stand on its own rational merits, even at the expense of the overarching narrative.

“You cannot obtain more truth for a fixed proposition by arguing it; you can make more people believe it, but you cannot make it more true.”³

I for one do not like to form hypotheses before I set out to study something – of course I have a general idea of what I am going to find, but at least so far I have not been very attached to my preconceived notions and am simply curious to find out what my sources reveal. Not every change is an improvement, but every improvement is necessarily a change. If, after spending N amount of time doing research, I were to reach the same conclusion I held when I started, I would be incredibly disappointed.

¹ Yudkowsky, Eliezer. “The Bottom Line” in Rationality: From AI to Zombies. Berkeley: MIRI (2015), 302–304.

² Yudkowsky, Eliezer. “What Evidence Filtered Evidence” in Rationality: From AI to Zombies. Berkeley: MIRI (2015), 307.

³ Yudkowsky, Eliezer. “Rationalization” in Rationality: From AI to Zombies. Berkeley: MIRI (2015), 310.

Keep It Simple, Stupid!

As I was planning a twenty-minute teaching demonstration for my university pedagogy course, I came across something I want to share lest I forget it again.

It happened as I was rummaging through some of the old slides I had stored from lectures from a time when I was only starting my path as a student of history. One of these sets of slides was put together by a very well-known historian in Finland, Markku Kuisma. The slides themselves are ugly as sin; Kuisma has since retired from his position at the university and is a product of another age, having made his career during a time when the visual aesthetics of lecture slides were not on anyone’s priority list. At any rate, I found that the content of the slides still checks out, especially in regard to two matter-of-fact rules of conduct for historians, shortened into convenient abbreviations.

These rules were KISS and SS.

KISS stands for “Keep It Simple, Stupid!”

I find it a refreshingly blunt take on the importance of historians using Occam’s Razor in our work. The simplest possible explanation is usually the likeliest one when trying to figure out the motivations of historical actors, even if it is rarely the most interesting. Finding ways to subscribe to already existing conspiracy theories – or inventing your own – is intellectually very rewarding for curious-minded people, but absence of evidence is actually evidence of absence.

“If E is a binary event and P(H|E) > P(H), i.e., seeing E increases the probability of H, then P(H|¬E) < P(H), i.e., failure to observe E decreases the probability of H. The probability P(H) is a weighted mix of P(H|E) and P(H|¬E), and necessarily lies between the two.”¹

Conspiracy theories usually rely on multiple real-life things being entangled together in a convoluted way which, simply by the sheer number of supposed pieces of ‘evidence’, starts to make sense. However, very concrete pieces of evidence that would be difficult to explain in a simpler way are exceedingly rare, and in probability theory, absence of evidence is evidence of absence. Secrets are difficult to keep, and the difficulty only increases the more people are involved and the more water passes under the bridge.
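The quoted identity is easy to verify numerically. The following is a minimal sketch with illustrative, made-up probabilities (the specific numbers are my assumptions, chosen only for demonstration): if observing E would raise the probability of hypothesis H, then failing to observe E must lower it, and the prior P(H) sits between the two posteriors.

```python
# Illustrative numbers only: a hypothesis H and potential evidence E.
p_h = 0.3              # prior P(H)
p_e_given_h = 0.8      # P(E | H): the evidence is likely if H is true
p_e_given_not_h = 0.2  # P(E | ~H): and unlikely otherwise

# Total probability of observing E, and Bayes' rule in both directions.
p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)
p_h_given_e = p_e_given_h * p_h / p_e              # posterior if E is seen
p_h_given_not_e = (1 - p_e_given_h) * p_h / (1 - p_e)  # posterior if E is absent

# The prior is exactly the mixture of the two posteriors, weighted by
# how likely each observation was ("conservation of expected evidence").
mix = p_e * p_h_given_e + (1 - p_e) * p_h_given_not_e

print(f"P(H|E)  = {p_h_given_e:.3f}")   # above the prior
print(f"P(H|~E) = {p_h_given_not_e:.3f}")  # below the prior
print(f"mixture = {mix:.3f}")           # equals the prior, 0.3
```

With these numbers, seeing E would push the probability up to about 0.63, so its absence must drag it down to about 0.10; the weighted average recovers the prior of 0.3 exactly.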

SS is an abbreviation of the Finnish words ‘sähläys’ and ‘sattuma’, which mean ‘fussing about’ and ‘coincidence’ respectively. The rule exists to keep historians from attributing too much agency to historical actors: they were likely more reactive than even they themselves believed, and most results of complicated causal chains can be attributed to something that can adequately be described as coincidence.

Considering this rule, and speaking of razors, Hanlon’s Razor is another very useful and yet woefully underutilized reasoning tool for historians:

“Never attribute to malice that which can be adequately explained by stupidity.”²

Because of hindsight bias, it is easy to see multiple possible paths a rational agent could have taken to reach the outcomes recorded in history, had they been of a mind to wreak some havoc. However, as I have stated before, masterminds – evil or otherwise – are exceedingly rare, even when speaking of one’s ability to conduct one’s own life. Trying to extend one’s influence to the probabilities surrounding oneself in situations where multiple people and bureaucracies are involved makes it almost impossible to reliably plan a course that would play out the way one envisaged. Even if one shouldn’t fall into the trap of considering people idiots either, stupidity is still a much more common trait than thought-out malice. Most of human life is about reacting to situations (i.e. fussing about), and this in turn sometimes leads to coincidences that work out in one’s own favour. These people are then likely to construct narratives of their own lives in which they really believe they were the orchestrators of their own success, which in turn produces primary sources where an uncritical historian might take someone’s word for how the events and thought processes leading up to something occurred. And this lies at the heart of why one shouldn’t forget the rule of SS.

It is striking to me that although these rules were introduced very early on our path toward a history degree, at least for my academic generation, I could not even remember having heard them before I went back to revisit the old slides. I cannot imagine that many other students took them to heart either, which is a shame.

¹ Yudkowsky, Eliezer. “Absence of Evidence Is Evidence of Absence” in Rationality: From AI to Zombies. Berkeley: MIRI (2015), 107.

² https://rationalwiki.org/wiki/Hanlon’s_razor

Fiction and Nonfiction

History is not like most scientific disciplines in that it really does read like prose a lot of the time while seeming to only borrow some notes from the academic writing playbook. This is also – at least in my experience – the encouraged direction to take with one’s writing. I’ve mentioned before the old cliché discourse of whether history is more a genre of literature than a scientific discipline, and my position regarding it remains unchanged. I consider history a blend of both fiction and nonfiction, with the nonfiction parts enjoying varying and often low degrees of falsifiability. One can quite confidently state that the Chernobyl disaster did in fact happen, and tangible evidence of it will remain there for curious minds to collect for years to come. However, when we step into the realm of private acts, or go even further and consider the lived experiences of historical actors, I do not consider it such an insult for someone to suggest that we peddle fiction. We understand little enough of our own experiences in the present moment, and there is no way one could ever falsify beyond reasonable doubt an interpretation of what Alexander the Great actually felt towards Hephaestion. All we can do is own up to our own interpretations; make it clear when we are painting the garden as we see it from behind our window and when we are assuming gnomes in a blotch that has vague hues of blue and red to it.

On top of all that, owing to the vastness of history, a historian cannot escape selective narration in their writing and all that it implies. If we follow Yudkowsky’s proposed definition that nonfiction conveys knowledge while fiction conveys experience, then history is always a blend of the two.¹ We want to transmit knowledge of the past to our audience, but this knowledge in itself often feels hollow unless you can at least imagine what it implied for the people to whom it had some tangible significance. Additionally, even the process of choosing what knowledge to include has the author making conscious narrative choices, since there is never a clearly contained set of data that warrants full disclosure while other data could be nonchalantly ignored. Everything is linked and nothing is obviously irrelevant.

This mixture of nonfiction and fiction is especially central to my own current research (and my overall research interests), as trying to understand past experiences is precisely at the center of my studies. I know I can extract no falsifiable factual knowledge pertaining to my research question from my sources, but by trying to understand my subjects on their own terms and by reflecting upon my own knowledge of psychology, I try to translate what I think their experiences were into a narrative for modern readers. In other words, I am trying to convey experiences and thus dipping my toes into the realm of fiction, even if my interpretations are based on historical sources and real people instead of being completely made up. I do not feel any lesser as a scholar because of this.

Considering this fine line between fiction and nonfiction in our writing, historians should be especially wary of further obscuring where the line is drawn. It should be apparent to the reader where the change from listing factual data to making non-falsifiable interpretations happens. The problem is that we often want to write enticing text that has a proper impact on our audience, much like novelists do. Unlike novelists, however, we often have our word taken as fact, because of our position of authority as experts. Lay readers especially do not stop for a minute to think about the words beyond the initial impression and do the legwork of separating the falsifiable from the non-falsifiable. We should be responsible and make the distinction obvious enough for any reader to understand.

“Muddled language is muddled thinking.”²

In academic circles we are surrounded by polite people with impostor syndrome, so it is a rare occurrence that one gets called out for mystifying language outside of workshops dedicated to improving one’s writing skills. More often, academics will try to see if there might possibly be anything meaningful in a misleading phrase, giving you the benefit of the doubt and interpreting their lack of understanding as their own flaw rather than yours. You cannot be this lenient with your own writing. If you cannot be sure of what you mean, or can imagine multiple interpretations for a sentence within your work, the sentence needs to go.

A good way to start approaching this is by writing your first, second, and third drafts (at least) in as simple language as possible. Clarity is king, and you should never lose sight of where the line between fiction and nonfiction runs in your own work.

“If you simplify your English, you are freed from the worst follies of orthodoxy. You cannot speak any of the necessary dialects, and when you make a stupid remark its stupidity will be obvious, even to yourself.”³

Additionally, to make your stupid remarks even more obvious, use a silly font like Comic Sans if you can bear it while working on your text. Times New Roman is like wearing a suit while Comic Sans is a clown face – these first impressions of how your text looks will matter by the time you submit it to a journal, but while working on it you should let the content of the message reign sovereign.

¹ Yudkowsky, Eliezer. “Rationality and the English Language” in Rationality: From AI to Zombies. Berkeley: MIRI (2015).

² Yudkowsky, Eliezer. “Human Evil and Muddled Thinking” in Rationality: From AI to Zombies. Berkeley: MIRI (2015).

³ Orwell, George. “Politics and the English Language,” Horizon (April 1946).

Understanding is not Contamination

Most people can be understood as having lived morally grey lives; virtuous in some respects and desperately lacking in any decency in others. These qualities tend to be especially polarized in people in powerful positions. The further one’s own society and personal opinions progress towards values that differ from whatever values were held and recognized in any given historical context, the darker the shades of grey seem at first glance to the modern historian.

We cannot escape the fact that the people we study were multi-layered individuals who made choices with moral implications for various reasons ranging from the sociological to the psychological. People have many reasons for doing the things they do, and the consequences of their actions have an even broader set of possible permutations than they may have considered as they acted. Selfishness and altruism can and do exist within the same individuals, and people often miscalculate how their actions will reverberate in their surroundings after being set into motion.

What is important to remember is that nobody thinks of themselves as the villain.

Most people construct their life stories with themselves as the hero. Our narrative memories are built around our need to rationalize away everything that might cause cognitive dissonance. Clairvoyants do not exist, and people are notoriously bad at anticipating the future consequences of present actions. Based on our own set of values, we navigate towards what we ourselves consider to be right and good. When people write down their stories, you can be sure that if they describe in detail an incident that seems to paint them in an unfavourable light, they did not think of it that way at the time. Raw honesty and confessional documents exist, but even when the intention of the document is self-flagellation, these accounts have still been forced through the author’s self-preserving cognition, which is always working to rationalize and make excuses. We cannot assess ourselves objectively, even in relation to our own abstract moral code.

One of the most important things to me personally as an academic is arguing in good faith. Whenever possible, I try to imagine the best possible intentions for colleagues, for the people I study, and for whomever I disagree with. Admittedly, within today’s political climate it is not so rare that people enter arguments mainly to ‘destroy’ the other side and thereby score points for their own.

“Politics is an extension of war by other means. Arguments are soldiers. Once you know which side you’re on, you must support all arguments of that side, and attack all arguments that appear to favor the enemy side; otherwise it’s like stabbing your soldiers in the back—providing aid and comfort to the enemy.”¹

However, as long as this is not the only viable interpretation of a situation, I refrain from it. Most often, construing motivations that make your opponent look malevolent results in you being woefully wrong about what is actually going on inside their head. Moreover, this strategy will win you nothing in terms of personal growth. This is true when you are dealing with people living today, but in a sense it is even more important for a historian. Not only are we often studying people who have been dead for a long while and cannot defend themselves, but our interpretations and voices have a disproportionate power over someone’s reputation once they reach print. People are not going to come up with their own interpretations by themselves; they will trust the authority – in this case the historian.

When you accurately estimate the psychology of another person and come up with a reasonable moral code they may have followed, the decisions made by even the worst types of people you can imagine start to make more sense. It is possible that you will also come out of the experience feeling slightly unclean. It is not as if you suddenly share the values of these people and would have chosen a path similar to theirs, but when your estimate of their internal life is realistic, you can at least understand them. And that in itself can cause some initial discomfort. However, your map will now more closely resemble the territory, and you are better off for it. Being closer to the truth is reward enough for having to step into nauseating mindscapes. I think every self-respecting historian should strive to do this: to understand the people they are studying, regardless of who they were or what it is they believed in or did.

Way too often I encounter one of two strategies employed by historians when they write about people whose beliefs or actions did not always align with the researcher’s own morality.

    1. The unsavoury bits of their existence are ignored and the focus is keenly kept on the ‘other important stuff’ the person was involved with, as if you could cut a part of the person out like a dark spot on a fruit and still call the result a holistic interpretation.
    2. The unsavoury bits are placed under laser focus and what is considered a dark spot on the person contaminates their whole being. There is nothing to be salvaged, as the person is made ‘unclean’ by the rotten parts of their legacy.

Neither of these approaches feels intellectually honest, and I feel even more sorry for the readers to whom the facts about the person in question are otherwise unknown. We should trust our readership more and refrain from frantic virtue signaling just so that nobody can say we ever agreed with Hitler.

¹ Yudkowsky, Eliezer. “Politics Is the Mind-Killer” in Rationality: From AI to Zombies. Berkeley: MIRI (2015), 255.