Voice Your Disagreement

In this essay series, I will write down my own thoughts about Eliezer Yudkowsky’s essays in the Rationality: AI to Zombies series from the point of view of a historian. My reason for writing these is primarily to organize my own thoughts regarding the use of rationality as a mental tool in my own profession, and as such, I do not even attempt to appeal to a very wide audience. However, if you are reading this disclaimer and find my essays insightful or entertaining, more power to you, and I implore you to go and read the original essays, if you have not already.

In Not Agreeing to Disagree, I mentioned Aumann’s Agreement Theorem, which proposes that two perfectly rational agents cannot agree to disagree. From this theorem it follows that if two people disagree with each other, at least one of them must be doing something wrong or have limited data on the subject. I also brought up the Grievance Studies affair and how it relates to the primary issue I see with the social constructionist point of view as it is currently practiced in the humanities. Namely, my problem with this approach is that you can make any interpretation valid as long as you have gone through the effort of constructing a theoretical base upon which you can build entangled interpretations of the world. What gets forgotten here is that the original theory, upon which all subsequent assertions rest, is often built on nothing but air. It is exceedingly difficult to figure out anything factual about the world, and complicated theories built on the internal logic of inevitably biased researchers are just as inevitably bound to be – if not an outright lie – at least a perversion of reality.
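For readers who want the formal shape of that claim, here is a rough sketch of Aumann’s 1976 result in standard probability notation (my paraphrase, not Yudkowsky’s formulation, and it omits the common-knowledge machinery that does the real work in the proof):

```latex
% Sketch of Aumann's agreement theorem (Aumann, 1976).
% Two agents share a common prior P over the states of the world
% and hold private information partitions I_1 and I_2.
% Their posterior probabilities for an event E are
%   q_i = P(E \mid I_i), \qquad i = 1, 2.
% The theorem: if the values q_1 and q_2 are common knowledge
% between the two agents, then
\[
  q_1 = q_2 .
\]
```

In other words, honest rational agents who have digested each other’s stated posteriors cannot knowingly sustain different probability estimates; a persistent disagreement signals an error or an information gap somewhere.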

“Once you tell a lie, the truth is your enemy; and every truth connected to that truth, and every ally of truth in general; all of these you must oppose, to protect the lie. Whether you’re lying to others, or to yourself.”¹

Research built on flaky theoretical bases is guaranteed to yield flaky results. In the empirical sciences, there comes a watershed moment when theories get tested experimentally, and the flaky ones get washed away for failing to hold up under scientific testing. In many fields in the humanities, however, actual experiments are never conducted, and the ultimate watershed moment is the peer review process, where it is enough to make your reviewers think “this seems legit”. Logical consistency within the work itself, results that agree with what the reviewers would have liked to see, and all the minute things interfering with a reviewer’s better judgement come into play here when bad research gets passed into publication.

But one can put only so much blame on the reviewers if the entire trend of your field is to pull rabbits out of hats. And I don’t exactly blame the researchers either (at least I don’t assign full blame to individuals), because lying to oneself is easy enough even when you’re not actively encouraged to do it. I mentioned in the previous blog post how, at least from my point of view, there is an epidemic of unnecessary politeness among colleagues in academia. Even if you don’t agree with someone’s interpretation of the world or their research, people are more likely to say that everyone is entitled to their opinion than to boldly challenge those presumptions.

“A single lie you tell yourself may seem plausible enough, when you don’t know any of the rules governing thoughts, or even that there are rules; and the choice seems as arbitrary as choosing a flavor of ice cream, as isolated as a pebble on the shore . . .

. . . but then someone calls you on your belief, using the rules of reasoning that they’ve learned. They say, “Where’s your evidence?””²

Not a week ago, I was criticized by a colleague for my conduct (unrelated to research) long after the opportunity to amend my actions had passed. As I apologized and pointed out that it would have been nice to know there was a problem while I could still do something about it, they told me that one of the reasons for delaying was that giving negative feedback to one’s colleagues is so hard. And while I don’t doubt it is, we should be asking for evidence and rigorous explanations of how our colleagues ended up with the results and beliefs they now hold to be true, even when it’s hard. Otherwise we become enablers of a web of lies, which in turn can divert us and others from the path towards better maps of the territory.

“Think of what it would take to deny evolution or heliocentrism—all the connected truths and governing laws you wouldn’t be allowed to know. Then you can imagine how a single act of self-deception can block off the whole meta-level of truthseeking, once your mind begins to be threatened by seeing the connections.”³

¹ Yudkowsky, Eliezer. “Dark Side Epistemology” in Rationality: From AI to Zombies. Berkeley: MIRI, 2015. 338.

² Yudkowsky, Eliezer. “Dark Side Epistemology” in Rationality: From AI to Zombies. Berkeley: MIRI, 2015. 336.

³ Yudkowsky, Eliezer. “Dark Side Epistemology” in Rationality: From AI to Zombies. Berkeley: MIRI, 2015. 337.