Practical Dos and Don’ts before the last stretch

Or: details and technicalities that students (and scholars) tend to overlook during the writing process – to their later exasperation

The most important part of an academic work is of course its contents: an analytically compelling presentation of the conducted research and its results. During their Master’s projects, students are therefore primarily focused on the research and writing process itself, as they should be.

For better or worse, it can be an immersive process; things like reference styles or document layouts might be overlooked as trivial details and low-priority tasks that can be tackled later. However, ‘later’ always comes sooner than one expects, and unresolved technical details have a tendency to accumulate into a despair-inducing endeavour at the last minute.

This post is hence a kind reminder and warm recommendation to deal with those minor yet surprisingly time-consuming details in good time and in the course of the process. Trust me – you’ll want to spend the last days before submission proofreading your text for the umpteenth time, not fiddling with page numbers, tables of contents or reference formatting in a fit of rage.

And you definitely do not want to be faced with corrupted documents, lost files or other data management disasters when your submission deadline is just around the corner.

[Please note: This post is written by a qualitative historian/social scientist primarily for students with similar specialities; it hence does not necessarily correspond to practices and guidelines in quantitative research, let alone entirely different scientific fields and disciplines.]

References

Make a habit of diligently completing your references as you go, especially if you aren’t using reference management software (why on earth not?).

  • Look up all the details you need right away: author names, page numbers, journal volumes and issues…
  • Format your references in accordance with your chosen referencing style: author names, editor names, inverted commas, italicisation…
  • Also remember to include all references in the bibliography.

In other words, don’t leave unfinished references for your future self to tackle at the last minute! Make sure that you don’t leave a trail of half-baked reminders like “check this!”, “add more”, “author [which publication??]”, “pp. XX”, “where was this discussed?” – this strategy has magnificent potential to backfire. If possible, resolve all open issues at the end of each writing pass.
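If you keep your draft (or an exported copy of it) in plain text, even a tiny script can help you hunt down leftover placeholders before submission. Here’s a minimal sketch in Python – the file name and the placeholder patterns are hypothetical, so adjust them to your own bad habits:

```python
import re
from pathlib import Path

# Hypothetical placeholder patterns -- adapt these to the reminders you
# actually tend to leave behind ("check this!", "pp. XX" and friends).
PLACEHOLDERS = [r"check this", r"add more", r"pp\.\s*XX", r"\?\?", r"TODO"]
pattern = re.compile("|".join(PLACEHOLDERS), re.IGNORECASE)

# Assumes the draft has been exported as plain text, e.g. thesis.txt.
text = Path("thesis.txt").read_text(encoding="utf-8")
for lineno, line in enumerate(text.splitlines(), start=1):
    if pattern.search(line):
        print(f"line {lineno}: {line.strip()}")
```

Run it at the end of each writing pass; an empty output means your future self owes you a coffee.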

Of course, not all issues can be tackled at once from home; some might require a visit to the library, archive or similar. If you need to check something that’s relevant for the substance of your thesis, take care of it promptly. Otherwise, I recommend gathering detailed information on all unresolved issues in one online- or mobile-accessible place (e.g., a Google Doc/Spreadsheet, or a note app on your phone). Allocate a pessimistically approximated time slot for taking care of all visits in one go. Take it as a well-deserved break from the intense brainwork – if your schedule (and Covid restrictions) permit, enjoy a long lunch or coffee break, or take the evening off.

Schedule this task relatively close to the submission date. For obvious reasons, don’t leave it for the very last days. However, if you want/need to optimise your use of time and get away with as few library visits etc. as possible, don’t jump the gun, either. Also remember to check and re-check the opening hours of all the places you intend to visit!

Microsoft Word

Harness Word’s automation features! The less you do manually and ad hoc, the better. This minimises inconsistencies and careless mistakes, and frees up your time and mental resources for more important stuff – namely, the actual content of your thesis. I’ve included links to Word instructions at the end of this post.

If you haven’t already made use of paragraph styles, go through your document and apply the applicable style to each element: Heading 1, Heading 2, Quote etc. You can use Word’s default formatting; the most important thing is to have all headings in the correct style. This allows you to change the formatting (typeface, font style and size, paragraph spacing, line spacing and so on) for all same-level headings at once.

Furthermore, heading styles are necessary for creating an automatic table of contents in Word. TOCs are a common cause of frustration among students, and the issues are often related to incorrect use of styles. If your document only contains a body text style (e.g., Normal or No Spacing), Word cannot recognise the headings and hence cannot create a table of contents from them.

I recommend creating your table of contents and adjusting its settings as soon as you have your heading styles in place. You can then just update it whenever you’ve edited your document, and you’ll always have an accurate TOC.
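If you’d like to verify that every heading actually carries a heading style (and not just manually bolded Normal text), the third-party python-docx library can list exactly the paragraphs an automatic TOC will pick up. A minimal sketch – the file name is hypothetical:

```python
# Requires the third-party package: pip install python-docx
from docx import Document

doc = Document("thesis.docx")  # hypothetical file name

# Word's built-in heading styles are named "Heading 1", "Heading 2", ...
# These are precisely the paragraphs the automatic TOC is built from.
for para in doc.paragraphs:
    if para.style.name.startswith("Heading"):
        print(f"{para.style.name}: {para.text}")
```

If a chapter title is missing from the output, it will be missing from your TOC, too.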

Page numbering in Word is a perpetual source of grief and terror for students. Tackle the pagination demon in good time and with ample patience and determination. Don’t worry, it is by no means an impossible task, but you’re better set up if you’re not in the middle of a pre-submission meltdown.

Do not use your one and only thesis document as a test piece! Practise tricky layout features like page numbers, TOCs, sections etc. in a dummy document first. Or at least remember to save and make a copy of your main document before you start – which you should be doing anyway.

Version management and backup

Don’t leave all your hard work hanging on one single document: make several copies of the main document in case of file corruption, data loss etc.

You’ll write a lot of stuff that won’t make it into the final work, but don’t let your hard work go to waste. As your project progresses, create clearly labelled versions of your main document and collect deleted bits into ‘killed darlings’ documents. You’ll probably need to revisit something at a later stage.
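How you label the versions matters less than doing it consistently. If you’d rather not rely on remembering, a small script can make date-stamped copies for you. A minimal sketch in Python – the file and folder names are hypothetical, and a cloud-synced folder (see below) is the obvious place for the copies:

```python
import shutil
from datetime import date
from pathlib import Path

# Hypothetical paths -- point these at your own document and backup folder.
main_doc = Path("thesis.docx")
backup_dir = Path("thesis_versions")
backup_dir.mkdir(exist_ok=True)

# Produces e.g. thesis_2021-03-15.docx; rename by hand for special
# versions ("_before_restructure", "_killed_darlings", ...).
target = backup_dir / f"{main_doc.stem}_{date.today().isoformat()}{main_doc.suffix}"
shutil.copy2(main_doc, target)  # copy2 also preserves file timestamps
print(f"Saved {target}")
```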

Most importantly: for the love of everything good and holy, set your documents to automatically back up to a cloud service.

Note: automatically. Your stressed-out brain will absolutely fail to remember to upload the latest version after every single session.

Choose whichever service and method you like, just as long as you do it.

Seriously.

Do it.

Microsoft Office instructions

Customize or create new styles:
https://support.microsoft.com/en-us/office/customize-or-create-new-styles-d38d6e47-f6fc-48eb-a607-1eb120dec563

Insert a table of contents:
https://support.microsoft.com/en-us/office/insert-a-table-of-contents-882e8564-0edb-435e-84b5-1d8552ccf0c0

Start page numbering later in your document:
https://support.microsoft.com/en-us/office/start-page-numbering-later-in-your-document-c73e3d55-d722-4bd0-886e-0b0bd0eb3f02

Add different page numbers or number formats to different sections:
https://support.microsoft.com/en-us/office/add-different-page-numbers-or-number-formats-to-different-sections-bb4da2bd-1597-4b0c-9e91-620615ed8c05

But it’s real and quantifiable, how can it be a social construct?

Social constructionism is a topic that continues to cause confusion, anger and hilarity, especially among natural scientists. Given how ridiculous the most common misconceptions (or deliberate strawmen) make this approach sound, I can’t blame them.

But they are, as said, misconceptions. In this post, I’ll give a brief explanation of what social constructionism is and what it’s not. (This is a slightly modified version of my comment to one of my favourite podcasts, The Reality Check. Highly recommended!)

Firstly, social constructionism is nothing new and nothing marginal. It’s connected to the linguistic turn of the social sciences and humanities in the 1970s (cf. logical empiricism) and to postmodernism, which together pretty much form the hegemonic paradigm at the moment. Not all scholars in these fields would call themselves social constructionists, but I’d nevertheless say you would have a hard time finding a social scientist or humanist who swears by essentialist and transhistorical aspects of humanity – that is, social and cultural features so inherent to humans that they are found everywhere in the exact same form, regardless of time and place. Ideas of this kind will be met with extreme scepticism among social constructionists.

Simply put, social constructionism analyses social and cultural realities, i.e., humans’ ideas, beliefs and conceptions about themselves and the world: how these ideas, beliefs and conceptions are manifested and how they’ve come to being. In principle, anything that humans have an understanding of can be studied under the wide umbrella of social constructionism. Social and moral norms, hierarchies and power relations, fashion trends, ideologies, policies, medical instruments, cultural rituals, scientific theories, buildings, institutions, agricultural machines, professions, national flags, identities, superheroes … What do these things (appear to) mean in context X for actor Y? How have these meanings developed? How are they expressed and reinforced? By whom? And so on.

In practice, the seemingly endless range of topics is limited by the availability and type of material. Different types of material answer different questions, and non-existent material means there is no expression of a conception to analyse. It is crucial to remain critical and reflexive about what actually can be interpreted, and how.

Painting of a pipe; under the painting, text in French that means “This is not a pipe”. (Source: Matteson Art)

René Magritte’s painting The Treachery of Images (1929) states: “This is not a pipe.” It points to the fact that it indeed is not a pipe – it is a painting of a pipe: a representation of an actual pipe in real life, or a representation of someone’s idea of a pipe.

While Magritte was a Modernist artist and not a postmodernist social scientist, his work rather nicely captures the relationship between social constructionism and the real world. When a social constructionist says something is socially constructed, they do not imply that it’s only socially constructed and doesn’t exist in real life. I emphasise: social constructionism does not claim that physical/material reality doesn’t exist or is fake. However, social constructionists do not study material things themselves, but ideas of material things and meanings attributed to material things. Both aspects are ‘real’ in their own sense. In a human being’s reality, material things and their conceptions are inherently linked: it is impossible to be aware of the existence of something material and not have a conception of it. We have cultural conceptions even of the most mundane and trivial things; a pile of sand means something different for a person living in Libya than for someone living in Svalbard. (Whether it’s interesting or meaningful to study a given conception is another matter.)

Social constructionism is particularly useful in identifying and analysing essentialised and naturalised phenomena – norms, ideas, practices and habits that are so ingrained in a culture that they feel natural and obvious to its members, like truths or the only option. Some things are so naturalised that people aren’t even aware of them, but consciously recognised phenomena can also have the status of a cultural ‘truth’. For example, we know that once upon a time (not even that long ago), there was a world that wasn’t formed of nation-states. Nonetheless, we find it extremely difficult to imagine and believe in a world without nation-states. At most, it’s a mental exercise for us, not a credible and experiential premise.

Cultural ‘truths’ often involve a presentist and teleological understanding of history. The past is seen and even deliberately portrayed as a linear development to a specific phenomenon in the present. This approach is very human; we take our own present-day cultural reality for granted and instinctively interpret and explain things from its perspective. However, from a historiographical perspective, these narratives are very problematic. Deterministic interpretations are a form of winners’ history peppered with hindsight: they hide away actors and contingency, and disregard the notion that the present development wasn’t the only possibility. This doesn’t imply a revisionist or counterfactual interpretation of history – it just means acknowledging that the development that took place wasn’t fatalistically ‘meant to be’, but happened as the result of a number of factors and coincidence.

For further reading, I warmly recommend Ian Hacking’s The Social Construction of What? (1999). It’s a good introduction to the topic, arising from the ‘science/culture wars’ of the 1990s (which regrettably seem to be ongoing).

Academic skills: being wrong, bad, unsure and rejected

In the postpositivist academia of today, it is generally acknowledged that science is unavoidably subjective and biased, and objectivity and neutrality are rather seen as sacred but unattainable ideals. Science is hence recognised as a social process. Given its essentially human nature, science is also an inherently emotional process. In this post, I’ll discuss emotional integrity (the ability to recognise, accept, analyse and process one’s emotions) and acceptance of fallibility as an important personal skillset for scholars – and humans in general.

As demonstrated in the Facebook group Reviewer 2 Must Be Stopped!, scholars from all fields express a range of emotions: for example, anger, frustration and despair over mean and unreasonable reviews, as well as joy and relief over good feedback, published articles or granted funding. Emotions in academia are kind of self-evident – ask any scholar if they’ve ever been angry or excited while conducting research and you’ll get a ‘yes’.

Emotions in academia have been discussed and researched to some degree: for example, in the context of mental health and emotional labour in academia, in the wake of Tim Hunt’s comments about ‘emotional women’ in the lab, or in the context of the neoliberal academia of today. Emotions and academia are thus mostly discussed in the context of a (potentially) problematic situation to be critically addressed and solved.

Yet, emotions are also present in normal, everyday academia: in the research process and in scholars’ professional development. This aspect of emotions in academia is rarely discussed, albeit with some exceptions, like Charlotte Bloch’s interesting study of the ‘culture of emotion’ in (Danish) academia. A scholar’s tools for processing difficult personal emotions – uncertainty, anger, sadness, disappointment and envy in particular – receive even less attention. They are certainly not commonly discussed and taught at universities as a part of academic-professional education, not even in fields where an expert’s ability to process their own emotions can literally be a matter of life and death for someone, like medicine.

While the ideal of the dispassionate, data- and fact-driven scientist still lingers, many scholars are passionate about and emotionally invested in their work. Everything related to their research is thereby also transformed into an issue about them personally, and research can indeed be emotionally taxing. All scholars face situations where they deliver poor-quality research and get harshly critiqued for it; have absolutely no idea what they are supposed to make of their data; receive rejection letters for articles and grant applications; lose their dream job to a dear colleague, and so forth. In short, scholars deal with professional failure and rejection that can feel deeply personal.

I argue that reflecting on one’s own fallibility and developing one’s emotional integrity can help in overcoming and learning from these situations. The pursuit of knowledge is the most essential task of a scholar, and failing at this task is a blow to a scholar’s professional identity and dignity. However, mistakes are also an inevitable part of research, and infallibility complexes and fear of failure can hamper one’s intellectual integrity (especially if they aren’t recognised). It is difficult to accept one’s own fallibility – nobody likes to be wrong, bad or rejected. Yet, I argue that it is important to try to detach fallibility from self-worth, both as a scholar and as a human being. In other words, it is important to acknowledge that mistakes and failures, or admitting them to others, do not render you less worthy or respectable as a person. On the contrary: they are a part of professional and personal development.

Everyone has their own emotional patterns and mechanisms, and I of course cannot give a one-size-fits-all guideline for emotional integrity, particularly since it is and always will be an ongoing process. But I’ll share some thoughts and realisations that help me deal with some of the emotional challenges in academia.

Negative feedback. Let’s face it: it sucks, and it feels horrible. The hardest part for me is to accept the negative emotions that are unavoidably evoked by critique, as well as the feeling of being personally attacked. My immediate coping mechanism would be to rationalise my emotions away so that I don’t have to actually feel them. However, this leaves a lingering feeling of emotional dishonesty, which makes me both emotionally and intellectually constipated. So instead, I try to allow myself to feel angry, humiliated, frustrated, sad and desperate, and I console myself with the thought that they are, indeed, just emotions. They will eventually fade, and then I’ll be emotionally and intellectually better capable of analysing the feedback in order to take in the constructive and useful bits and to rise above unfounded criticism.

Rejection. This is by far the hardest pill to swallow. At best, it’s basically really negative feedback, like a rejection from a journal or conference. Feel the emotions; improve your work; try again. At worst, rejection is blunt and final: no justifications for the rejection (i.e., feedback for improving your work), and no possibility to appeal. Funding and job rejections are the absolute worst, since they basically mean that you are denied an opportunity to support yourself with your expertise. It is demoralising, especially when rejections start to pile up. Unfortunately, I don’t have a good recipe for dealing with these situations emotionally. I was unemployed for almost a year, and I felt like I’d hit a brick wall both professionally and personally. The only solution I have to offer is: get funded/employed. ‘Ha-ha’.

Competition. Dealing with funding/job rejections is particularly difficult when your rivals are people you know. It is hard to witness the success of others when you are left with nothing. However, for me, this is easier to handle than the rejection itself, albeit not easy. I again try to refrain from rationalising my feelings away. Instead, I try to accept, firstly, that I feel envious of others’ success, humiliated that I lost to them and angry that I’m being treated unfairly (which isn’t necessarily true). Secondly, I try to accept that I’m feeling horribly guilty for being envious or angry instead of being genuinely happy for my colleagues. Again, I try to remember that these are just emotions – I, as a person, am more than my emotions in a particular situation. I can choose not to express them, and instead just congratulate my colleagues. Nonetheless, there’s no denying that losing to someone you know (instead of anonymous rivals) also makes your own loss somehow more tangible, which adds to the deteriorating sense of self-worth caused by recurring rejection. It is really hard.

Uncertainty. While scientific research isn’t necessarily perceived as a particularly creative field, my research projects certainly follow the well-known phased cycle of the creative process, in which initial enthusiasm gives way to confusion, despair and self-doubt before eventual clarity and completion.

Being aware of this cycle is my consolation when I am tormented by anxiety and crippling self-doubt in phases three and four. This, too, shall pass. By now, I know to anticipate the stage when I am utterly overwhelmed by my material, which yet again has turned out to be nothing like what I expected: it forces me to rethink my entire research strategy, makes me question whether I’ll ever have anything analytically meaningful to say about my material, drives me to read another fifteen articles and books on my topic, makes me question whether I’ll ever have anything meaningful to say about anything, pushes me to consider leaving academia altogether, etc. Until I eventually get to the following phases, be it through a triumphant eureka moment or through painstaking and laborious trial and error – and I can again discern some kind of future for myself and my research. It is a part of the process. It will be okay, and I will be okay.

I want to re-emphasise that the above are a few examples of my own processes and mechanisms for addressing and processing difficult emotions in my own everyday academic life. They might help someone with similar emotional patterns, or they might not. Nonetheless, I hope that this post can, for its part, contribute to demystifying emotions in the seemingly rational world of academia. All scholars feel emotions related to their work and professional development; many experience very powerful emotions, indeed. It hence serves nothing to pretend that personal emotions and failure aren’t an integral part of academic work, or to try and explain them away. At worst, poorly processed negative emotions find an unproductive outlet in the form of harsh and unfounded critique of others or oneself, or other forms of intellectual dishonesty towards oneself or one’s peers.

It is okay and normal to err, and it is okay and normal to feel.

Further reading

Askins, Kye & Blazek, Matej (2017) ‘Feeling our way: academia, emotions and a politics of care’, Social & Cultural Geography 18 (8): 1086–1105, DOI: 10.1080/14649365.2016.1240224

Bloch, Charlotte (2012) Passion and Paranoia: Emotions and the Culture of Emotion in Academia. Ashgate, Burlington.

Chhabra, Arnav (2018) ‘Mental health in academia is too often a forgotten footnote. That needs to change’, ScienceMag.org 19.4.2018, https://www.sciencemag.org/careers/2018/04/mental-health-academia-too-often-forgotten-footnote-needs-change

Feldman, Margeaux (2015) ‘“There’s no crying in academia,” Acknowledging emotional labour in the academy’, Gender & Society blog, https://gendersociety.wordpress.com/2015/10/27/theres-no-crying-in-academia-acknowledging-emotional-labour-in-the-academy/

O’Donnell, Karen (2015) ‘Why academia needs emotional, passionate women’, The Guardian 23.7.2015, https://www.theguardian.com/women-in-leadership/2015/jul/23/why-academia-needs-emotional-passionate-women

Woolston, Chris (2018) ‘Feeling overwhelmed by academia? You are not alone’, Nature 557: 129–131, https://www.nature.com/articles/d41586-018-04998-1

Solid Arguments over Feelings

Actually, the title of this post should be:

Someone Changed “Men” to “Black People” in an Everyday Feminism Post, I Changed “Black People” to “White People” in that Text, and Here’s What Happened.

But it’s pretty long, so the title is an homage to the blog in which the altered text was published, entitled Facts over Feelings. (The irony amuses me.)

In case the clickbait title wasn’t revealing enough, the author of the altered text wants to convey, in short, the message that Everyday Feminism is based on hate. That point is demonstrated by changing “men”, “male” etc. into “black people” and “black culture”, which indeed makes the text sound rather hateful – or, more bluntly, downright racist.

The problem with this is that the author’s analogy has a very fundamental flaw. By replacing “men” with “black people”, they are replacing a dominant/hegemonic actor/norm/culture with a marginalised one. In order for the analogy to work, “men” would need to be replaced by something that is similar in this respect. I therefore replaced “black” with “white”. And here’s what happened. (Spoiler/TL;DR: it suddenly doesn’t sound that disturbing anymore. Edit: well, it does sound disturbing, but not because of ‘reverse racism’ or hate, but because systemic racism is disturbing as it is, and the re-altered text highlights that.)

For a blog promoting moderation and reason, this was a remarkably poor performance.

(In addition, IMO, using ‘objective’ as an adjective for any human being is a rather radical notion.)

I’ve copy-pasted the entire thing below, except for section 2. Gendered oppression and violence are markedly different from racialised oppression and violence, which renders section 2 quite absurd (both in this version and in the original ‘black people’ version). Apples and oranges, you know. Fixing it would require rewriting the entire thing, and I quite frankly don’t have time for that.

– – –

Dear Well-Meaning White People Who Believe Themselves to Be Safe, Thereby Legitimizing the “Not All White People” Argument,

Let’s start here, even though this should go without saying: We don’t think that all white people are inherently abusive or dangerous. Plenty of white people aren’t.

There are white people that we love very much – white people around whom we feel mostly safe and unthreatened; white people who, in fact, support, respect, and take care of us on familial, platonic, romantic, and sexual levels. Not every white person has violated us individually; for most of us, there are plenty of white people that we trust.

We know what you mean by “not all white people” – because on a basic level, we agree with you.

But the socialization of white people is such that even a good white person – a supportive white person, a respectful white person, a trusted white person – has within them the potential for violence and harm because these behaviors are normalized through white culture.

And as such, we know that even the white people that we love, never mind random white people who we don’t know, have the potential to be dangerous. Surely, all people have that potential. But in a world divided into the oppressed and the oppressors, the former learn to fear the latter as a defense mechanism.

So when you enter a space – any space – as a white person, you carry with yourself the threat of harm.

Of course, in most cases, it’s not a conscious thing. We don’t think that most white people move through the world thinking about how they can hurt us. We don’t believe white culture to be a boardroom full of white people posing the question “How can we fuck over ethnic minorities today?” You would be hard-pressed to find a POC activist who actively believes that.

But what makes (yes) all white people potentially unsafe – what makes (yes) all white people suspect in the eyes of racialized people – is the normalized violating behaviors that they’ve learned, which they then perform uncritically.

Make no mistake: When you use the phrase “not all white people” – or otherwise buy into the myth of it – you’re giving yourself and others a pass to continue performing the socially sanctioned violence of white culture without consequence, whether or not that’s your intention.

In truth, the only thing approaching defiance against this kind of violence is to constantly check and question your own learned entitlement – and that of other white people. But you can’t do that if you’re stuck in the space of believing that “not all white people” is a valid argument.

So we wanted to call you in, well-meaning white people, to talk about these four points that you’re missing when you claim “not all white people” as a way to eschew responsibility for white culture.

Because it is all white people, actually. And here’s why.

1. All White People Are Socialized Under (And Benefit From) White Culture

Here’s the truth: Most of the time, when we generalize and use the word white people, what we’re actually referring to is the effects of white culture. What we’re actually intending to communicate when we say “white people are horrible,” for instance, is “the ways in which white people are socialized under white culture, as well as how that benefits them and disadvantages everyone else, sometimes in violent ways, is horrible.”

But that’s kind of a mouthful, isn’t it? So we use white people as a linguistic shortcut to express that.

And before you come at us with “But that’s generalizing,” it’s actually not. Because it is true that all white people are socialized under and benefit, to some degree, from white culture.

That is to say, the only thing that we truly associate all white people with is white culture – and that’s hella reasonable, even though it affects white people differently, based on other intersections of identity.

Because here’s how it works, my friends: Living in the United States, every single one of us is socialized under white culture – a system in which white people hold more power than other races, in both everyday and institutionalized ways, therefore systematically disadvantaging anyone who isn’t a white person on the axis of ethnicity. As such, we all (all of us!) grow up to believe, and therefore enact, certain racialized messaging.

We all learn that white people deserve more than anyone else: more money, more resources, more opportunities, more respect, more acknowledgment, more success, more love. We all internalize that. To say that “not all white people” do is absurd – because, quite simply, all people do.

For people who aren’t white people, this means that we’re socialized to feel less-than and to acquiesce to the needs of the white people in our lives. And this doesn’t have to be explicit to be true.

When we find it difficult to say no to our white bosses when we’re asked to take on another project that we don’t have the time for, or to our white partners when they’re asking for emotional labor from us that we’re energetically incapable of, it’s not because we actively think, “Well, Jim is a white person, and as a not-white person, I can’t say no to him.”

It’s because we’ve been taught again and again and again since birth through observation (hey, social learning theory!) that we are not allowed – or will otherwise be punished for – the expression of no. In the meantime, what white people are implicitly picking up on is that every time they ask for something, they’re going to get it (hey, script theory!).

A sense of entitlement isn’t born out of actively believing oneself to be better than anyone else or more deserving of favors and respect. It comes from a discomfort with the social script being broken. And the social script of white culture is one that allows white people to benefit at the disadvantage of everyone else.

And all white people are at least passively complicit in this white culture system that rewards white entitlement. We see it every single day.

The thing about privilege is that it’s often invisible from the inside. It’s hard to see the scale and scope of a system designed to benefit you when it’s as all-encompassing as white culture. And that might lead you to buy into the idea of “not all white people.”

To those on the outside, however, the margins are painfully visible. That’s why white people who really want to aid in leveling the playing field have a responsibility to listen to people who can see the things they can’t.

When ethnic minorities tell you that you’re harming them, listen. Listen even when you don’t understand. Listen especially when you don’t understand.

You can’t see all the ways in which your whiteness distorts the fabric of society, but we can. And if you want to help dismantle white culture, you have to make the choice to accept that a thing isn’t less real just because you haven’t seen it – or don’t believe yourself to have experienced it.

[…]

3. The Impact of Your Actions Is More Significant Than the Intent

Cool. You didn’t mean to contribute to the objectification of that person of color when you made that racist joke. Perhaps you even think that you’re so “enlightened” as an “anti-racist white person” that we should just know that you “didn’t mean it like that.” In fact, maybe you even think that you were being “subversive” when you said it. Okay.

But from the perspective of a person of color, that doesn’t matter, because we still have to feel the effects of that mindset every single day – and your bringing that to the foreground has a negative impact on us, no matter what the hell your intent was.

Many white people don’t do hurtful things maliciously. They may be doing them subconsciously, adhering to the ways in which they’ve been taught to behave, as all of us do.

Other white people, of course, are intentionally violent. But the effects of both can be incredibly damaging.

Surely, we’re less likely to harbor resentment towards someone who stepped on our toes accidentally than we are towards someone who stomped on them with malevolence – especially when accountability is had and an apology is issued. But our goddamn toes still hurt.

To a person of color, there’s very little difference between the impact of inadvertent and intentional harm. A white person who makes you feel unsafe by accident is as harmful to you as one who does it on purpose.

So no matter how well-intentioned you are, you’re not off the hook when you hurt people. And because of everything we’ve discussed above, you are likely (yes, all white people) to hurt and violate. And you need to be willing to take responsibility for that.

4. The Depth of Work to Be Done Is Avoided By Most White People

It’s understandable that we react by distrusting even “safe” white people as a rule when even safe white people can hurt us – because even “safe” white people have been raised in and shaped by white culture that both actively and passively harms us every day. There’s no escaping that, regardless of anyone’s best intentions, so it’s useless to talk about intent as a mitigator of harm.

Add to that the constant stream of disappointment and hurt we feel when self-proclaimed “safe” or “anti-racist” white people do turn out to harm us – which happens way too often to be treated like an anomaly – and it’s easy to see why POC react with distrust or even outright hostility when “safe” white people show up in spaces dedicated to POC.

We want to trust that your good intentions will lead to positive actions, we do. But here’s what we need you to understand before that can possibly happen: What you’re asking us to accept from you will take a hell of a lot of work on your part – and we’ve seen over and over again that many self-proclaimed “allies” just aren’t willing to do it.

Being a “safe” white person – hell, being an anti-racist white person – is more than just believing yourself to be and collecting accolades from others about the minimal work that you’re doing not to be an asshole.

Doing the work means really doing the work – getting your hands dirty (and potentially having an existential crisis in the process).

Consider it like this: If you go through life assuming that your harmful behavior is appropriate and most of society provides a positive feedback loop, why would you stop to examine yourself? You’ve never been given any indication that you should.

If you never learn to see your behavior within the context of the broader harm done to ethnic minorities, what motivation will you have to change? And if you keep passively absorbing toxic attitudes towards white entitlement, will you really move to check bad behavior in other white people?

Because here’s the truth: Even when it’s not conscious, white entitlement is a choice – a choice to be uncritical, a choice to continue to passively benefit. And attempting to fight that entitlement is also a choice – one that has to be both conscious and ongoing. You’ve got to choose it every day, in every instance.

But how many well-meaning white people are truly choosing that path, instead of just insisting that it’s “not all white people” and that they’re “not like that?”

Hint: You are “like that” – especially if you’re not actively fighting white culture. And claiming that you’re “not like that” doesn’t negate white culture – it enforces it.

Fighting learned white entitlement means assuming the burden of vigilance – watching not just yourself, but other white people. It means being open to having your motives questioned, even when they’re pure. It means knowing you’re not always as pure as you think.

It means assessing the harm you’re capable of causing, and then being proactive in mitigating it.

Most of all, it’s a conscious decision to view every individual’s humanity as something exactly as valuable and inviolable as your own.

And it means doing it every single moment of your life. Point blank, period.

If you really want to stop the “all white people” cycle, that’s the only place to start.

***
Well-meaning white people, if we’re being honest, we love many of you. And those of you whom we don’t know, we want to believe and appreciate. We want to feel safe around you.

We don’t want to fear or distrust white people. We don’t want to have to perform risk assessments on every white person that we meet. Trust us – it’s a miserable life! We’d gladly abandon this work if it wasn’t absolutely necessary to our survival.

But it’s not our job to be vigilant against harmful behaviors that we can’t possibly hope to control. Nor is there anything that we alone can do about this. It’s incumbent upon white people to make themselves safer as a group.

And there’s no way that you can do that until you accept that yes, it is all white people – including you – and start working against it.

Critical thinking requires… critical thought

During the past month or so, the anti-vaccination movement has (re)surfaced as a topic of discussion in Finland. I had hoped that this line of thought wouldn’t gain much of a foothold here, but, alas, I was wrong. According to Helsingin Sanomat, there are now parts of the country in which vaccination coverage has dropped to a critical low, seriously damaging herd immunity and increasing the risk of dangerous epidemics.

I am alarmed and saddened by this development – even more so when I think of it being linked to a broader anti-intellectual attitude. This attitude manifests itself as reluctance towards expertise and expert knowledge, and as an emphasis on how individuals are the best experts of their own lives and should therefore have significant individual autonomy. This is labelled ‘critical thinking’: one should not take so-called official truths as a given, but question dominant paradigms (authoritative and normative information and knowledge) and educate oneself on marginal or alternative knowledge.

Formulated like that, I don’t disagree with the idea of critical thinking. On the contrary – it is of utmost importance that our hegemonic knowledge production and policy apparatus is under critical scrutiny and adequately contested. Information and knowledge certainly should not be taken as a given; they should constantly be challenged and updated.

But here’s the problem: in order to be able to critically evaluate and challenge something, you need to have a profound understanding of said something. Mere reluctance or hostility towards prevailing scientific and expert conceptions or policies does not count as critical questioning. Scepticism or doubtfulness, sure (and that’s not necessarily a bad thing), but critical questioning implies that you actually have valid criticism to offer. For this, your knowledge base needs to be on par with what you are criticising.

In various conversations, I have encountered the disgruntled claim that experts belittle the layperson’s ability to understand scientific knowledge (e.g., non-popularised scientific articles). It is perceived as elitist and patronising, and I get that. Scientists and other experts are not always masters of communication and rhetoric – especially if they don’t bother to properly argue and explain their case but merely refer to their expert authority, which comes across as an arrogant ‘Because I say so, end of discussion’ attitude.

Nevertheless, in all their obnoxiousness, these experts are right – really understanding scientific knowledge is difficult. Sure, you might understand what the abstract and conclusions state, i.e., the sentences more or less make sense to you and you get a grasp of what the researcher is trying to say. This, however, is not the same as really-really understanding the text.

While academic education by default trains the skill of critical thought, holding a degree does not come with a magic ability to critically scrutinise and contribute to any and all kind of knowledge. I, for example, can with intellectual dignity say something meaningful about a ridiculously small portion of scientific knowledge. Heck, I can’t even evaluate most of the knowledge produced in historical research, my own discipline! So how on earth could masses of laypeople be equipped with the skills for digesting papers in, say, medicine or nutrition? Well, they can’t. Introducing level 1 of critical thought: understanding the limits of one’s own knowledge. Overlooking this is intellectually insincere arrogance.

To illustrate what I mean, here are some central issues on which I can’t say much of anything for the majority of the research out there.

Firstly: who are the authors? Are they specialised in this topic, and what have they done previously? What are their past and present affiliations, and who have they collaborated with? In short, who are they as scholars?

Secondly: what is the study actually about, and what is its context? What academic discussion and discourse does it contribute to? Why has this particular problem or phenomenon been taken into focus? Has it already been researched? If yes, how is this study related to that research? Does the study position itself in regard to key studies in the field or theme? What kind of reputation does the journal or publisher have?

Thirdly: the execution of the research – does the study make sense? Are the research questions (and possible hypotheses) clearly and sensibly formulated? Are the data or research material and the research questions compatible – can the research questions actually be answered on the basis of the data or material? Is the data or material appropriate for this study, or are there critical redundancies or, worse, omissions? How, exactly, was the data gathered? What can the data actually reveal and what not? Are the selected methods and theories appropriate, or would some other theory or method be better suited? Are the methods and theories well applied? Are the findings and interpretations (conclusions) transparently and credibly presented and argued? Are the authors open about possible shortcomings? Are there any ethical considerations to take into account?

And so on, and so on; the list is far from exhaustive. It serves to show how critically contesting scientific knowledge requires an understanding not only of the subject matter, but also of the entire academic context, culture and discourse in which knowledge is produced – at least if criticism is to be given with intellectual dignity (and academic self-respect, if the contributor is a scientist). This is level 2 of critical thinking, and as fun and informative as Wikipedia surfing is, it generally does not provide these skills. There is, however, absolutely no shame in not reaching level 2 or in being just a sporadic visitor. Then, in accordance with level 1, you simply have to humbly accept that the most you can say is ¯\_(ツ)_/¯ and trust what the experts are saying. Or at least admit that your scepticism isn’t necessarily justified and you could be wrong – you just can’t shake the feeling of reluctance. Level 1 definitely provides enough challenge!

Scientific knowledge production has many built-in problems, which I won’t discuss in detail in this post but nevertheless want to briefly acknowledge.

For starters, research funding guides research trends and isn’t evenly distributed. While it is highly unethical and unprofessional to deliberately try to conform to the interests of the sponsor, even non-shady funding skews the academic operating environment. Sadly, scientific knowledge is subject to politics: some research topics and disciplines are favoured over others, and not necessarily or only on academic grounds. Many important issues go unstudied because they do not get funded; they are not science-, media- or policy-sexy (for instance, they don’t fit in the latest scientific ‘it’ themes, or can’t be commercialised, i.e., turned into money).

Secondly, not all science is good science. Some studies are just unintentionally poorly executed. Others deliberately apply questionable methods in order to get ‘better’ (i.e., easily publishable) results: selective samples; ambiguous, misleading and non-transparent presentation of results and arguments; tinkering with the analysis, e.g., by selecting an ill-suited method, omitting relevant information, or p-hacking… Not to mention big ingrained problems, such as the overall issue with p-values, the replication crisis, the reluctance to fund, conduct and publish null studies, or predatory journals and how they are monitored (cf. Beall’s list).

As you can see, the field that by definition should be ethically and professionally committed to producing, critically evaluating and correcting knowledge is far from objective or unproblematic. Yet, it is the most rigorous system we have in terms of knowledge production. All other forms have it easier. Especially nowadays, anyone can publish and spread anything online with whatever criteria (or lack thereof) they want. They are free to call it ‘critical’, ‘true’, and ‘real’, and nevertheless rely merely on persuasive rhetoric or deliberately flawed data. To an alarming extent, they are even free to cash in on such activities.

In the era of ‘Google expertise’ and ‘experts of their own life’, there is a call for treating all opinions as equal. But all opinions aren’t equal – some opinions are better educated, argued and justified than others. And this certainly holds true for information and knowledge as well. The internet is riddled with dangerous and harmful misinformation, which is produced and spread through uncritical enthusiasm and gullibility as well as ruthless maliciousness. The detrimental effects of misinformation can only be prevented and mitigated through… critical thought. Both level 1 and level 2 are utterly important – and trust me, you can never have too much XP on either level.

(For more on this topic, I recommend the blog post The Death of Expertise by Tom Nichols. A slightly longer read, but worth it.)

Your favourite dictator now also in English!

Welcome, English-speaking reader, to this international dimension of my blog.

I usually touch upon topics related to Finland, which makes Finnish a more convenient language. But now that I have a more general issue to write about, I decided to broaden the language base of my blog. I will apply this division (general topics in English, Finland-related topics in Finnish) in the future as well; my English posts will be labelled with the tag (drumroll) ‘in English’.

For my no doubt immense international audience, a short introduction: I am a PhD candidate in political history at the University of Helsinki. In my dissertation, I study Finnish social policy NGOs as experts during the 1940s–1950s. I analyse how the associations have utilised their expertise in setting and influencing political agendas, i.e., in acting as political actors. I also study these activities as a process of constructing expertise and an expert status. My overall research interests include social policy and its history, gender history and conceptual history, among others.

If all goes according to plan, in 2018, I’ll be Sophy with a ph…d. I’m not even sorry. (H/T Stephen.)

The name of my blog translates to ‘If I were the dictator of the world’, which refers to this blog as a platform for me to express my universally superior views on topics related to social policy, social justice and inequality, (intersectional) feminism, etc. The world would obviously be a much better place if only the world population understood to hand over all power to me. I promise to be a really nice dictator.