Methods Circle: past topics

  • Qualitative Comparative Analysis
  • Digital Discrimination

On Friday, November 27 we learned about Qualitative Comparative Analysis

The topic was Set-Theoretic Methods (Qualitative Comparative Analysis, QCA) and the speaker was Laura Sibinescu (slides of the presentation).

Set-theoretic methods differ from other methodological approaches by examining social phenomena in terms of sets and set relations. Qualitative comparative analysis (QCA) uses the principles of necessity and sufficiency to establish relationships between conditions and outcomes. Its aim is to establish different explanatory paths leading to the same outcome and to open research topics to richer interpretations. It’s well suited for comparative research and works well in combination with other methods, such as case studies and process tracing.
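To make the idea of sufficiency a little more concrete, here is a minimal Python sketch of how crisp-set QCA scores a condition against an outcome. The case data and condition names below are invented purely for illustration; real QCA software (e.g. the fs/QCA program or R packages) does far more, including truth-table minimization.

```python
# Minimal crisp-set QCA sketch: checking how consistently a condition
# is sufficient for an outcome across a small set of cases.
# All case data here are hypothetical, for illustration only.

cases = [
    # (strong_unions, left_government, generous_welfare_outcome)
    (1, 1, 1),
    (1, 0, 0),
    (0, 1, 1),
    (0, 0, 0),
    (1, 1, 1),
]

def sufficiency_consistency(condition_idx, outcome_idx, cases):
    """Share of cases exhibiting the condition that also show the outcome.
    A condition is perfectly sufficient when this equals 1.0."""
    with_condition = [c for c in cases if c[condition_idx] == 1]
    if not with_condition:
        return None
    return sum(c[outcome_idx] for c in with_condition) / len(with_condition)

# In this toy data, left_government is perfectly consistent as a
# sufficient condition, while strong_unions is not.
print(sufficiency_consistency(1, 2, cases))
print(sufficiency_consistency(0, 2, cases))
```

Necessity is checked the same way in the other direction: the share of cases showing the outcome that also exhibit the condition.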

In this short workshop we’ll talk about the basic concepts and principles of QCA and go through a concrete example to understand the typical workflow it involves. We’ll also briefly look at different variants of QCA and some important resources, such as literature, software and where to get methodological training.

This will not be a very technical discussion. The idea is to gain a general understanding of what QCA can and can’t do, and whether it might be suitable for your own research.


Our first meeting took place on Friday, October 23.

The topic was Digital Discrimination. This is certainly related to novel forms of discrimination caused by technological development in society, but we’ll go even further and discuss discrimination that may lurk in our research methods.

We had a visiting speaker, Dr. Indrė Žliobaitė. She is a researcher in computer science at Aalto University, HIIT, and the University of Helsinki. Her research interests include predictive modeling with streaming/sensory data; fairness, transparency and accountability in machine learning; and computational data analysis applications in general.

Her presentation will give an overview of the current state and research trends in fairness-aware machine learning and data mining, an emerging discipline at the intersection of computer science, law and the social sciences that aims at understanding, diagnosing and preventing such discrimination. The specific topic of her talk will be (click the topic for slides):

How can decision making by algorithms discriminate people, and how to prevent that

  • Big-data-driven algorithms are increasingly used in many situations in our lives: they can set the prices we pay, select the ads we see, the news we read online or the people we meet, match job descriptions to candidate CVs, and decide who gets a loan, who goes through an extra airport security check, or who gets released on parole.
  • Yet growing evidence suggests that decision making by inappropriately trained algorithms can discriminate against people. This may happen even if the computing process is fair and well-intentioned.
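One common diagnostic in fairness-aware machine learning is the disparate impact ratio: the rate of positive decisions for a protected group divided by the rate for everyone else. The sketch below is a minimal illustration; the decision data are invented, and the 0.8 threshold mentioned in the comment is an assumption borrowed from the US "four-fifths rule", not part of the talk.

```python
# Disparate impact ratio: positive-decision rate for a protected group
# divided by the rate for the unprotected group.
# The decision lists below are invented, purely for illustration.

def positive_rate(decisions):
    """Fraction of decisions that are positive (1)."""
    return sum(decisions) / len(decisions)

def disparate_impact(protected, unprotected):
    """Ratio of positive-decision rates; 1.0 means equal treatment."""
    return positive_rate(protected) / positive_rate(unprotected)

# 1 = loan granted, 0 = loan denied
protected_group   = [1, 0, 0, 1, 0, 0, 0, 0, 1, 0]  # 30% granted
unprotected_group = [1, 1, 0, 1, 1, 0, 1, 1, 0, 1]  # 70% granted

ratio = disparate_impact(protected_group, unprotected_group)
# A ratio well below 0.8 is often taken as a red flag for disparate impact.
print(round(ratio, 2))
```

Note that a skewed ratio can arise even when the protected attribute is never used directly, for example when a training set encodes past biased decisions; this is one way a "fair and well-intentioned" computing process can still discriminate.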

After the talk we’ll have a free discussion on the subject. Is this kind of discrimination something we should be worried about, and should it be studied more? Is it possible that even our more traditional methods are vulnerable to similar discrimination, and what can be done to circumvent such biases in our research?