Discussion on digitalization and AI belongs to everyone – and here’s why

by Minna Vasarainen

During the past week, Hannele, Liubov and I took part in the WORK2019 Conference. The topic of the conference was “Real Work in Virtual World”. We each had our own presentations: Hannele on the emerging role of the building information modeling (BIM) coordinator, Liubov on changes in academic work, and I on extended reality technologies in working life. The conference was well organized – the only thing we missed was a dessert at the reception of Helsinki City(!). So, huge thanks to the organizers!

WORK 2019 Conference Opening. Welcome words by Timo Harakka, Finnish Minister of Employment

The WORK2019 conference included six keynotes altogether. They were all intriguing, but I was especially impressed by Valerio De Stefano’s lecture on Thursday. The title of his keynote was “Labour regulation for the future: automation, artificial intelligence and human rights at work”. Before the presentation, I was not sure how much this lecture would interest me, but it turned out to be so relevant that I wanted to share it in a blog post.

De Stefano presented the idea of granting some legal rights to AI applications, a so-called “electronic legal personality”. This is familiar to us through corporations, which have their own legal personality: they can own property and they can be sued, but in legal terms they are separate from what we call a natural person. As the possibility is still at the level of an idea (apart from some exceptions), it raises multiple questions to answer and problems to solve. For instance, applications based partly on machine learning are not entirely predictable (as the learning outcome might differ from the expected one), and, according to De Stefano’s keynote, there have been cases of discrimination that have led to the complete deletion of the app (or the algorithm, to be more precise).

These cases of discrimination by AI raise questions about where the responsibility lies. Aren’t AI applications merely making already existing, but not always recognized, injustices visible? De Stefano also noted that we do not know enough about AI to regulate it properly and that we need more discussion and research on the topic. This leads us to the most important part of the message I want to convey.

To have a discussion on AI, we need everyone – even those who do not care that much about AI, or digitalization overall. Everyone will feel the consequences of these changes in their lives. AI-based applications are slowly changing the way we connect to each other and find new places, products, and friends – how we exist and live in this world. These changes do not necessarily have to be embraced, but they do have to be addressed and discussed.

Despite the way we often speak about AI and digitalization, it is not something happening beyond our control and understanding. True, we do not fully understand how to make effective systemic change (or corruption would not be an issue), or sometimes how an individual algorithm works, but we do have the power to direct the change.

The problem is how to give everyone the possibility to participate, but the first step towards that is realizing that you are fully capable of taking part in this discussion even if AI or new technical devices are not your area of expertise.

Generalization in ethnography: mission impossible?

Liubov Vetoshkina

Right now I am sitting on a train on my way from the Ethnography with a Twist conference – a conference completely dedicated to ethnography as a method, with no disciplinary boundaries. What would one expect from an ethnographic conference, even one “with a twist”? A lot of situatedness, thick descriptions, descriptive analyses and so on, of course. It was all there; one cannot avoid it. Surprisingly, I also bumped into the opposite tendency: positivist-style generalizations made using ethnographic and qualitative methods – for instance, an attempt to study a phenomenon at the workplace while completely eliminating the context, even the type of work under study, or expert interviews where no specific field of the experts was given.

Generalization in scientific inquiry means drawing broad conclusions from particular scientific results. It is one of the thorny issues in the “qualitative social sciences”, especially for situated approaches (like ethnography), as their goal is to provide a detailed and contextualized understanding of a certain phenomenon. In quantitative approaches in the social sciences, generalization is also an issue with its own traps, but it is (a bit) clearer.

This is how generalization happens in the qualitative paradigm

Various models of and methods for generalization in qualitative studies exist. So why are there still attempts to apply quantitative models to generalizing the results of qualitative inquiry? Should we blame the existing stereotype that quantitative methods are more “scientific”? Or is it dissatisfaction with the existing methods of generalization in qualitative approaches?

The dialectical understanding of generalization (which I discussed in my dissertation, following the work of my colleagues) distinguishes two types of generalization in science. The first, suited to the quantitative paradigm, is abstract-empirical generalization. It is useful for establishing cause-effect relationships when the relationships between variables and factors are stable. The other, suited to the qualitative paradigm, is theoretical-genetic generalization. It focuses on revealing the roots of a phenomenon and its functional relationships. The aim is to apply a new principle in a different context.

This is what ethnographic research should be after: revealing the inner workings of various phenomena, then expanding and applying both the revealed principles and the principles of how to reveal such inner workings to other fields. Unfortunately, this idea comes with no pre-given method or recipe for generalization. We have to craft it for each study, depending on the field, data, theory and many other factors. The only universal requirement for this type of generalization is research integrity – we need to be open about our research choices and research process. As simple as that.

Rage against the machine

By Liubov Vetoshkina

Today I watched the second-season finale of HBO’s Westworld, one of my personal favorite series. It tells the story of a future American Old West theme park populated by androids that are programmed to fulfill all human desires. Apart from being a truly stunning series with a fascinating plot, it touches upon philosophical and ethical questions regarding technology. One of the issues is not just the possible changes and threats technologies may bring us, but how we, humans, should treat new, “human-like” technologies, like AI and robots.

Westworld may lack scientific feasibility on the issues of consciousness, but it raises important ethical and philosophical questions. The plot, visuals and actors are also stunning. Picture source: https://www.hbo.com/westworld

Recurring trends and questions can be found in discussions of new technologies – not only modern ones, like AI or robotics, but also the atomic bomb and the assembly line. The wheel, I suppose. These trends are often reflected in the plots of movies, games and literature. Generally, I am rather critical of looking at general trends. Putting it simply, things are a little bit different in Silicon Valley and in Krasnogor village in central Russia (the place is real; my aunt lives there). Things are different even for separate activities and communities. Still, general trends provide a certain background for concrete activities connected to technologies, and they need to be addressed and discussed.

One of the recurring moral questions reflected in movies is “how can we harm other people with a certain technology, or by using a certain technology?”. The canonical movie example is the already mentioned atomic bomb – represented, for instance, in Stanley Kubrick’s Dr. Strangelove or: How I Learned to Stop Worrying and Love the Bomb. A less evident example is the moral choice to use Uber or Amazon – companies that have their dark sides and exploit their workers.

Another recurring question is the fear of new technology: “how can machines harm us?” The Terminator series is the classic example, with robots rebelling against humans. Discussions about whether humans at workplaces will be replaced by AI fall into the same category.

But Westworld tackles more complex philosophical and ethical questions concerning humanity: “how should we treat AI and robots?”, “are they equal to humans, and what rights and responsibilities do they have?”. These questions have been present in a variety of recent movies, from Ex Machina to Blade Runner 2049.

It is a question of humans being cruel (like those kicking food delivery robots) and exploiting robots. This issue goes beyond the simple question of the intelligence of machines – are the machines “smart”? It extends to their free will and their nature – their similarity and difference to humans. It is a question of whether robots and AI should have the same rights and responsibilities as humans. Should a self-driving Uber car be put in jail for killing a pedestrian? Or the human overseeing it (if “it” is even the right word)?

The solutions and answers should be discussed now, before it is too late – as in Westworld (spoiler alert!), where the android hosts, subjects of constant abuse by humans, rebel and kill almost everyone in the amusement park. Humankind is creating something new and exciting; the problem is to avoid abuse – from all sides.