Research groups

TRU (Traffic Research Unit)

https://tuhat.helsinki.fi/portal/en/organisations-researchgroups/tru-traffic-researc(4e6f4fbf-e6c2-4583-841f-f97512cb3c31).html

The Traffic Research Unit of the University of Helsinki was founded in 1971.

With a track record of more than 40 years of experimental work in challenging naturalistic settings, the unit's mission is to understand the human mind, brain and behaviour “on the move”. We study human behaviour across the entire life span and at all levels of skill and learning, from novice to expert.

Research methods combine controlled laboratory and simulator experiments, naturalistic field experiments, and cognitive modelling with survey methodologies and qualitative methods. The group is especially strong in rigorous computational analysis and modelling methods, and publishes in high-quality international peer-reviewed journals. TRU has a strict Open Source Code, Open Data & Open Access policy.

The research has applications in basic and advanced driver education, vehicle engineering, the design of autonomous vehicles, driver assessment and licensing, road and traffic design, and safety efforts.

Philosophy of Intelligent Cognitive Systems (PICS)

Philosophy of Intelligent Cognitive Systems (PICS) studies the philosophical and theoretical aspects of contemporary cognitive science and artificial intelligence (AI) research. Our work centres on computational explanations and models; the nature and role of cognitive representations; the explainability of AI; cognitive dynamics in human-machine interaction; and machine learning, deep learning, dissection, and manipulation methods.

Moralities of Intelligent Machines (MOIM)

http://www.moim.fi/

Moralities of Intelligent Machines is a research group studying the moral psychology of robotics and artificial intelligence (AI).

In modern societies, autonomous industrial machines, self-driving cars and healthcare robots are making a growing number of decisions with moral ramifications. The moral “code of conduct” of these AIs must be programmed and implemented by humans. However, there are no agreed-upon rules to guide the development of moral robotics; currently, this development rests almost solely on the shoulders of large companies, with minimal input from the scientific community or the general public.

MOIM is particularly interested in how humans perceive robots that make moral decisions and in what type of morality humans would ideally like robots to abide by. Currently, MOIM uses an array of tools from experimental social psychology and cognitive science to study human behaviour and perception in situations where robots make moral decisions, such as decisions involving human lives. MOIM also actively participates in societal discussion at both the governmental and public levels.