The Multisensory Signals and Meanings research group was formed in 2010 by a community of researchers from different fields (vision, hearing, language, mathematics, phonetics, and motor control) at the Institute of Behavioural Sciences at the University of Helsinki. Its goal is to advance the understanding of human communication and interaction mediated via vision and audition. We study these topics in vision, hearing and speech with a multidisciplinary approach, using methods of psychophysics, psycholinguistics, brain imaging and computational modeling in cooperation with the Visual Science Group. Research topics in vision include neural and perceptual interactions at early and intermediate processing levels of the visual system, planning and control of goal-directed hand movements based on visual information, and memory of visual features and shapes. Research topics in hearing, speech and language include the role of prosody, sentence structure and reference in spoken language processing, the interaction of linguistic and visual information in discourse comprehension, mathematical modeling of hearing, and modeling of speech production through high-quality naturalistic speech synthesis. In the future, the emphasis will shift to studying the interaction between sensory and motor systems in the extraction of meaning, thus merging these previously separate research fields.

Doctoral students participate in the activities of the research community, e.g., seminars and dissemination of the research, as full members of the group. The multidisciplinary nature of the research means that students acquire knowledge and skills in areas that have traditionally been distinct, ranging from signal processing and programming to understanding mental representations. Research training places a further emphasis on technical skills, so that having completed their studies, students are competent to carry out all phases of research independently, from setting up the laboratory and designing and conducting experiments to scientific publishing.

The research community combines the expertise of vision, hearing and language researchers to study information processing at different levels of human sensory and cognitive systems through experimentation and modeling. The aim is to unravel how meaning emerges from the interplay between visual, auditory and motor signals in a multisensory environment.

The people associated with the project are listed in the Research Database Tuhat.
