Core work of the lab, PI Cowley

To study the neural correlates of High Performance Cognition (HPC), we must be able to track its precise temporal profile in a correlated set of observables, from task beginning to end. To achieve this, I aim to build an integrated framework of observations, including: (1) behavior (decision actions & context), (2) psychology (temperamental & physiological proneness), (3) neurophysiology (neural responses), and, importantly, (4) phenomenology. I have long argued that play is a great model for studying HPC, so I aim to deploy the framework in engaging gamified computer simulations.

This is the grand plan; in the meantime you’ll find us working on its component parts: psychophysiological methods, player modelling, and basic research on attention.

See our Research Topic on Frontiers!


Upcoming project, PI Cowley

Artificial Intelligence (AI) will soon be influential throughout technological society, from transport to healthcare, civil engineering to defense. However, it is often very hard to understand how and why AIs make decisions – their algorithms are black boxes. A major hurdle for understanding AI decision making is that it differs fundamentally from human cognition. To address this problem, we propose using online multiplayer games as an effective way to model and study AI decision making. In games, both human and AI agents are constrained by the same set of rules. How they solve the problems set by the game constitutes their ‘player personality’. AIPerCog combines computer science methodologies, cognitive science, and humanistic game culture studies, to examine patterns of play and distributions of player personality in massive online datasets from games like StarCraft II and Hearthstone.

Thus, we aim to study how AI learns to interact with and against humans, and to make AI decisions transparent.

Figure: Quantitative and qualitative ways to derive play patterns. Top row: machine learning applied to games. (a) The game state is converted from the game description language’s internal data structures into a vector s; (b) the state vector is given as an input to the LSTM neural network; and (c) the neural network produces both an estimate of the expected outcome of the game and a probability distribution for the possible actions. Bottom row: Behavlet play modelling. (d) A domain expert observes characteristic patterns of play and associated game utilities; (e) then identifies potential rule-based encodings of the observed pattern and labels the pattern with a playing style, e.g. aggressive, cautious; (f) the encoding is expressed as an action-sequence in terms of raw game-log data.
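The top row of the figure can be illustrated schematically. The sketch below, in plain Python, replaces the LSTM with a single linear layer for brevity; the feature names, weights, and action set are hypothetical placeholders, not the actual trained model.

```python
import math
import random

random.seed(0)

ACTIONS = ["attack", "defend", "expand"]  # hypothetical action set

def softmax(logits):
    """Convert raw scores into a probability distribution."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def encode_state(game_state):
    """(a) Convert the game's internal data structures into a vector s.
    Here game_state is a hypothetical dict of scalar features."""
    return [game_state["own_units"], game_state["enemy_units"], game_state["resources"]]

def policy_value(s, W_pi, W_v):
    """(b)-(c) Map the state vector to (value estimate, action probabilities).
    A real implementation would run an LSTM over the state sequence;
    a single linear layer stands in for it here."""
    logits = [sum(w * x for w, x in zip(row, s)) for row in W_pi]
    value = math.tanh(sum(w * x for w, x in zip(W_v, s)))
    return value, softmax(logits)

# Random placeholder weights; an actual network would be trained from play data.
W_pi = [[random.uniform(-1, 1) for _ in range(3)] for _ in ACTIONS]
W_v = [random.uniform(-1, 1) for _ in range(3)]

s = encode_state({"own_units": 0.6, "enemy_units": 0.3, "resources": 0.8})
value, probs = policy_value(s, W_pi, W_v)
print(value, dict(zip(ACTIONS, probs)))
```

The two heads share the state encoding: one scalar outcome estimate bounded by tanh, and one softmax distribution over the legal actions.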
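The bottom row's Behavlet workflow can likewise be sketched. Below, a hypothetical rule encodes an 'aggressive' playing style as an ordered action subsequence matched against a raw game log; the event names and rules are illustrative assumptions, not actual Behavlets from the literature.

```python
# Behavlet-style encodings: each named playing style is expressed as an
# ordered action subsequence to be matched against raw game-log events.
# Event names and rules here are hypothetical illustrations.
BEHAVLETS = {
    "aggressive": ["sight_enemy", "move_toward_enemy", "attack"],
    "cautious": ["sight_enemy", "retreat"],
}

def matches(pattern, log):
    """(f) True if the pattern occurs in the log as an ordered
    (not necessarily contiguous) subsequence of events."""
    it = iter(log)
    return all(event in it for event in pattern)

def label_playstyle(log):
    """(d)-(e) Label a game log with every playing style whose
    rule-based encoding it satisfies."""
    return [style for style, pattern in BEHAVLETS.items() if matches(pattern, log)]

game_log = ["gather", "sight_enemy", "move_toward_enemy", "gather", "attack"]
print(label_playstyle(game_log))  # → ['aggressive']
```

In practice the domain expert supplies the patterns and style labels; the matching step then scales the expert's judgement to massive game-log datasets.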

UPP Performance

Collaboration: Academy of Finland project, PIs Otto Lappi & Lauri Oksama
Humans are very efficient in many complex dynamic tasks. Apparently easy and simple activities like picking up a cup, walking in a crowd or driving are in fact underpinned by sophisticated information processing, of which we are not usually aware. This is starkly revealed in artificial intelligence and robotics, where humans still vastly outperform computers on such sensorimotor tasks.
This suggests that the techniques for organizing perception and action, discovered by the human brain during development and evolution, could be highly valuable for future AI. We develop a unified computational model of visual target interception and avoidance – a core human sensorimotor capacity. Applications of such knowledge include more life-like robotics, autonomous vehicles, aerospace pilot and driver training, and sports psychology. The fundamental interest lies in revealing the ways our brain allows us to interact with complex dynamic situations so efficiently and effortlessly.
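The flavour of target interception can be conveyed with a toy simulation. The sketch below, a minimal predictive-pursuit heuristic assuming a constant-velocity target, is purely illustrative; all parameters are arbitrary and it is not the project's actual model.

```python
# Toy target interception: at each step the agent steers toward the
# target's PREDICTED future position rather than its current one
# (a classic interception heuristic; illustrative only).
def simulate(agent, target, target_vel, agent_speed=2.0, lead=5, steps=60):
    for t in range(steps):
        # Predict where the target will be `lead` steps from now.
        aim = (target[0] + lead * target_vel[0], target[1] + lead * target_vel[1])
        dx, dy = aim[0] - agent[0], aim[1] - agent[1]
        dist = (dx * dx + dy * dy) ** 0.5 or 1.0
        # Move the agent at fixed speed toward the aim point.
        agent = (agent[0] + agent_speed * dx / dist, agent[1] + agent_speed * dy / dist)
        # Advance the target along its constant velocity.
        target = (target[0] + target_vel[0], target[1] + target_vel[1])
        gap = ((target[0] - agent[0]) ** 2 + (target[1] - agent[1]) ** 2) ** 0.5
        if gap < 1.0:
            return t  # intercepted at step t
    return None  # never caught up

step = simulate(agent=(0.0, 0.0), target=(20.0, 10.0), target_vel=(1.0, 0.0))
print(step)
```

Aiming at the predicted position rather than chasing the current one is what makes the faster agent's path cut across the target's trajectory instead of trailing behind it.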