Core work of the lab, PI Cowley
To study the neural correlates of High Performance Cognition (HPC), we must be able to track its precise temporal profile in a correlated set of observables, from task beginning to end. To achieve this, I aim to build an integrated framework of observations, including: (1) behavior (decision actions & context); (2) psychology (temperamental & physiological proneness); (3) neurophysiology (neural responses); and, importantly, (4) phenomenology. I have long argued that play is a great model for studying HPC, so I aim to deploy the framework in engaging gamified computer simulations.
This is the grand plan; in the meantime you'll find us working on the component parts: psychophysiological methods, player modelling, and basic research on attention.
Upcoming project, PI Cowley
Artificial Intelligence (AI) will soon be influential throughout technological society, from transport to healthcare, civil engineering to defense. However, it is often very hard to understand how and why an AI makes its decisions – its algorithms are black boxes. A major hurdle for understanding AI decision making is pinning down how it differs from human cognition. To address this problem, we propose online multiplayer games as an effective setting in which to model and study AI decision making. In games, both human and AI agents are constrained by the same set of rules, and how they solve the problems set by the game constitutes their ‘player personality’. AIPerCog combines computer science methodologies, cognitive science, and humanistic game culture studies to examine patterns of play and distributions of play personality in massive online datasets from games such as StarCraft II and Hearthstone.
Thus, we aim to study how AI learns to interact with and against humans, and to make AI decisions transparent.
Figure: Quantitative and qualitative ways to derive play patterns. Top row: machine learning applied to games. (a) The game state is converted from the game description language’s internal data structures into a vector s; (b) the state vector is given as an input to the LSTM neural network; and (c) the neural network produces both an estimate of the expected outcome of the game and a probability distribution over the possible actions. Bottom row: Behavlet play modelling. (d) A domain expert observes characteristic patterns of play and their associated game utilities; (e) then identifies potential rule-based encodings of the observed pattern and labels it with a playing style, e.g. ‘aggressive’ or ‘cautious’; (f) the encoding is expressed as an action sequence in terms of raw game-log data.
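The top-row pipeline (a–c) can be sketched as a minimal, untrained stand-in. Everything here is an illustrative assumption – the dimensions, the randomly initialised weights, and the single value-plus-policy head layout – not the network actually used in the project:

```python
import numpy as np

rng = np.random.default_rng(0)

STATE_DIM, HIDDEN, N_ACTIONS = 8, 16, 4  # illustrative sizes, not the real ones

# Random weights stand in for a trained network.
W = rng.normal(0, 0.1, (4 * HIDDEN, STATE_DIM + HIDDEN))  # all four LSTM gates
b = np.zeros(4 * HIDDEN)
W_value = rng.normal(0, 0.1, HIDDEN)              # value head (expected outcome)
W_policy = rng.normal(0, 0.1, (N_ACTIONS, HIDDEN))  # policy head (action dist.)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(s, h, c):
    """One LSTM step over an encoded game-state vector s (panel b)."""
    z = W @ np.concatenate([s, h]) + b
    i, f, g, o = np.split(z, 4)                   # input, forget, cell, output gates
    c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
    h = sigmoid(o) * np.tanh(c)
    return h, c

def evaluate(states):
    """Run a sequence of state vectors through the LSTM; emit the expected
    outcome and a probability distribution over actions (panel c)."""
    h, c = np.zeros(HIDDEN), np.zeros(HIDDEN)
    for s in states:
        h, c = lstm_step(s, h, c)
    value = np.tanh(W_value @ h)                  # outcome estimate in [-1, 1]
    logits = W_policy @ h
    policy = np.exp(logits - logits.max())
    policy /= policy.sum()                        # softmax: sums to 1
    return value, policy

# Panel (a): a short sequence of (here random) encoded game states.
states = rng.normal(size=(5, STATE_DIM))
value, policy = evaluate(states)
print(float(value), policy)
```

In a trained system the state encoding would come from the game description language's data structures and the heads would be fit to game outcomes, but the data flow – state vector in, value and action distribution out – is as in the figure.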
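The bottom-row pipeline (d–f) can likewise be sketched as a toy rule-based detector. The event vocabulary, the window size, and the "repeated attacks after first contact" rule are hypothetical examples of a Behavlet encoding, not one taken from the actual modelling work:

```python
def behavlet_aggression(log, window=3):
    """Toy Behavlet (panels d-f): count episodes where the player attacks
    at least twice within a short window of engaging an opponent. A high
    count would label the playing style 'aggressive'."""
    hits = 0
    for i, event in enumerate(log):
        if event == "engage":
            # Panel (f): the encoding is an action sequence over raw log events.
            following = log[i + 1 : i + 1 + window]
            if following.count("attack") >= 2:
                hits += 1
    return hits

# A hypothetical raw game log: only the first engagement qualifies.
game_log = ["move", "engage", "attack", "attack", "move",
            "engage", "retreat", "move", "attack"]
print(behavlet_aggression(game_log))  # → 1
```

The point of the encoding is that it is transparent by construction: the rule a domain expert wrote down (panel e) is exactly what runs over the log, so any detected pattern can be traced back to specific logged actions.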