The organizing principle of this thesis is that human emotion understanding reflects a model-based solution to a large class of ill-posed inverse problems. To interpret someone's expression, or to predict how that person would react in a future situation, observers reason over a logically and causally structured intuitive theory of other minds. For this work, I chose a domain that is perceptually and socially rich, yet highly constrained: a real-life, high-stakes, televised one-shot prisoner's dilemma.
In the first set of studies, I show that forward predictions play a critical role in emotion understanding: intuitive hypotheses about what someone is likely to feel guide how observers interpret and reason about expressive behavior. By simulating human causal reasoning as abductive inference over latent emotion representations, a parameter-free Bayesian model captures surprising patterns of social cognition.
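Schematically, and only as a sketch of the kind of inference described here (the notation is illustrative, not the thesis's own formulation), the observer's abductive interpretation of an expression combines a forward prediction from the situation with the likelihood of the observed behavior under each candidate emotion:
\[
P(e \mid \text{expression}, \text{situation}) \;\propto\; P(\text{expression} \mid e)\; P(e \mid \text{situation}),
\]
where \(e\) is a latent emotion representation and \(P(e \mid \text{situation})\) plays the role of the forward prediction, i.e., the intuitive hypothesis about what someone in that situation is likely to feel.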
In the second set of studies, I formalize emotion prediction as a probabilistic generative model. Mental contents inferred by inverting an intuitive theory of mind form the basis for inferring how others will evaluate, or 'appraise', a situation. The Inferred Appraisals model extends inverse planning to simulate how observers infer others' reactions, in terms of utilities, prediction errors, and counterfactuals defined over rich social preferences for fairness and reputation. I show that the joint posterior distribution over inferred appraisals provides a powerful method for discovering the latent structure of the human intuitive theory of emotions.
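A minimal sketch of this kind of inverse-planning formulation (with illustrative variable names, not the model's actual specification) treats appraisals as functions of latent mental contents that must themselves be inferred from behavior:
\[
P(a \mid \text{action}, \text{outcome}) \;\propto\; \sum_{m} P(a \mid m, \text{outcome})\; P(\text{action} \mid m)\; P(m),
\]
where \(m\) ranges over latent mental contents (e.g., utilities over material payoff, fairness, and reputation), \(P(\text{action} \mid m)\) is the inverse-planning likelihood of the observed choice, and the appraisals \(a\) (e.g., prediction errors and counterfactual comparisons) are computed from \(m\) together with the realized outcome. The joint posterior over \(a\) is the object whose structure the studies examine.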
In the third set of studies, I build a stimulus-computable model of emotion understanding. This work emphasizes the importance of testing whether computational models can use emotion-relevant information in the service of social cognition. I argue that building computer systems that approach human-level emotional intelligence will require generative models, in which inferred appraisals function as latent causal explanations linking behavior, mental contents, and world states.
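The claim can be pictured, again only as a sketch and not as a specification of the implemented system, as a generative factorization in which appraisals mediate between the world and observable behavior,
\[
P(\text{behavior}, a, m \mid \text{world}) \;=\; P(\text{behavior} \mid a)\; P(a \mid m, \text{world})\; P(m \mid \text{world}),
\]
so that a stimulus-computable observer inverts this chain: from perceived behavior and context, it infers the latent appraisals \(a\) and mental contents \(m\) that best explain what was observed.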