CaltechTHESIS
  A Caltech Library Service

Neural and computational representations of decision variables

Citation

McNamee, Daniel Ciarán (2015) Neural and computational representations of decision variables. Dissertation (Ph.D.), California Institute of Technology. doi:10.7907/Z96971HZ. http://resolver.caltech.edu/CaltechTHESIS:02242015-022259327

Abstract

These studies explore how, where, and when variables critical to decision-making are represented in the brain. In order to produce a decision, humans must first determine the relevant stimuli, actions, and possible outcomes before applying an algorithm that selects an action from those available. When choosing amongst alternative stimuli, the framework of value-based decision-making proposes that values are assigned to the stimuli and then compared in an abstract “value space” in order to produce a decision. Despite much progress, in particular the pinpointing of ventromedial prefrontal cortex (vmPFC) as a region that encodes value, many basic questions remain. In Chapter 2, I show that distributed BOLD signaling in vmPFC represents the value of stimuli under consideration in a manner that is independent of stimulus type. This confirms a key tenet of value-based decision-making: that value is represented in the abstract. However, I also show that stimulus-dependent value representations are present in the brain during decision-making, and I suggest a potential neural pathway for stimulus-to-value transformations that integrates these two results.
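The logic of testing for a stimulus-independent value code can be illustrated with a cross-category decoding sketch: train a decoder on the neural patterns evoked by one stimulus category and test it on another. If decoding transfers, the value code generalizes across stimulus types. The simulation below is a hypothetical toy (simulated "voxels", arbitrary parameters), not the thesis's actual MVPA pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_voxels = 200, 50

# Hypothetical simulation: each voxel's signal scales linearly with value
# (a shared, category-independent code), plus a category-specific baseline
# shift and Gaussian noise.
values = rng.uniform(0, 10, n_trials)
weights = rng.normal(0, 1, n_voxels)        # shared value code across voxels
category = rng.integers(0, 2, n_trials)     # 0 = one stimulus type, 1 = another
offsets = rng.normal(0, 1, (2, n_voxels))   # stimulus-dependent component
X = (np.outer(values, weights)
     + offsets[category]
     + rng.normal(0, 2, (n_trials, n_voxels)))

# Cross-category decoding: fit a ridge-regression decoder on category 0,
# then test whether it predicts value on held-out category-1 trials.
train, test = category == 0, category == 1
lam = 1.0
A = X[train].T @ X[train] + lam * np.eye(n_voxels)
w = np.linalg.solve(A, X[train].T @ values[train])
pred = X[test] @ w
r = np.corrcoef(pred, values[test])[0, 1]
print(f"cross-category decoding r = {r:.2f}")
```

Because the simulated value code is shared across categories, the decoder transfers and the correlation is high; if value were coded only in a stimulus-dependent way, cross-category decoding would fail.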

More broadly, there is both neural and behavioral evidence that two distinct control systems are at work during action selection: the “goal-directed” system, which selects actions based on an internal model of the environment, and the “habitual” system, which generates responses based on antecedent stimuli only. Computational characterizations of these two systems imply that they have different informational requirements in terms of input stimuli, actions, and possible outcomes. Associative learning theory predicts that the habitual system should utilize stimulus and action information only, while goal-directed behavior additionally requires that outcomes be processed. In Chapter 3, I test whether areas of the brain hypothesized to be involved in habitual versus goal-directed control represent the corresponding theorized variables.
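The differing informational requirements of the two systems can be made concrete in a minimal sketch, assuming standard textbook formulations (the class names and parameters here are illustrative, not from the thesis): a model-free controller caches only stimulus-action values, whereas a model-based controller additionally represents outcome identities and their current values.

```python
class HabitualController:
    """Model-free (e.g. Q-learning): caches stimulus-action values only."""
    def __init__(self, n_states, n_actions, alpha=0.1):
        self.Q = [[0.0] * n_actions for _ in range(n_states)]
        self.alpha = alpha

    def update(self, s, a, reward):
        # Learns from reward directly; never represents outcome identity.
        self.Q[s][a] += self.alpha * (reward - self.Q[s][a])


class GoalDirectedController:
    """Model-based: learns which outcome each (stimulus, action) pair
    yields, plus each outcome's current value, combined at choice time."""
    def __init__(self):
        self.transition = {}     # (s, a) -> outcome identity
        self.outcome_value = {}  # outcome -> value

    def update(self, s, a, outcome, value):
        self.transition[(s, a)] = outcome
        self.outcome_value[outcome] = value

    def q(self, s, a):
        return self.outcome_value[self.transition[(s, a)]]


# Classic dissociation: outcome devaluation. After extensive training,
# devalue the outcome and see which controller re-values the action.
habitual = HabitualController(n_states=1, n_actions=1)
for _ in range(100):
    habitual.update(0, 0, 1.0)      # cached Q slowly approaches 1.0

goal = GoalDirectedController()
goal.update(0, 0, "food", 1.0)

goal.outcome_value["food"] = 0.0    # devaluation
# The goal-directed controller re-values the action immediately;
# the habitual controller's cached Q is untouched until re-training.
print(habitual.Q[0][0], goal.q(0, 0))
```

This captures why the two systems are predicted to process different variables: only the model-based controller ever needs an outcome representation.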

The question of whether one or both of these neural systems drives Pavlovian conditioning is less well studied. Chapter 4 describes an experiment in which subjects were scanned while engaged in a Pavlovian task with a simple but non-trivial structure. After comparing a variety of model-based and model-free learning algorithms (thought to underpin goal-directed and habitual decision-making, respectively), I found that subjects’ reaction times were better explained by a model-based system. In addition, neural signaling of precision, a variable derived from a representation of a world model, was found in the amygdala. These data indicate that the influence of model-based representations of the environment can extend even to the most basic learning processes.
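The model-comparison logic used in this kind of analysis can be sketched as follows: fit each candidate model's trial-by-trial prediction to the behavioral data and score the fits with an information criterion such as BIC. Everything here is a simulated toy (fabricated regressors and reaction times, a simple linear-Gaussian fit), intended only to show the comparison machinery, not the thesis's actual models.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical regressors: one from a model-based learner (e.g. precision),
# one from a model-free learner (e.g. a cached value). Simulated reaction
# times are generated to track the model-based signal.
n = 300
mb_pred = rng.normal(0, 1, n)
mf_pred = rng.normal(0, 1, n)
rt = 0.5 * mb_pred + rng.normal(0, 0.5, n)

def bic(x, y, k=2):
    """Fit y ~ a*x + b with Gaussian residuals; lower BIC = better model."""
    a, b = np.polyfit(x, y, 1)
    resid = y - (a * x + b)
    sigma2 = resid.var()
    loglik = -0.5 * len(y) * (np.log(2 * np.pi * sigma2) + 1)
    return k * np.log(len(y)) - 2 * loglik

bic_mb = bic(mb_pred, rt)
bic_mf = bic(mf_pred, rt)
print(f"BIC model-based: {bic_mb:.1f}, model-free: {bic_mf:.1f}")
```

Because the simulated reaction times were driven by the model-based regressor, its BIC comes out lower, mirroring the form of the chapter's conclusion.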

Knowledge of the state of hidden variables in an environment is required for optimal inference regarding its abstract decision structure and can therefore be crucial to decision-making in a wide range of situations. Inferring the state of an abstract variable requires the generation and manipulation of an internal representation of beliefs over the values of the hidden variable. In Chapter 5, I describe behavioral and neural results regarding the learning strategies employed by human subjects in a hierarchical state-estimation task. In particular, a comprehensive model-fitting and comparison process pointed to the use of “belief thresholding”: subjects tended to eliminate low-probability hypotheses regarding the state of the environment from their internal model and ceased to update the corresponding variables. Thus, in concert with incremental Bayesian learning, humans explicitly manipulate their internal model of the generative process during hierarchical inference, consistent with a serial hypothesis-testing strategy.
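The belief-thresholding idea can be sketched as incremental Bayesian updating over a set of hypotheses, with any hypothesis whose posterior falls below a cutoff pruned from the model and never updated again. The threshold value, likelihoods, and hypothesis count below are arbitrary illustrations, not fitted parameters from the thesis.

```python
import numpy as np

def update_beliefs(beliefs, likelihoods, threshold=0.05):
    """One step of Bayesian updating with belief thresholding."""
    posterior = beliefs * likelihoods       # pruned (zero) hypotheses stay zero
    posterior /= posterior.sum()            # incremental Bayes rule
    posterior[posterior < threshold] = 0.0  # eliminate low-probability hypotheses
    return posterior / posterior.sum()      # renormalise over survivors

# Uniform prior over four hypothetical world states, then two observations
# whose likelihoods increasingly favour the first state.
beliefs = np.full(4, 0.25)
for lik in ([0.9, 0.4, 0.3, 0.1], [0.8, 0.5, 0.2, 0.1]):
    beliefs = update_beliefs(beliefs, np.array(lik))
print(beliefs)  # mass concentrates on state 0; the weakest hypothesis is zeroed
```

Once a hypothesis is zeroed it can never regain mass under pure multiplicative updating, which is what distinguishes this strategy from exact Bayesian inference and gives it its serial, hypothesis-testing character.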

Item Type:Thesis (Dissertation (Ph.D.))
Subject Keywords:decision-making; learning; representation; fmri; mvpa; vmpfc; striatum
Degree Grantor:California Institute of Technology
Division:Engineering and Applied Science
Major Option:Computation and Neural Systems
Thesis Availability:Public (worldwide access)
Research Advisor(s):
  • O'Doherty, John P.
Thesis Committee:
  • Adolphs, Ralph (chair)
  • Rangel, Antonio
  • Andersen, Richard A.
  • Perona, Pietro
  • O'Doherty, John P.
Defense Date:14 November 2014
Funders:
  • National Institutes of Health (NIH): DA033077-01
  • National Science Foundation: 1207573
Record Number:CaltechTHESIS:02242015-022259327
Persistent URL:http://resolver.caltech.edu/CaltechTHESIS:02242015-022259327
DOI:10.7907/Z96971HZ
Related URLs:
  • http://dx.doi.org/10.1016/j.cobeha.2014.10.004 (DOI): Article adapted for subsections of ch. 1
  • http://dx.doi.org/10.1038/nn.3337 (DOI): Article adapted for ch. 2
  • http://dx.doi.org/10.1371/journal.pcbi.1002918 (DOI): Article adapted for ch. 4
ORCID:
  • McNamee, Daniel Ciarán: 0000-0001-9928-4960
Default Usage Policy:No commercial reproduction, distribution, display or performance rights in this work are provided.
ID Code:8773
Collection:CaltechTHESIS
Deposited By: Daniel McNamee
Deposited On:26 Feb 2015 17:00
Last Modified:12 Apr 2016 17:43

Thesis Files

PDF - Final Version (4Mb). See Usage Policy.
