A Caltech Library Service

The Neural Mechanisms of Value Construction


Cross, Logan Matthew (2022) The Neural Mechanisms of Value Construction. Dissertation (Ph.D.), California Institute of Technology. doi:10.7907/x5kk-8h27.


Research in decision neuroscience has characterized how the brain makes decisions by assessing the expected utility of each option in an abstract value space that affords the ability to compare dissimilar options. Experiments at multiple levels of analysis in multiple species have localized the ventromedial prefrontal cortex (vmPFC) and nearby orbitofrontal cortex (OFC) as the main nexus where this abstract value space is represented. However, much less is known about how this value code is constructed by the brain in the first place. By combining behavioral modeling with cutting-edge tools for analyzing functional magnetic resonance imaging (fMRI) data, this thesis proposes that the brain decomposes stimuli into their constituent attributes and integrates across them to construct value. These stimulus features embody appetitive or aversive properties that are either learned from experience or evaluated online by comparing them to previously experienced stimuli with similar features. Stimulus features are processed by cortical areas specialized for the perception of a particular stimulus type and then integrated into a value signal in vmPFC/OFC.

The project presented in Chapter 2 examines how food items are evaluated by their constituent attributes, namely their nutrient makeup. A linear attribute integration model succinctly captures how subjective values can be computed from a weighted combination of the constituent nutritive attributes of the food. Multivariate analysis methods revealed that these nutrient attributes are represented in the lateral OFC, while food value is encoded both in medial and lateral OFC. Connectivity between lateral and medial OFC allows this nutrient attribute information to be integrated into a value representation in medial OFC.
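The linear attribute integration model described above can be sketched in a few lines: subjective value is a weighted sum of nutrient attributes, and the attribute weights can be recovered from behavioral ratings by least squares. The attribute count, values, and weights below are simulated for illustration, not the fitted quantities from the thesis.

```python
import numpy as np

# Simulated nutrient attributes for 50 food items across 5 hypothetical
# dimensions (e.g., fat, carbohydrate, protein, sodium, vitamin content).
rng = np.random.default_rng(0)
nutrients = rng.uniform(0, 1, size=(50, 5))

# Assumed ground-truth attribute weights, used only to generate data.
true_weights = np.array([0.8, 0.5, 0.9, -0.3, 0.4])
subjective_value = nutrients @ true_weights + rng.normal(0, 0.05, 50)

# A linear attribute integration model recovers the weights by
# ordinary least squares regression of value on attributes.
w_hat, *_ = np.linalg.lstsq(nutrients, subjective_value, rcond=None)
print(np.round(w_hat, 2))
```

The same weighted-combination logic extends naturally to any stimulus whose attributes can be quantified, which is the premise the later chapters build on.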

In Chapter 3, I show that this value construction process can operate over higher-level abstractions when the context requires bundles of items to be valued, rather than isolated items. When valuing bundles of items, the constituent items themselves become the features, and their values are integrated with a subadditive function to construct the value of the bundle. Multiple subregions of PFC including but not limited to vmPFC compute the value of a bundle with the same value code used to evaluate individual items, suggesting that these general value regions contextually adapt within this hierarchy. When valuing bundles and single items in interleaved trials, the value code rapidly switches between levels in this hierarchy by normalizing to the distribution of values in the current context rather than representing all options on an absolute scale.
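The two computations above can be illustrated with a minimal sketch: a subadditive integration rule for bundles, and a normalization of values to the distribution in the current context. The weighted-sum form of subadditivity (with an assumed discount of 0.8) and the use of range normalization are illustrative choices, not the specific functions fitted in the thesis.

```python
import numpy as np

def bundle_value(item_values, w=0.8):
    """Subadditive integration: the bundle is worth less than the sum
    of its parts. The weighted-sum form and w=0.8 are assumptions."""
    return w * np.sum(item_values)

def range_normalize(values):
    """Rescale values to the distribution in the current context.
    Range normalization is one common scheme; the thesis's exact
    normalization may differ."""
    values = np.asarray(values, dtype=float)
    lo, hi = values.min(), values.max()
    return (values - lo) / (hi - lo)

single_items = np.array([2.0, 5.0, 8.0])
bundles = np.array([bundle_value([2.0, 5.0]),
                    bundle_value([5.0, 8.0])])

# Each trial type is normalized to its own value distribution, so
# single items and bundles occupy the same relative scale despite
# bundles having larger absolute values.
print(range_normalize(single_items))
print(range_normalize(bundles))
```

Because each context is normalized separately, the same value code can switch between hierarchy levels trial by trial without representing options on an absolute scale.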

Although the attribute integration model of value construction characterizes human behavior on simple decision-making tasks, it is unclear how it can scale up to environments of real-world complexity. Taking inspiration from modern advances in artificial intelligence, and deep reinforcement learning in particular, in Chapter 4 I outline how connectionist models generalize the attribute integration model to naturalistic tasks by decomposing sensory input into a high dimensional set of nonlinear features that are encoded with hierarchical and distributed processing. Participants freely played Atari video games during fMRI scanning, and a deep reinforcement learning algorithm trained on the games was used as an end-to-end model for how humans evaluate actions in these high-dimensional tasks. The features represented in the intermediate layers of the artificial neural network were found to also be encoded in a distributed fashion throughout the cortex, specifically in the dorsal visual stream and posterior parietal cortex. These features emerge from nonlinear transformations of the sensory input that connect perception to action and reward. In contrast to the stimulus attributes used to evaluate the stimuli presented in the preceding chapters, these features become highly complex and inscrutable as they are driven by the statistical properties of high-dimensional data. However, they do not solely reflect a set of features that can be identified by applying common dimensionality reduction techniques to the input, as task-irrelevant sensory features are stripped away and task-relevant high-level features are magnified.

Item Type: Thesis (Dissertation (Ph.D.))
Subject Keywords: computational neuroscience; decision neuroscience; value; reward; decision making; learning; reinforcement learning; deep reinforcement learning
Degree Grantor: California Institute of Technology
Division: Biology and Biological Engineering
Major Option: Computation and Neural Systems
Thesis Availability: Public (worldwide access)
Research Advisor(s):
  • O'Doherty, John P.
Thesis Committee:
  • Adolphs, Ralph (chair)
  • Yue, Yisong
  • Mobbs, Dean
  • O'Doherty, John P.
Defense Date: 30 March 2022
Non-Caltech Author Email: locross93 (AT)
Funding Agency: NIH; Grant Number: P50 MH094258
Record Number: CaltechTHESIS:04012022-011925533
Persistent URL:
Related URLs:
  • Adapted for Chapter 4: Using deep reinforcement learning to reveal how the brain encodes abstract state-space representations in high-dimensional environments
  • Adapted for Chapter 2: Elucidating the underlying components of food valuation in the human orbitofrontal cortex
Cross, Logan Matthew (ORCID: 0000-0002-5248-9499)
Default Usage Policy:No commercial reproduction, distribution, display or performance rights in this work are provided.
ID Code: 14537
Deposited By: Logan Cross
Deposited On: 05 Apr 2022 19:23
Last Modified: 12 Apr 2022 19:11

Thesis Files

PDF - Final Version (see Usage Policy)

