CaltechTHESIS
  A Caltech Library Service

Essays on learning and econometrics

Citation

Kayaba, Yutaka (2013) Essays on learning and econometrics. Dissertation (Ph.D.), California Institute of Technology. http://resolver.caltech.edu/CaltechTHESIS:09242012-133200460

Abstract

This dissertation consists of two essays on learning under state uncertainty and its econometric applications, in which agents sequentially learn hidden state variables from noisy measurements.

In Chapter 2, "Nonparametric Learning Rules from Bandit Experiments: The Eyes Have It!", which is coauthored with Yingyao Hu and Matthew Shum, we assess, in a model-free manner, subjects' belief dynamics in a two-armed bandit learning experiment. A novel feature of our approach is to supplement the choice and reward data with subjects' eye movements during the experiment to pin down estimates of subjects' beliefs. Estimates show that subjects are more reluctant to "update down" following unsuccessful choices than to "update up" following successful choices. The profits from following the estimated learning and decision rules are smaller (by about 25% of typical experimental earnings) than what would be obtained from a fully rational Bayesian learning model, but comparable to the profits from alternative non-Bayesian learning models, including reinforcement learning and a simple "win-stay" choice heuristic.
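As a point of reference for the benchmarks named above, here is a minimal sketch (not the estimated rules from the chapter) of fully Bayesian belief updating for a two-armed Bernoulli bandit, alongside the "win-stay" heuristic. The Beta(1, 1) prior is a hypothetical choice for the example; note the Bayesian update is symmetric in successes and failures, unlike the asymmetric updating the subjects exhibit.

```python
def bayes_update(alpha, beta, reward):
    """Beta-Bernoulli conjugate update for one arm's belief.

    A success (reward=1) increments alpha; a failure increments beta.
    """
    return (alpha + reward, beta + (1 - reward))

def posterior_mean(alpha, beta):
    """Posterior mean of the arm's success probability."""
    return alpha / (alpha + beta)

def win_stay(last_choice, last_reward, n_arms=2):
    """Win-stay heuristic: repeat the last choice after a success,
    otherwise switch to the other arm."""
    return last_choice if last_reward == 1 else (last_choice + 1) % n_arms

# One arm's belief after a success then a failure, from a uniform prior:
a, b = 1, 1                    # Beta(1, 1) uniform prior
a, b = bayes_update(a, b, 1)   # success -> Beta(2, 1), mean 2/3
a, b = bayes_update(a, b, 0)   # failure -> Beta(2, 2), mean 1/2
```

A symmetric Bayesian updater moves its belief by the same magnitude after a success or a failure from the same prior, which is exactly the benchmark against which the estimated asymmetric updating is measured.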

In Chapter 3, I examine optimal learning models for predicting price dynamics under outlier risk. Two kinds of outlier risk in price processes are considered: a process in which outliers occur because the fundamental value has changed, and one in which outliers occur with little fundamental change. In the latter process, outliers arise as observation error, often referred to as price anomalies in behavioral finance. The two optimal learning models are characterized as non-Gaussian Kalman filters, interpreted as Bayesian reinforcement learning, and are solved numerically using sequential Monte Carlo sampling. Several key features are summarized by their learning rates and prediction errors: the learning rate under outlier risk in fundamental value is a monotonically increasing function of the absolute prediction error, while the learning rate under outlier risk in observation noise is a monotonically decreasing function. Interestingly, the uncertainty of learning appears identical between the two models, being a hump-shaped function of the absolute prediction error.
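The "outlier risk in observation noise" case can be sketched with a minimal bootstrap particle filter: a local-level (random-walk) price model whose observation noise is heavy-tailed (Student-t), so large prediction errors are downweighted and the filtered belief barely chases an outlier. All parameter values below (noise scales, degrees of freedom, particle count) are illustrative assumptions, not those used in the dissertation.

```python
import math
import random

def t_logpdf(x, df, scale):
    """Log density of a scaled Student-t; heavy tails downweight outliers."""
    z = x / scale
    return (math.lgamma((df + 1) / 2) - math.lgamma(df / 2)
            - 0.5 * math.log(df * math.pi) - math.log(scale)
            - (df + 1) / 2 * math.log(1 + z * z / df))

def particle_filter_step(particles, obs, state_sd=0.1, obs_scale=0.2, df=3):
    """One sequential Monte Carlo step: propagate the random-walk state,
    weight each particle by the Student-t likelihood of the observation,
    then resample back to equal weights (multinomial resampling)."""
    moved = [p + random.gauss(0, state_sd) for p in particles]
    logw = [t_logpdf(obs - p, df, obs_scale) for p in moved]
    m = max(logw)                       # stabilize before exponentiating
    w = [math.exp(lw - m) for lw in logw]
    total = sum(w)
    probs = [wi / total for wi in w]
    return random.choices(moved, weights=probs, k=len(moved))

random.seed(0)
particles = [random.gauss(0, 1) for _ in range(2000)]
for obs in [0.1, 0.0, 5.0]:             # the last observation is an outlier
    particles = particle_filter_step(particles, obs)
est = sum(particles) / len(particles)
# With heavy-tailed observation noise, the filtered mean stays near the
# pre-outlier level: the effective learning rate falls as the absolute
# prediction error grows, matching the monotonically decreasing case.
```

In contrast, placing the heavy-tailed noise on the state innovation instead of the observation would make the filter interpret a large surprise as a fundamental change and jump toward it, giving the monotonically increasing learning rate of the other case.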

Item Type: Thesis (Dissertation (Ph.D.))
Subject Keywords: learning, experiments, eye-tracking, Bayesian vs. non-Bayesian learning, outlier risk
Degree Grantor: California Institute of Technology
Division: Humanities and Social Sciences
Major Option: Social Science
Thesis Availability: Public (worldwide access)
Research Advisor(s):
  • Bossaerts, Peter L.
Thesis Committee:
  • Bossaerts, Peter L. (chair)
  • Shum, Matthew S.
  • Camerer, Colin F.
  • Gillen, Benjamin J.
Defense Date: 13 September 2012
Funders:
  • Funding Agency: Nakajima Foundation (Grant Number: UNSPECIFIED)
Record Number: CaltechTHESIS:09242012-133200460
Persistent URL: http://resolver.caltech.edu/CaltechTHESIS:09242012-133200460
Default Usage Policy: No commercial reproduction, distribution, display or performance rights in this work are provided.
ID Code: 7210
Collection: CaltechTHESIS
Deposited By: Yutaka Kayaba
Deposited On: 14 Nov 2012 17:34
Last Modified: 26 Dec 2012 04:45

Thesis Files

PDF - Final Version (1087 Kb) - See Usage Policy.
