CaltechTHESIS
  A Caltech Library Service

Modeling and predicting object attention in natural scenes

Citation

Spain, Merrielle (2011) Modeling and predicting object attention in natural scenes. Dissertation (Ph.D.), California Institute of Technology. http://resolver.caltech.edu/CaltechTHESIS:05262011-172742472

Abstract

Humans automatically attend to certain objects in a scene. Better understanding this process could improve a computer's ability to parse scene images and convey information about them to humans. This thesis is arranged in three parts.

The first part explores how important a particular object is in a photograph of a complex scene. We propose a definition of importance and present two methods for measuring object importance from human observers. Using this ground truth, we fit a function for predicting the importance of each object directly from a segmented image; our function combines many object-related and image-related features. We validate our importance predictions on a large set of objects and find that the most important objects can be identified automatically. We find that object position and size are particularly informative, while a popular measure of saliency is not.

The second part explores the relationship between object naming, eye movements, and saliency maps. Eye movements correlate with shifts in attention and are thought to be a consequence of optimal resource allocation for high-level tasks such as visual recognition. Saliency maps are often built on the assumption that "early" features (e.g., color, contrast, orientation, and motion), rather than objects themselves, drive attention. We measure the eye position of humans viewing scenes and then ask them to recall the objects they saw in each scene. Weighted by recall frequency or maximum saliency, these objects predict fixations in individual images better than early saliency does, suggesting that early saliency may affect attention only indirectly, acting through detected objects.

The third part explores the problem of locating objects in a scene irrespective of category. We introduce the first benchmark for category-independent object detection. It comprises a large public dataset of annotated high-resolution scene images and suitable metrics for performance evaluation. We demonstrate our benchmark by comparing three methods for generalized object detection against a baseline and an upper bound.
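The importance-prediction approach in the first part can be sketched as a feature-based regression. The specific features (relative area, distance from image center) and the least-squares model below are illustrative assumptions for a minimal sketch, not the thesis's exact feature set or fitting procedure.

```python
import numpy as np

def object_features(obj, img_w, img_h):
    """Illustrative per-object features (assumed, not the thesis's exact set):
    relative area and normalized distance of the object center from the
    image center, plus a bias term."""
    x, y, w, h = obj  # bounding box (x, y, width, height)
    area = (w * h) / (img_w * img_h)
    cx, cy = x + w / 2, y + h / 2
    dist = np.hypot(cx - img_w / 2, cy - img_h / 2) / np.hypot(img_w / 2, img_h / 2)
    return np.array([1.0, area, dist])

def fit_importance(objects, importances, img_w, img_h):
    """Least-squares fit of human-measured importance scores to object features."""
    X = np.stack([object_features(o, img_w, img_h) for o in objects])
    y = np.asarray(importances, dtype=float)
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs

def predict_importance(coeffs, obj, img_w, img_h):
    """Predicted importance of a single object in an image of the given size."""
    return float(object_features(obj, img_w, img_h) @ coeffs)
```

Given ground-truth importance scores collected from observers, `fit_importance` learns feature weights, and `predict_importance` scores new objects; in the thesis a richer feature set and model play this role.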

Item Type: Thesis (Dissertation (Ph.D.))
Subject Keywords: Attention, Visual recognition, Object recognition, Object importance, Perception, Keywording, Rank aggregation, Amazon Mechanical Turk, Saliency, Eye movements, Object detection
Degree Grantor: California Institute of Technology
Division: Engineering and Applied Science
Major Option: Computation and Neural Systems
Thesis Availability: Public (worldwide access)
Research Advisor(s):
  • Perona, Pietro
Thesis Committee:
  • Koch, Christof (chair)
  • Perona, Pietro
  • Abu-Mostafa, Yaser S.
  • Shimojo, Shinsuke
  • Belongie, Serge J.
Defense Date: 13 May 2011
Author Email: spain (AT) vision.caltech.edu
Funders:
  • NSF Graduate Research Fellowship (grant number unspecified)
  • National Institute of Mental Health CNS Training Grant (grant number unspecified)
Record Number: CaltechTHESIS:05262011-172742472
Persistent URL: http://resolver.caltech.edu/CaltechTHESIS:05262011-172742472
Default Usage Policy: No commercial reproduction, distribution, display or performance rights in this work are provided.
ID Code: 6459
Collection: CaltechTHESIS
Deposited By: Merrielle Spain
Deposited On: 27 May 2011 22:12
Last Modified: 26 Dec 2012 04:36

Thesis Files

PDF - Final Version (10 MB). See Usage Policy.
