
Recognition of visual object classes


Burl, Michael C. (1997) Recognition of visual object classes. Dissertation (Ph.D.), California Institute of Technology. doi:10.7907/96P7-6E62.


Humans can look at a scene or a photograph and easily recognize objects. Outside my window I can see cars, people walking a dog on a brick pathway, trees, buildings, etc. This perception is so effortless that it belies the difficulty of the task. Visual perception begins with light that is reflected from the scene into the eye. The light impinges upon the retina and is transduced by a two-dimensional array of photoreceptors into noisy electrical signals. The brain must then accomplish the difficult task of transforming from this low-level representation to a higher-level understanding of the scene in terms of regions, surfaces, textures, and objects.

For computer vision the problem is the same, but the hardware is different. A camera approximates the function of the eye and retina; that is, the camera produces a two-dimensional array of numbers (pixel values) representing the intensity of light reflected from the scene. The fundamental question addressed in this thesis is the following: what mathematical processing should be applied to the pixel values in order for a computer to recognize objects? The methods we propose are not intended as a model of human brain function, although they may provide some insight. We are simply trying to solve the same visual recognition problems as the brain without concern for whether (or how) our algorithms could be realized in neuronal "hardware."

We have developed a new framework for recognizing visual object classes in which the class members consist of characteristic parts in a deformable spatial configuration. Human faces are an object class of this type, since faces consist of eyes, nose, and mouth arranged in a configuration that varies depending on expression and pose and also from one person to another. A second object class is cursive handwriting, which consists of loops, cusps, crossings, etc. arranged in a deformable pattern. In our approach, the allowed object deformations are represented through shape statistics, which are learned from examples. Instances of an object in an image are detected by finding the appropriate features in the correct spatial configuration. Our algorithm is robust with respect to partial occlusion, detector false alarms, and missed features.
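The approach described above — learning shape statistics from examples and then testing whether detected parts fall in the correct spatial configuration — can be illustrated with a minimal sketch. Here the shape statistics are modeled as a single Gaussian over translation-normalized part coordinates, and a candidate configuration is scored by its Mahalanobis distance from the learned mean shape. This is an assumed, simplified illustration, not the thesis's actual implementation: the function names are invented for this example, and the full method additionally handles partial occlusion, missing features, and detector false alarms.

```python
import numpy as np

def fit_shape_model(configs):
    """Learn a Gaussian shape model from example part configurations.

    configs: array of shape (n_examples, n_parts, 2) giving the (x, y)
    coordinates of each part in each training example. Each configuration
    is translation-normalized by subtracting its centroid before the
    mean and covariance are estimated.
    """
    X = configs - configs.mean(axis=1, keepdims=True)  # remove translation
    X = X.reshape(len(configs), -1)                    # flatten to shape vectors
    mu = X.mean(axis=0)
    # Small ridge term: centroid subtraction makes the covariance
    # rank-deficient, so regularize before inverting.
    cov = np.cov(X, rowvar=False) + 1e-6 * np.eye(X.shape[1])
    return mu, cov

def shape_score(candidate, mu, cov):
    """Mahalanobis distance of a candidate configuration from the model.

    Lower scores mean the candidate's part layout is closer to the learned
    mean shape; thresholding this score accepts or rejects the hypothesis
    that the detected parts form an instance of the object class.
    """
    x = (candidate - candidate.mean(axis=0)).ravel() - mu
    return float(x @ np.linalg.solve(cov, x))
```

For example, after fitting the model on many noisy copies of a three-part "object," a translated instance of the same shape scores much lower than a random arrangement of three points, regardless of where it appears in the image.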

Potential applications include intelligent tools for finding objects in image databases, human-machine interfaces, user authentication, intelligent data gathering and compression, signature verification, and keyword spotting. Experimental results will be presented for two problems: (1) locating quasi-frontal views of human faces in cluttered scenes, including scenes with occlusion, and (2) spotting keywords in on-line cursive handwriting data.

Item Type: Thesis (Dissertation (Ph.D.))
Subject Keywords: object recognition, shape statistics, deformable spatial configuration of parts, constellation model, volcanoes on Venus
Degree Grantor: California Institute of Technology
Division: Engineering and Applied Science
Major Option: Electrical Engineering
Thesis Availability: Public (worldwide access)
Research Advisor(s):
  • Perona, Pietro
Thesis Committee:
  • Perona, Pietro (chair)
  • Psaltis, Demetri
  • Franklin, Joel N.
  • Simon, Marvin K.
  • Smyth, Padhraic
  • Fayyad, Usama M.
  • Abu-Mostafa, Yaser S.
Defense Date: 11 November 1996
Non-Caltech Author Email: Michael.C.Burl (AT)
Funding Agency                               Grant Number
Center for Neuromorphic Systems Engineering  UNSPECIFIED
California Trade and Commerce Agency         UNSPECIFIED
Intel Corporation                            UNSPECIFIED
JPL DDF Grant                                61584
Record Number: CaltechETD:etd-01092008-094943
Persistent URL:
Default Usage Policy: No commercial reproduction, distribution, display or performance rights in this work are provided.
ID Code: 93
Deposited By: Imported from ETD-db
Deposited On: 25 Jan 2008
Last Modified: 21 Dec 2019 04:35

Thesis Files

PDF (Burl_mc_1997.pdf) - Final Version
See Usage Policy.

