Abstract
A theory is outlined for labeling discontinuities as depth, orientation, albedo, illumination, or specular discontinuities. Labeling results are presented from a simple linear classifier operating on the output of the Markov random field (MRF) associated with each vision module and coupled to the image data. The classifier was trained on a small set of mixed synthetic and real data. The authors suggest coupling an MRF to the output of each module (image cue), namely stereo, motion, color, and texture, to achieve two goals: (1) to counteract noise and fill in sparse data, and (2) to integrate the image data within each MRF in order to find the discontinuities of each module and align them with the intensity edges.
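The sketch below is a minimal, hypothetical illustration (not the authors' implementation) of the labeling step described above: a linear classifier that maps, at each candidate edge site, a feature vector of module outputs to one of the five discontinuity labels. The feature set (discontinuity strengths from the stereo, motion, color, and texture MRFs plus an intensity-edge strength), the least-squares training, and all data in it are assumptions for illustration only.

```python
import numpy as np

# Label set from the abstract; feature names are an assumption.
LABELS = ["depth", "orientation", "albedo", "illumination", "specular"]
FEATURES = ["stereo", "motion", "color", "texture", "intensity_edge"]

rng = np.random.default_rng(0)

# Hypothetical training data: rows are per-site feature vectors,
# y_train holds the corresponding label indices.
X_train = rng.random((200, len(FEATURES)))
y_train = rng.integers(0, len(LABELS), size=200)

# Least-squares fit of a linear map onto one-hot label targets
# (a simple stand-in for whatever training procedure the paper used).
X_aug = np.hstack([X_train, np.ones((len(X_train), 1))])  # append bias term
T = np.eye(len(LABELS))[y_train]                          # one-hot targets
W, *_ = np.linalg.lstsq(X_aug, T, rcond=None)             # (features+1) x labels

def label_site(features: np.ndarray) -> str:
    """Assign a discontinuity label to one edge site from its module features."""
    scores = np.append(features, 1.0) @ W
    return LABELS[int(np.argmax(scores))]

# Example: a site with strong stereo/motion discontinuities and a strong intensity edge.
print(label_site(np.array([0.9, 0.8, 0.1, 0.2, 0.95])))
```

In this sketch the classifier is purely linear, consistent with the "simple linear classifier" named in the abstract; the coupled MRFs that would produce the per-module discontinuity features are outside its scope.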
| Original language | English (US) |
| --- | --- |
| Pages (from-to) | 1576-1581 |
| Number of pages | 6 |
| Journal | IEEE Transactions on Systems, Man and Cybernetics |
| Volume | 19 |
| Issue number | 6 |
| DOIs | |
| State | Published - 1989 |
ASJC Scopus subject areas
- General Engineering