TY - GEN
T1 - Feature transform for ATR image decomposition
AU - Geiger, Davi
AU - Hummel, Robert
AU - Baldwin, Barney
AU - Liu, Tyng Luh
AU - Parida, Laxmi
PY - 1995
Y1 - 1995
N2 - We have developed an approach to image decomposition for ATR applications called the 'feature transform.' There are two aspects to the feature transform: (1) a collection of rich, sophisticated feature extraction routines, and (2) the orchestration of a hierarchical decomposition of the scene into an image description based on those features. We have extended the approach in two directions, one considering local features and the other considering global features. For local features, we have developed, for (1), corner, T-junction, edge, line, end-stopping, and blob detectors, using a unified approach for all of them. For (2), we make use of the theory of matching pursuits and extend it to robust measures, using results involving Lp norms, to build an iterative procedure in which local features are removed from the image successively, in a hierarchical manner. For global features, we have considered, for (1), global shape features or modal features, i.e., features representing the various modes of the models to be detected, and, for (2), a multiscale strategy for moving from the principal modes to secondary ones. The common aspect of both directions, local and global feature detection, is that the resulting transformation of the scene decomposes the image into a collection of features, in much the same way that a discrete Fourier transform decomposes an image into a sum of sinusoidal bar patterns. With the feature transform, however, the decomposition uses redundant basis functions that are related to spatially localized features or modal features that support the recognition process.
AB - We have developed an approach to image decomposition for ATR applications called the 'feature transform.' There are two aspects to the feature transform: (1) a collection of rich, sophisticated feature extraction routines, and (2) the orchestration of a hierarchical decomposition of the scene into an image description based on those features. We have extended the approach in two directions, one considering local features and the other considering global features. For local features, we have developed, for (1), corner, T-junction, edge, line, end-stopping, and blob detectors, using a unified approach for all of them. For (2), we make use of the theory of matching pursuits and extend it to robust measures, using results involving Lp norms, to build an iterative procedure in which local features are removed from the image successively, in a hierarchical manner. For global features, we have considered, for (1), global shape features or modal features, i.e., features representing the various modes of the models to be detected, and, for (2), a multiscale strategy for moving from the principal modes to secondary ones. The common aspect of both directions, local and global feature detection, is that the resulting transformation of the scene decomposes the image into a collection of features, in much the same way that a discrete Fourier transform decomposes an image into a sum of sinusoidal bar patterns. With the feature transform, however, the decomposition uses redundant basis functions that are related to spatially localized features or modal features that support the recognition process.
UR - http://www.scopus.com/inward/record.url?scp=0029544538&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=0029544538&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:0029544538
SN - 0819418374
SN - 9780819418371
T3 - Proceedings of SPIE - The International Society for Optical Engineering
SP - 512
EP - 523
BT - Proceedings of SPIE - The International Society for Optical Engineering
T2 - Signal Processing, Sensor Fusion, and Target Recognition IV
Y2 - 17 April 1995 through 19 April 1995
ER -