This chapter describes an empirical methodology for building and testing probability models for discretized (pixelated) images. Currently available digital cameras record images containing millions of pixels. Naively, one could imagine examining a large set of such images to determine how they are distributed. But a moment's thought reveals the hopelessness of the endeavor: the amount of data needed to estimate a probability distribution grows exponentially in D, the dimensionality of the space. This is known as the curse of dimensionality.

The standard remedy is to assume that the distribution obeys certain symmetries. One common assumption is translation-invariance: the probability structure does not depend on position within the image. Another common assumption is scale-invariance: resizing the image does not alter its probability structure. This may be loosely justified by noting that adjusting the focal length (zoom) of a camera lens approximates image resizing, apart from perspective distortions. As with translation-invariance, scale-invariance will clearly fail to hold at certain boundaries. Specifically, scale-invariance must fail for discretized images at fine scales approaching the size of a pixel, and similarly for finite-size images at coarse scales approaching the size of the entire image.
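To make the curse of dimensionality concrete, the following sketch (not from the chapter; all names and parameter choices are illustrative) counts the cells of a naive joint histogram over a block of pixels, and measures what fraction of those cells a large sample set actually manages to populate:

```python
import numpy as np

def histogram_bins(dim, bins_per_axis=4):
    """Number of cells in a naive joint histogram over `dim` pixels,
    each quantized to `bins_per_axis` gray levels."""
    return bins_per_axis ** dim

def occupied_fraction(dim, n_samples=100_000, bins_per_axis=4, seed=0):
    """Fraction of histogram cells that receive at least one sample
    when estimating a uniform density on [0,1)^dim from n_samples draws."""
    rng = np.random.default_rng(seed)
    samples = rng.random((n_samples, dim))
    # Map each sample to a flat bin index over the dim-dimensional grid.
    idx = np.floor(samples * bins_per_axis).astype(np.int64)
    flat = np.ravel_multi_index(idx.T, (bins_per_axis,) * dim)
    return np.unique(flat).size / bins_per_axis ** dim

# Even with only 4 gray levels per pixel, a joint histogram over a tiny
# 4x4 patch (dim = 16) has 4**16 (about 4.3 billion) cells -- far more
# than any realistic collection of training images can populate.
for dim in (2, 4, 8, 16):
    print(dim, histogram_bins(dim), occupied_fraction(dim))
```

The occupied fraction collapses toward zero as the dimension grows, which is exactly why direct density estimation over whole images is hopeless and symmetry assumptions are needed.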
Original language: English (US)
Title of host publication: The Essential Guide to Image Processing
Number of pages: 19
State: Published - 2009
ASJC Scopus subject areas: Computer Science (all)