We propose an approach for including contextual features in image labeling,
in which each pixel is assigned to one of a finite set of labels. The
features are incorporated into a probabilistic framework which combines
the outputs of several components. Components differ in the information
they encode. Some focus on the image-label mapping, while others focus
solely on patterns within the label field. Components also differ in their
scale: some focus on fine-resolution patterns, while others capture coarser,
more global structure. A supervised version of the contrastive divergence
algorithm is applied to learn these features from labeled image data. We
demonstrate performance on two real-world image databases and compare it
to a Markov random field model.
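To make the learning step concrete, the sketch below illustrates the general idea of a CD-1-style update for a conditional labeling model p(y | x) proportional to exp(w . f(x, y)): contrast the features of the observed labeling with those of a labeling obtained by one Gibbs sweep started at the data. This is a minimal, hypothetical simplification on a 1-D label field with two hand-picked features (local evidence and pairwise smoothness); it is not the paper's multiscale framework, and all names and parameters here are illustrative assumptions.

```python
# Illustrative CD-1-style update for a toy conditional labeling model.
# Assumptions: 1-D label field, two features (unary evidence, pairwise
# agreement), single Gibbs sweep per update. Not the paper's actual model.
import numpy as np

rng = np.random.default_rng(0)
N_LABELS = 3

def features(x, y):
    """Feature vector f(x, y): summed per-pixel evidence for the chosen
    labels plus a count of agreeing neighboring labels."""
    unary = sum(x[t, y[t]] for t in range(len(y)))
    pairwise = np.sum(y[:-1] == y[1:])
    return np.array([unary, pairwise], dtype=float)

def gibbs_sweep(x, y, w):
    """One Gibbs sweep over the label field, conditioned on the image x."""
    y = y.copy()
    for t in range(len(y)):
        logits = np.empty(N_LABELS)
        for k in range(N_LABELS):
            y[t] = k
            logits[k] = w @ features(x, y)
        p = np.exp(logits - logits.max())
        p /= p.sum()
        y[t] = rng.choice(N_LABELS, p=p)
    return y

def cd1_update(x, y_true, w, lr=0.01):
    """CD-1 gradient step: data features minus features of a one-sweep
    Gibbs sample initialized at the observed labeling."""
    y_sample = gibbs_sweep(x, y_true, w)
    return w + lr * (features(x, y_true) - features(x, y_sample))

# Toy usage: noisy local scores for a piecewise-constant label field.
y_true = np.repeat([0, 1, 2], 5)
x = np.eye(N_LABELS)[y_true] + 0.3 * rng.standard_normal((len(y_true), N_LABELS))
w = np.zeros(2)
for _ in range(50):
    w = cd1_update(x, y_true, w)
print("learned weights (evidence, smoothness):", w)
```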