In this paper, we present a novel Spatial Contextual Superpixel Model (SCSM) for vegetation classification in natural roadside images. The SCSM accomplishes this goal by transforming the classification task from the pixel domain into the superpixel domain, enabling more effective use of both local and global spatial contextual information between superpixels in an image. First, the image is segmented into a set of superpixels with strong homogeneous texture, from which Pixel Patch Selective (PPS) features are extracted to train class-specific binary classifiers that produce Contextual Superpixel Probability Maps (CSPMs) for all classes, coupled with spatial constraints. A set of superpixel candidates with the highest probabilities is then determined to represent the global characteristics of a test image. A superpixel merging strategy is further proposed to progressively merge superpixels with low probabilities into their most similar neighbors, performing a double-check on whether a superpixel and its neighbor accept each other and enforcing a global contextual constraint. We demonstrate the high performance of the proposed model on two challenging natural roadside image datasets from the Department of Transport and Main Roads, and on the Stanford background benchmark dataset.
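To make the merging step more concrete, the following is a minimal sketch of the mutual-acceptance "double-check" idea, assuming superpixels are represented by feature vectors, class probabilities, and a neighbor adjacency map. The similarity measure (Euclidean distance over feature vectors), the probability threshold, and all names such as merge_low_probability_superpixels are illustrative assumptions rather than the paper's exact PPS/CSPM-based formulation.

```python
import numpy as np

def merge_low_probability_superpixels(features, probs, neighbours, prob_thresh=0.5):
    """Progressively merge low-probability superpixels into their most similar
    neighbour, accepting a merge only when the two superpixels mutually pick
    each other (the "double-check"). Similarity here is plain Euclidean
    distance over the feature vectors, an assumption made for illustration."""
    n = len(features)
    labels = np.arange(n)  # each superpixel initially forms its own region

    def most_similar(i):
        # index of the neighbour whose feature vector is closest to superpixel i
        cand = list(neighbours[i])
        dists = [np.linalg.norm(features[i] - features[j]) for j in cand]
        return cand[int(np.argmin(dists))]

    # visit superpixels from least to most confident so weak regions merge first
    for i in np.argsort(probs):
        if probs[i] >= prob_thresh:
            break  # remaining superpixels are confident enough to stand alone
        j = most_similar(i)
        if most_similar(j) == i:   # mutual acceptance: both choose each other
            labels[i] = labels[j]  # absorb i into j's region
    return labels

# Toy usage: 6 superpixels with 8-D features and a simple adjacency map
rng = np.random.default_rng(0)
feats = rng.random((6, 8))
probs = np.array([0.9, 0.2, 0.8, 0.3, 0.95, 0.1])
neigh = {0: [1, 2], 1: [0, 3], 2: [0, 4], 3: [1, 5], 4: [2, 5], 5: [3, 4]}
print(merge_low_probability_superpixels(feats, probs, neigh))
```

The mutual check is what keeps a weak superpixel from being absorbed by a neighbor that does not, in turn, consider it the best match; the global contextual constraint described in the abstract is not modeled in this sketch.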
Funding
Category 1 - Australian Competitive Grants (this includes ARC, NHMRC)