Conference paper · Open Access
Ates, Hasan F.; Sunetci, Sercan
Semantic segmentation (i.e. image parsing) aims to annotate each image pixel with its corresponding semantic class label. Spatially consistent labeling of the image requires an accurate description and modeling of the local contextual information. Superpixel image parsing methods provide this consistency by carrying out labeling at the superpixel-level based on superpixel features and neighborhood information. In this paper, we develop generalized and flexible contextual models for superpixel neighborhoods in order to improve parsing accuracy. Instead of using a fixed segmentation and neighborhood definition, we explore various contextual models to combine complementary information available in alternative superpixel segmentations of the same image. Simulation results on two datasets demonstrate significant improvement in parsing accuracy over the baseline approach.
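The abstract describes labeling at the superpixel level and fusing complementary information from alternative superpixel segmentations of the same image. As a rough illustration of the fusion idea only (not the authors' actual contextual model), the hypothetical sketch below combines per-pixel class labels obtained under several segmentations by a weighted majority vote:

```python
import numpy as np

def fuse_segmentations(label_maps, weights=None):
    """Fuse per-pixel class labels predicted under alternative
    superpixel segmentations by a (weighted) majority vote.

    label_maps : list of HxW integer arrays, one per segmentation
                 (hypothetical inputs; the paper's models are richer).
    weights    : optional per-segmentation confidence weights.
    """
    stack = np.stack(label_maps)                  # shape (S, H, W)
    if weights is None:
        weights = np.ones(len(label_maps))
    n_classes = int(stack.max()) + 1
    # Accumulate weighted votes for every class at every pixel.
    votes = np.zeros((n_classes,) + stack.shape[1:])
    for lm, w in zip(stack, weights):
        for c in range(n_classes):
            votes[c] += w * (lm == c)
    # Return the class with the most votes at each pixel.
    return votes.argmax(axis=0)
```

This is a deliberately simple stand-in: the paper instead builds contextual models over superpixel neighborhoods, so the combination is learned rather than a fixed vote.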