Journal article | Open Access
Kilickaya, Mert; Akkus, Burak Kerim; Cakici, Ruket; Erdem, Aykut; Erdem, Erkut; Ikizler-Cinbis, Nazli
In the past few years, automatically generating descriptions for images has attracted considerable attention in computer vision and natural language processing research. Among the existing approaches, data-driven methods have proven highly effective. These methods compare the given image against a large set of training images to determine a set of relevant images, then generate a description using the associated captions. In this study, the authors propose to integrate an object-based semantic image representation into a deep feature-based retrieval framework to select the relevant images. Moreover, they present a novel phrase selection paradigm and a sentence generation model that depends on a joint analysis of salient regions in the input and retrieved images within a clustering framework. The authors demonstrate the effectiveness of their proposed approach on the Flickr8K and Flickr30K benchmark datasets and show that their model gives highly competitive results compared with state-of-the-art models.
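The sketch below illustrates the general data-driven captioning paradigm the abstract describes: a query image's deep features are matched against a training collection and the captions of the nearest neighbours are pooled for reuse. It is a minimal illustration under assumed, precomputed inputs (`features`, `captions`, `query_feat` are hypothetical names), not the authors' actual retrieval or sentence generation pipeline.

```python
# Illustrative sketch (not the authors' exact method): retrieval-based
# captioning, where captions of the training images most similar to the
# query image are collected for later phrase selection.
# `features` (n x d deep feature matrix), `captions` (list of n caption
# lists) and `query_feat` (d-dim query feature) are assumed precomputed.
import numpy as np

def retrieve_captions(query_feat, features, captions, k=5):
    """Return the captions of the k training images closest to the query."""
    # Cosine similarity between the query and every training image.
    q = query_feat / (np.linalg.norm(query_feat) + 1e-8)
    f = features / (np.linalg.norm(features, axis=1, keepdims=True) + 1e-8)
    sims = f @ q

    # Indices of the k most similar training images.
    top_k = np.argsort(-sims)[:k]

    # Pool the associated captions; a full system would then select
    # phrases from this pool and compose a new sentence.
    return [c for i in top_k for c in captions[i]]
```

In a complete system along these lines, the pooled captions would feed a phrase selection and sentence generation stage rather than being returned verbatim.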
File name | Size
---|---
bib-7c78a448-8fcd-44ce-95ce-cf8dbe80af73.txt (md5:6dea07cf2094b24c43c0bb356282e8c8) | 183 Bytes
Views | 39
Downloads | 8
Data volume | 1.5 kB
Unique views | 38
Unique downloads | 8