Conference paper Open Access
Cokelek, Mert; Imamoglu, Nevrez; Ozcinar, Cagri; Erdem, Erkut; Erdem, Aykut
Virtual and augmented reality (VR/AR) systems have gained dramatically in popularity across application areas such as gaming, social media, and communication. It is therefore crucial to know how to efficiently process, store, and deliver 360-degree videos to end-users. Toward this aim, researchers have been developing deep neural network models for 360-degree multimedia processing and computer vision. An important research direction in this line of work is building models that learn and predict observers' attention on 360-degree videos, producing so-called saliency maps computationally. Although a few saliency models have been proposed for this purpose, they generally consider only visual cues in video frames, neglecting audio cues from sound sources. In this study, an unsupervised frequency-based saliency model is presented for predicting the strength and location of saliency in spatial audio. The predicted salient audio cues are then applied as an audio bias to the video saliency predictions of state-of-the-art models. Our experiments yield promising results, showing that integrating the proposed spatial audio bias into existing video saliency models consistently improves their performance.
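The abstract does not specify the model's exact formulation, so the following is only a minimal sketch of the general idea: estimate a dominant sound direction from a first-order ambisonics (B-format) frame, turn it into an equirectangular spatial bias map, and fuse that bias multiplicatively with a video saliency map. The intensity-based direction estimate, the Gaussian bias parameterization, and all function names are illustrative assumptions, not the authors' method.

```python
import numpy as np

def audio_direction_energy(w, x, y, z):
    """Estimate a dominant sound direction from one first-order
    ambisonics (B-format) frame. Returns (azimuth, elevation) in
    radians and a strength in [0, 1]. Illustrative assumption: the
    paper's frequency-based analysis is not reproduced here."""
    # Acoustic-intensity approximation: correlate the omni channel
    # with each directional channel.
    ix, iy, iz = np.mean(w * x), np.mean(w * y), np.mean(w * z)
    azimuth = np.arctan2(iy, ix)
    elevation = np.arctan2(iz, np.hypot(ix, iy))
    strength = np.linalg.norm([ix, iy, iz]) / (np.mean(w ** 2) + 1e-8)
    return azimuth, elevation, min(float(strength), 1.0)

def spatial_audio_bias(shape, azimuth, elevation, strength, sigma=0.3):
    """Build an equirectangular bias map peaked at the estimated
    sound direction (hypothetical Gaussian parameterization)."""
    height, width = shape
    lon = np.linspace(-np.pi, np.pi, width)           # longitude per column
    lat = np.linspace(np.pi / 2, -np.pi / 2, height)  # latitude per row
    lon, lat = np.meshgrid(lon, lat)
    # Great-circle distance between each pixel and the source direction.
    d = np.arccos(np.clip(
        np.sin(lat) * np.sin(elevation) +
        np.cos(lat) * np.cos(elevation) * np.cos(lon - azimuth), -1, 1))
    return 1.0 + strength * np.exp(-(d ** 2) / (2 * sigma ** 2))

def fuse(video_saliency, bias):
    """Multiplicative fusion of a video saliency map with the audio
    bias, followed by renormalization to [0, 1]."""
    fused = video_saliency * bias
    return fused / (fused.max() + 1e-8)
```

In this sketch, the multiplicative fusion leaves the video prediction unchanged where the audio bias is flat (value 1.0) and amplifies saliency near the estimated sound source; the actual fusion rule used in the paper may differ.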
| File name | Size | |
|---|---|---|
| bib-4e5e9f77-d917-4d9d-a5c5-5d3804a6ee87.txt (md5:225b76de0f5cae5c29d69975cd83894f) | 268 Bytes | Download |
| Views | 50 |
| Downloads | 20 |
| Data volume | 5.4 kB |
| Unique views | 48 |
| Unique downloads | 19 |