Journal article, Open Access

Enhancement of Video Anomaly Detection Performance Using Transfer Learning and Fine-Tuning

Esma DİLEK; Murat DENER


MARC21 XML

<?xml version='1.0' encoding='UTF-8'?>
<record xmlns="http://www.loc.gov/MARC21/slim">
  <leader>00000nam##2200000uu#4500</leader>
  <datafield tag="245" ind1=" " ind2=" ">
    <subfield code="a">Enhancement of Video Anomaly Detection Performance Using Transfer Learning and Fine-Tuning</subfield>
  </datafield>
  <datafield tag="909" ind1="C" ind2="4">
    <subfield code="p">IEEE Access</subfield>
    <subfield code="v">12</subfield>
    <subfield code="c">73304-73322</subfield>
  </datafield>
  <controlfield tag="001">274105</controlfield>
  <datafield tag="520" ind1=" " ind2=" ">
    <subfield code="a">&lt;p&gt;The use of surveillance cameras is a common solution for the need to manage the urban traffic that arises with the effect of the increment in the population in the cities and to provide security. As the number of surveillance cameras rises, video streams that create big data are recorded. Analysis of video streams collected from those traffic surveillance cameras, automatic detection of unusual, suspicious events, as well as a range of harmful activities, become crucial because it is impossible to observe, analyze, and comprehend the contents of these movies using human labor. Recent studies have shown that deep learning (DL)-based artificial intelligence (AI) techniques, particularly machine learning (ML) techniques, are used in video anomaly detection (VAD) studies. In this study, an efficient frame-level VAD method is proposed based on transfer learning (TL) and fine-tuning (FT) approach and anomalies were detected using 20 popular convolutional neural networks (CNN) based DL models where variants of VGG, Xception, MobileNet, Inception, EfficientNet, ResNet, DenseNet, NASNet and ConvNeXt base models were trained using TL and FT approach. The proposed approach was tested using CUHK Avenue, UCSD Ped1 and UCSD Ped2 datasets and the performance of the models were measured using Area Under Curve (AUC), accuracy, precision, recall, and F1-score metrics. The highest AUC scores measured were 100%, 100% and 98.41% for the UCSD Ped1, UCSD Ped2 and CUHK Avenue datasets respectively. Compared to existing techniques in the literature, experimental results show that the suggested method offers state-of-the-art (SOTA) VAD performance.&lt;/p&gt;</subfield>
  </datafield>
  <datafield tag="650" ind1="1" ind2="7">
    <subfield code="2">opendefinition.org</subfield>
    <subfield code="a">cc-by</subfield>
  </datafield>
  <datafield tag="700" ind1=" " ind2=" ">
    <subfield code="0">(orcid)0000-0001-5746-6141</subfield>
    <subfield code="u">Gazi Üniversitesi</subfield>
    <subfield code="a">Murat DENER</subfield>
  </datafield>
  <datafield tag="980" ind1=" " ind2=" ">
    <subfield code="b">article</subfield>
    <subfield code="a">publication</subfield>
  </datafield>
  <datafield tag="542" ind1=" " ind2=" ">
    <subfield code="l">open</subfield>
  </datafield>
  <datafield tag="100" ind1=" " ind2=" ">
    <subfield code="0">(orcid)0000-0002-7994-0294</subfield>
    <subfield code="u">Gazi Üniversitesi</subfield>
    <subfield code="a">Esma DİLEK</subfield>
  </datafield>
  <datafield tag="653" ind1=" " ind2=" ">
    <subfield code="a">CUHK Avenue</subfield>
  </datafield>
  <datafield tag="653" ind1=" " ind2=" ">
    <subfield code="a">deep learning</subfield>
  </datafield>
  <datafield tag="653" ind1=" " ind2=" ">
    <subfield code="a">fine-tuning</subfield>
  </datafield>
  <datafield tag="653" ind1=" " ind2=" ">
    <subfield code="a">transfer learning</subfield>
  </datafield>
  <datafield tag="653" ind1=" " ind2=" ">
    <subfield code="a">UCSD Ped1</subfield>
  </datafield>
  <datafield tag="653" ind1=" " ind2=" ">
    <subfield code="a">UCSD Ped2</subfield>
  </datafield>
  <datafield tag="653" ind1=" " ind2=" ">
    <subfield code="a">video anomaly detection</subfield>
  </datafield>
  <datafield tag="260" ind1=" " ind2=" ">
    <subfield code="c">2024-05-23</subfield>
  </datafield>
  <controlfield tag="005">20241208122037.0</controlfield>
  <datafield tag="909" ind1="C" ind2="O">
    <subfield code="o">oai:aperta.ulakbim.gov.tr:274105</subfield>
  </datafield>
  <datafield tag="856" ind1="4" ind2=" ">
    <subfield code="z">md5:e0dfd6e27b98064cd9cdfc772f763b4b</subfield>
    <subfield code="s">6109605</subfield>
    <subfield code="u">https://aperta.ulakbim.gov.trrecord/274105/files/IEEE Access-Manuscript-Published.pdf</subfield>
  </datafield>
  <datafield tag="540" ind1=" " ind2=" ">
    <subfield code="u">http://www.opendefinition.org/licenses/cc-by-sa</subfield>
    <subfield code="a">Creative Commons Attribution Share-Alike</subfield>
  </datafield>
  <datafield tag="024" ind1=" " ind2=" ">
    <subfield code="a">10.1109/ACCESS.2024.3404553</subfield>
    <subfield code="2">doi</subfield>
  </datafield>
  <datafield tag="041" ind1=" " ind2=" ">
    <subfield code="a">eng</subfield>
  </datafield>
</record>
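
The abstract above describes a two-phase transfer-learning (TL) and fine-tuning (FT) recipe applied to pretrained CNN backbones for frame-level anomaly classification. The sketch below illustrates that general pattern in Keras with one of the listed base models (MobileNetV2); the input size, dropout rate, learning rates, layer-freezing split, and the train_frames/val_frames dataset names are illustrative assumptions rather than values taken from the paper.

Python sketch (illustrative)

import tensorflow as tf
from tensorflow.keras import layers, models

IMG_SIZE = (224, 224)   # assumed frame resolution after resizing

def build_model():
    """Pretrained CNN backbone plus a new frame-level anomaly classification head."""
    # Transfer learning: reuse ImageNet weights and freeze the convolutional base.
    base = tf.keras.applications.MobileNetV2(
        include_top=False, weights="imagenet", input_shape=IMG_SIZE + (3,))
    base.trainable = False

    inputs = layers.Input(shape=IMG_SIZE + (3,))
    x = tf.keras.applications.mobilenet_v2.preprocess_input(inputs)
    x = base(x, training=False)
    x = layers.GlobalAveragePooling2D()(x)
    x = layers.Dropout(0.3)(x)
    # Single sigmoid unit: probability that the frame is anomalous.
    outputs = layers.Dense(1, activation="sigmoid")(x)
    return models.Model(inputs, outputs), base

model, base = build_model()

# Phase 1 -- transfer learning: train only the new head on labeled frames.
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="binary_crossentropy",
              metrics=["accuracy", tf.keras.metrics.AUC(name="auc")])
# model.fit(train_frames, validation_data=val_frames, epochs=10)

# Phase 2 -- fine-tuning: unfreeze the upper part of the backbone and continue
# training with a much smaller learning rate so the pretrained features are
# only gently adapted to the surveillance-video domain.
base.trainable = True
for layer in base.layers[:-30]:   # illustrative split; earlier layers stay frozen
    layer.trainable = False
model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
              loss="binary_crossentropy",
              metrics=["accuracy", tf.keras.metrics.AUC(name="auc")])
# model.fit(train_frames, validation_data=val_frames, epochs=5)

The key design choice is the two compile/train cycles: the classification head is trained first with the backbone frozen, and only then is the upper part of the backbone unfrozen at a much lower learning rate, which is what distinguishes fine-tuning from plain transfer learning. Frame-level AUC and the other metrics reported in the abstract are then computed from the per-frame anomaly probabilities.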
Views 32
Downloads 10
Data volume 61.1 MB
Unique views 27
Unique downloads 9
