Journal article (Open Access)
Akay, Simge; Arica, Nafiz
Stacking Multiple Cues For Facial Action Unit Detection

Affiliation: Bahcesehir Univ, Istanbul, Turkey (both authors)
Publisher: Aperta
Published: 2021-01-01
DOI: 10.1007/s00371-021-02291-3
Record: https://aperta.ulakbim.gov.tr/record/237506
License: Creative Commons Attribution (Open Access)

Abstract: In this study, we develop a deep learning-based stacking scheme to detect facial action units (AUs) in video data. Given a sequence of video frames, it combines multiple cues extracted from AU detectors operating at the frame, segment, and transition levels. The frame-based detector takes a single frame and determines the existence of an AU from static face features. The segment-based detector examines subsequences of various lengths in the neighborhood of a frame to decide whether that frame belongs to an AU segment. The transition-based detector attempts to find transitions from neutral faces containing no AUs to emotional faces, or vice versa, by analyzing fixed-size subsequences. The frame subsequences in the segment and transition detectors are represented by motion history images, which model the temporal changes in faces. Each detector employs a separate convolutional neural network, and their results are then fed into a meta-classifier that learns how to combine them. Combining multiple cues at different levels within a framework built entirely of deep networks improves detection performance by both locating subtle AUs and tracking small movements of the facial muscles. The performance analysis shows that the proposed approach significantly outperforms state-of-the-art methods on the CK+, DISFA, and BP4D databases.
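The abstract outlines two mechanical pieces that a short sketch can make concrete: the motion history image (MHI) used to encode frame subsequences, and the stacking step in which a meta-classifier fuses the per-frame scores of the three base detectors. The sketch below is a minimal illustration under assumptions, not the paper's implementation: `motion_history_image` follows the classic Bobick-Davis recipe with made-up threshold values, the detector scores are synthetic stand-ins for the three CNN outputs, and logistic regression is a simple placeholder for whatever learned meta-classifier the paper uses.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression


def motion_history_image(frames, delta=0.05):
    """Motion history image: pixels that moved recently appear brighter.

    frames: (T, H, W) grayscale frames scaled to [0, 1], with T >= 2.
    delta:  threshold on the absolute inter-frame difference
            (0.05 is an assumed value, not the paper's).
    """
    n_steps = len(frames) - 1
    mhi = np.zeros(frames[0].shape, dtype=np.float32)
    for t in range(1, len(frames)):
        moving = np.abs(frames[t] - frames[t - 1]) >= delta
        # Moving pixels jump to full intensity; still pixels decay linearly.
        mhi = np.where(moving, 1.0, np.maximum(mhi - 1.0 / n_steps, 0.0))
    return mhi


rng = np.random.default_rng(0)

# Example MHI of a short random clip (real input would be aligned face crops).
clip = rng.random((9, 64, 64)).astype(np.float32)
mhi = motion_history_image(clip)

# --- Stacking: fuse per-frame scores from the three base detectors. ---
# Synthetic stand-in scores; in the paper each column would come from a CNN
# (frame-level on raw frames, segment- and transition-level on MHIs).
n_frames = 1000
au_labels = rng.integers(0, 2, n_frames)           # AU present / absent
base_scores = np.column_stack([
    au_labels + rng.normal(0.0, noise, n_frames)   # noisy detector outputs
    for noise in (0.8, 0.9, 1.0)
])

# The meta-classifier learns how to weight the three cues.
meta = LogisticRegression().fit(base_scores, au_labels)
fused = meta.predict_proba(base_scores)[:, 1]      # stacked AU probability
```

In a full pipeline, a standard stacking precaution would be to train the meta-classifier on detector scores from held-out data, so the fusion step does not overfit to the base detectors' training-set behavior.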
Views | 30 |
Downloads | 8 |
Data volume | 816 Bytes |
Unique views | 26 |
Unique downloads | 8 |