TY - GEN
T1 - DeepRhythm: Exposing DeepFakes with Attentional Visual Heartbeat Rhythms
T2 - 28th ACM International Conference on Multimedia, MM 2020
AU - Qi, Hua
AU - Guo, Qing
AU - Juefei-Xu, Felix
AU - Xie, Xiaofei
AU - Ma, Lei
AU - Feng, Wei
AU - Liu, Yang
AU - Zhao, Jianjun
N1 - Funding Information:
This research was supported by JSPS KAKENHI Grant Nos. 20H04168, 19K24348, and 19H04086, and by JST-Mirai Program Grant No. JPMJMI18BB, Japan. It was also supported by the Singapore National Cybersecurity R&D Program No. NRF2018NCR-NCR005-0001, the National Satellite of Excellence in Trustworthy Software System No. NRF2018NCR-NSOE003-0001, NRF Investigatorship No. NRFI06-2020-0022, the National Natural Science Foundation of China under Contract Nos. 61871258 and U1703261, and the National Key Research and Development Project under Contract No. 2016YFB0800403. We gratefully acknowledge the support of the NVIDIA AI Tech Center (NVAITC) for our research.
Publisher Copyright:
© 2020 ACM.
PY - 2020/10/12
Y1 - 2020/10/12
N2 - As GAN-based face image and video generation techniques, widely known as DeepFakes, have grown increasingly mature and realistic, there is a pressing demand for effective DeepFake detectors. Motivated by the fact that remote visual photoplethysmography (PPG) is made possible by monitoring the minuscule periodic changes of skin color caused by blood pumping through the face, we conjecture that the normal heartbeat rhythms found in real face videos will be disrupted or even entirely broken in a DeepFake video, making them a potentially powerful indicator for DeepFake detection. In this work, we propose DeepRhythm, a DeepFake detection technique that exposes DeepFakes by monitoring heartbeat rhythms. DeepRhythm utilizes dual-spatial-temporal attention to adapt to dynamically changing faces and fake types. Extensive experiments on the FaceForensics++ and DFDC-preview datasets confirm our conjecture and demonstrate not only the effectiveness but also the generalization capability of DeepRhythm across datasets produced by various DeepFake generation techniques and under multifarious challenging degradations.
UR - http://www.scopus.com/inward/record.url?scp=85106960135&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85106960135&partnerID=8YFLogxK
U2 - 10.1145/3394171.3413707
DO - 10.1145/3394171.3413707
M3 - Conference contribution
AN - SCOPUS:85106960135
T3 - MM 2020 - Proceedings of the 28th ACM International Conference on Multimedia
SP - 1318
EP - 1327
BT - MM 2020 - Proceedings of the 28th ACM International Conference on Multimedia
PB - Association for Computing Machinery, Inc
Y2 - 12 October 2020 through 16 October 2020
ER -