TY - GEN
T1 - Security Evaluation of Deep Neural Network Resistance against Laser Fault Injection
AU - Hou, Xiaolu
AU - Breier, Jakub
AU - Jap, Dirmanto
AU - Ma, Lei
AU - Bhasin, Shivam
AU - Liu, Yang
N1 - Funding Information:
National Research Foundation (NRF) Singapore, Prime Minister's Office under its National Cybersecurity R&D Program (Award No. NRF2014NCR-NCR001-30 and No. NRF2018NCR-NCR005-0001), National Research Foundation (NRF) Singapore, National Satellite of Excellence in Trustworthy Software Systems under its Cybersecurity R&D Program (Award No. NRF2018NCR-NSOE003-0001), and National Research Foundation Investigatorship Singapore (Award No. NRF-NRFI06-2020-0001). The authors acknowledge the support from the 'National Integrated Centre of Evaluation' (NICE), a facility of the Cyber Security Agency, Singapore (CSA).
Publisher Copyright:
© 2020 IEEE.
PY - 2020/7/20
Y1 - 2020/7/20
N2 - Deep learning is becoming a basis of decision-making systems in many application domains, such as autonomous vehicles and health systems, where the risk of misclassification can lead to serious consequences. It is necessary to know to what extent Deep Neural Networks (DNNs) are robust against various types of adversarial conditions. In this paper, we experimentally evaluate DNNs implemented in an embedded device by using laser fault injection, a physical attack technique that is mostly used in the security and reliability communities to test the robustness of various systems. We show practical results on four activation functions: ReLU, softmax, sigmoid, and tanh. Our results point out the misclassification possibilities for DNNs achieved by injecting faults into the hidden layers of the network. We evaluate DNNs by using several different attack strategies to show which are the most efficient in terms of misclassification success rates. Outcomes of this work should be taken into account when deploying devices running DNNs in environments where a malicious attacker could tamper with the environmental parameters that would bring the device into unstable conditions, resulting in faults.
AB - Deep learning is becoming a basis of decision-making systems in many application domains, such as autonomous vehicles and health systems, where the risk of misclassification can lead to serious consequences. It is necessary to know to what extent Deep Neural Networks (DNNs) are robust against various types of adversarial conditions. In this paper, we experimentally evaluate DNNs implemented in an embedded device by using laser fault injection, a physical attack technique that is mostly used in the security and reliability communities to test the robustness of various systems. We show practical results on four activation functions: ReLU, softmax, sigmoid, and tanh. Our results point out the misclassification possibilities for DNNs achieved by injecting faults into the hidden layers of the network. We evaluate DNNs by using several different attack strategies to show which are the most efficient in terms of misclassification success rates. Outcomes of this work should be taken into account when deploying devices running DNNs in environments where a malicious attacker could tamper with the environmental parameters that would bring the device into unstable conditions, resulting in faults.
UR - http://www.scopus.com/inward/record.url?scp=85098195835&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85098195835&partnerID=8YFLogxK
U2 - 10.1109/IPFA49335.2020.9261013
DO - 10.1109/IPFA49335.2020.9261013
M3 - Conference contribution
AN - SCOPUS:85098195835
T3 - Proceedings of the International Symposium on the Physical and Failure Analysis of Integrated Circuits, IPFA
BT - 2020 IEEE International Symposium on the Physical and Failure Analysis of Integrated Circuits, IPFA 2020
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2020 IEEE International Symposium on the Physical and Failure Analysis of Integrated Circuits, IPFA 2020
Y2 - 20 July 2020 through 23 July 2020
ER -