Project

Principal Investigator, “Evaluating and Enhancing the Resilience of Deep Learning Models” (03-06-2019 to 02-06-2020), Nanyang Technological University. Amount: 100,000 SGD. Account No. M4082474.020

Introduction

Over the last decade, deep learning has become an integral part of artificial intelligence. It has enabled great leaps of progress in computer vision, robotics, finance, medicine, and many other domains. In cybersecurity, deep learning has been widely used to detect malware, intrusions, data leakage, theft, fraud, vulnerabilities, and more. Given their ability to handle complex problems and massive amounts of data, deep neural networks have often exceeded human capabilities. However, these models are usually trained and deployed without considering the presence of a malicious actor, who can abuse them at any point of their life cycle. For example, an attacker may evade detection, force a system to take damaging actions, or even extract confidential information contained in the training data. As a result, researchers have been exploring adversarial machine learning to understand and mitigate this threat. Although multiple defence techniques exist, they either apply only to computer vision problems or can themselves be evaded by adaptive attacks that counter the countermeasures. Therefore, to better secure deep neural networks, we must develop the techniques and tools needed to stay ahead in this arms race. Only then can we safely rely on deep neural networks to perform sensitive tasks and behave as expected, and fully exploit deep learning to further improve our quality of life.

Objective

The proposed research focuses on developing new methods to measure a deep neural network’s resilience to attacks mounted with or without knowledge of the model structure and training set (i.e., in both white-box and black-box settings). Along the way, we will provide insights into how to evaluate and improve the resilience of existing models. Specifically, we will evaluate resilience by assessing the quality of the training/test data, testing the trained deep neural networks, and uncovering issues introduced during development. Based on this evaluation, we will improve the robustness of the deep learning system by retraining the neural network offline and by detecting adversarial examples online; a minimal code sketch of this evaluate-then-harden loop is given below.
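To make this evaluate-then-harden loop concrete, the sketch below (in PyTorch) shows one way resilience could be measured against a simple white-box attack and how the model could then be hardened by offline retraining. The classifier `model`, the `optimizer`, the data `loader`, the FGSM attack, and the assumption that inputs lie in [0, 1] are all illustrative choices for this sketch; they are not the project’s actual tooling, which is described in the publications listed below.

    # Minimal sketch: FGSM-based resilience measurement and offline adversarial
    # retraining. Assumes a PyTorch classifier `model`, an `optimizer`, and a
    # data `loader` yielding (inputs, labels); inputs are assumed to lie in [0, 1].
    import torch
    import torch.nn.functional as F

    def fgsm_examples(model, x, y, eps=0.03):
        """Craft adversarial examples with the Fast Gradient Sign Method."""
        x = x.clone().detach().requires_grad_(True)
        F.cross_entropy(model(x), y).backward()
        # Perturb each input in the direction that increases the loss.
        return (x + eps * x.grad.sign()).clamp(0.0, 1.0).detach()

    def resilience_score(model, loader, eps=0.03):
        """Fraction of adversarially perturbed inputs still classified correctly."""
        model.eval()
        correct, total = 0, 0
        for x, y in loader:
            x_adv = fgsm_examples(model, x, y, eps)
            with torch.no_grad():
                correct += (model(x_adv).argmax(dim=1) == y).sum().item()
            total += y.size(0)
        return correct / total

    def adversarial_retrain(model, loader, optimizer, eps=0.03, epochs=1):
        """Offline hardening: mix adversarial examples into the training objective."""
        model.train()
        for _ in range(epochs):
            for x, y in loader:
                x_adv = fgsm_examples(model, x, y, eps)
                optimizer.zero_grad()  # discard gradients accumulated while crafting x_adv
                loss = F.cross_entropy(model(x), y) + F.cross_entropy(model(x_adv), y)
                loss.backward()
                optimizer.step()
        return model

FGSM is only a weak white-box baseline; in practice the same loop would be driven by stronger attacks and by coverage-guided test generation, while online detection of adversarial examples would sit in front of the deployed model rather than inside the training loop.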

Tool Demos

Related Publications

  1. Xiaofei Xie, Lei Ma, Felix Juefei-Xu, Minhui Xue, Hongxu Chen, Yang Liu, Jianjun Zhao, Bo Li, Jianxiong Yin, and Simon See. “DeepHunter: A Coverage-Guided Fuzz Testing Framework for Deep Neural Networks.” In Proceedings of the 28th ACM SIGSOFT International Symposium on Software Testing and Analysis, pp. 146-157, ISSTA 2019.
  2. Xiaofei Xie, Lei Ma, Haijun Wang, Yuekang Li, Yang Liu, and Xiaohong Li. “DiffChaser: Detecting Disagreements for Deep Neural Networks.” In Proceedings of the 28th International Joint Conference on Artificial Intelligence, pp. 5772-5778, IJCAI 2019.
  3. Xiaofei Xie, Hongxu Chen, Yi Li, Lei Ma, Yang Liu, and Jianjun Zhao. “Coverage-Guided Fuzzing for Deep Learning Systems.” In Proceedings of the 34th IEEE/ACM International Conference on Automated Software Engineering, to appear, ASE Tool Demo 2019.
  4. Xiaoning Du, Xiaofei Xie, Yi Li, Lei Ma, Yang Liu, and Jianjun Zhao. “DeepStellar: Model-Based Quantitative Analysis of Stateful Deep Learning Systems.” In Proceedings of the 2019 27th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering, pp. 477-487, ESEC/FSE 2019.
  5. Qianyu Guo, Sen Chen, Xiaofei Xie, Lei Ma, Qiang Hu, Hongtao Liu, Yang Liu, Jianjun Zhao, and Xiaohong Li. “An Empirical Study towards Characterizing Deep Learning Development and Deployment across Different Frameworks and Platforms.” In Proceedings of the 34th IEEE/ACM International Conference on Automated Software Engineering, to appear, ASE 2019.
  6. Xiaoning Du, Xiaofei Xie, Yi Li, Lei Ma, Yang Liu, and Jianjun Zhao. “A Quantitative Analysis Framework for Recurrent Neural Network.” In Proceedings of the 34th IEEE/ACM International Conference on Automated Software Engineering, to appear, ASE Tool Demo 2019.
  7. Qiang Hu, Lei Ma, Xiaofei Xie, Yu Bing, Yang Liu, and Jianjun Zhao. “DeepMutation++: A Mutation Testing Framework for Deep Learning Systems.” In Proceedings of the 34th IEEE/ACM International Conference on Automated Software Engineering, to appear, ASE Tool Demo 2019.
  8. Bai Xue, Yang Liu, Lei Ma, Xiyue Zhang, Meng Sun, and Xiaofei Xie. “Safe Inputs Generation for Black-box Systems.” In Proceedings of the 24th International Conference on Engineering of Complex Computer Systems, to appear, ICECCS 2019.