Adversarial Machine Learning

Overview

The wide adoption of machine learning and deep learning in many critical applications creates strong incentives for motivated adversaries to manipulate the results and models generated by these automated methods. For instance, attackers can deliberately influence the training data to manipulate the predictions of a model (poisoning attacks), induce misclassifications at testing time (evasion attacks), or infer private information about the training data (privacy attacks). More research is needed to understand in depth the vulnerabilities of machine learning to a wide range of adversaries in critical environments. In this project, we focus on machine learning techniques that must operate in the presence of a diverse class of adversaries with sophisticated capabilities. The overarching goal of the project is to understand the attack-defense space when machine learning is used in critical applications, to identify the vulnerabilities and limitations of different machine learning approaches, and to propose solutions that address them.
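
As a concrete, deliberately simplified illustration of the poisoning setting described above, the sketch below flips the labels of a fraction of the training set and compares a model trained on clean data with one trained on the poisoned data. This is only an assumed example using scikit-learn on synthetic data; it is not the attack studied in the publications below, which consider far more sophisticated strategies (for example, subpopulation and backdoor poisoning), and the poisoning fraction and choice of logistic regression are arbitrary.

    # Minimal label-flipping poisoning sketch (illustration only; not the
    # attacks from the papers listed below). Assumes numpy and scikit-learn.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.RandomState(0)

    # Synthetic binary classification task standing in for a real application.
    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.5, random_state=0)

    # Baseline model trained on clean data.
    clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

    # Poisoning: the adversary controls a fraction of the training labels
    # and flips them to degrade the learned model.
    poison_fraction = 0.2  # arbitrary attacker budget for this sketch
    n_poison = int(poison_fraction * len(y_train))
    poison_idx = rng.choice(len(y_train), size=n_poison, replace=False)
    y_poisoned = y_train.copy()
    y_poisoned[poison_idx] = 1 - y_poisoned[poison_idx]

    poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)

    print("clean test accuracy:   ", clean_model.score(X_test, y_test))
    print("poisoned test accuracy:", poisoned_model.score(X_test, y_test))

Running this typically shows the poisoned model's test accuracy falling below the clean baseline; how far it falls depends on the learner and on the fraction of the training data the adversary controls.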

Publications

    Subpopulation Data Poisoning Attacks. Matthew Jagielski, Giorgio Severi, Niklas Pousette-Harger, and Alina Oprea. 2020. arXiv version
    Exploring Backdoor Poisoning Attacks Against Malware Classifiers. Giorgio Severi, Jim Meyer, Scott Coull, and Alina Oprea. 2020. arXiv version
    FENCE: Feasible Evasion Attacks on Neural Networks in Constrained Environments. Alesia Chernikova and Alina Oprea. 2019. arXiv version
    Why Do Adversarial Attacks Transfer? Explaining Transferability of Evasion and Poisoning Attacks. Ambra Demontis, Marco Melis, Maura Pintor, Matthew Jagielski, Battista Biggio, Alina Oprea, Cristina Nita-Rotaru, and Fabio Roli. In Proceedings of the USENIX Security Symposium, 2019. arXiv version
    Manipulating Machine Learning: Poisoning Attacks and Countermeasures for Regression Learning. Matthew Jagielski, Alina Oprea, Battista Biggio, Chang Liu, Cristina Nita-Rotaru, and Bo Li. In Proceedings of the IEEE Symposium on Security and Privacy, 2018. arXiv version
    Robust High-Dimensional Linear Regression. Chang Liu, Bo Li, Yevgeniy Vorobeychik, and Alina Oprea. In Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security (AISec), 2017. Best paper award. arXiv version
    On the Practicality of Integrity Attacks on Document-Level Sentiment Analysis. Andrew Newell, Rahul Potharaju, Luojie Xiang, and Cristina Nita-Rotaru. In Proceedings of the 7th ACM Workshop on Artificial Intelligence and Security (AISec), held with the 21st ACM CCS, November 2014. [PDF]
    Securing Application-Level Topology Estimation Networks: Facing the Frog-Boiling Attack. Sheila Becker, Jeffrey Seibert, Cristina Nita-Rotaru, and Radu State. In Proceedings of the International Symposium on Recent Advances in Intrusion Detection (RAID), 2011. [PDF] [BIBTEX]
    Applying Game Theory to Analyze Attacks and Defenses in Virtual Coordinate Systems. Sheila Becker, Jeffrey Seibert, David Zage, Cristina Nita-Rotaru, and Radu State. In Proceedings of the International Conference on Dependable Systems and Networks (DSN), 2011. [PDF] [BIBTEX]

Presentations

  • Applications of Automated Text Interpretation to Fault-tolerance and Security. Cristina Nita-Rotaru, Boston University, January 2016.

Students

    Current Members

    • Matthew Jagielski, PhD student
    • Alesia Chernikova, PhD student
    • Giorgio Severi, PhD student
    • Yuxuan Wang, undergraduate student

    Previous Members

    • Wei Kong, undergraduate student
    • Sheila Becker, Ph.D. Dec. 2011, University of Luxembourg
    • Andrew Newell, Ph.D. Aug. 2014
    • Rahul Potharaju, Ph.D. May 2014
    • Jeffrey Seibert, Ph.D. May 2012
    • David Zage, Ph.D. May 2010
    • Luojie Xiang, M.S. May 2014

Funding

  • Army Research Lab, Collaborative Research Alliance
  • FireEye
  • Verisign (partial funding)

Collaborators: Radu State, University of Luxembourg.