Adversarial Machine Learning



Machine learning has numerous applications to security and privacy. For example, intrusion detection systems use signatures to detect known attacks, email systems use Bayesian filters to detect spam, and several protocols use spatio-temporal outlier detection, decision trees, or support vector machines (SVMs) to filter malicious data or detect attacks. Machine learning techniques have been very useful in defending against adversaries who are not knowledgeable about the intricacies of the defense techniques themselves, but they are far less effective against adaptive, sophisticated adversaries who exploit the specifics of a machine learning defense to bypass it or to degrade its usefulness through a high rate of false alarms. In this project we focus on machine learning techniques that must operate in the presence of a diverse class of adversaries with sophisticated capabilities. The overarching goal of the project is to understand the attack-defense space when machine learning is used for security and privacy applications, to identify the vulnerabilities and limitations of different machine learning approaches, and to propose solutions that address them.
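
A minimal sketch of the kind of threat studied here is training data poisoning against regression: an attacker who controls even a small fraction of the training set can pull a learned model far from its clean fit. The example below is purely illustrative (the data, the placement of the poison points, and the injection strategy are assumptions for the demo, not the optimization-based attacks developed in the papers listed on this page):

```python
import numpy as np

rng = np.random.default_rng(0)

# Clean training data drawn from y = 2x + small noise.
X_clean = rng.uniform(0, 1, size=(50, 1))
y_clean = 2.0 * X_clean[:, 0] + rng.normal(0, 0.05, size=50)

def fit_slope(X, y):
    """Ordinary least-squares fit (with intercept); returns the slope."""
    A = np.hstack([X, np.ones((X.shape[0], 1))])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef[0]

clean_slope = fit_slope(X_clean, y_clean)

# The attacker injects 5 points (~9% of the data) at the edge of the
# feature range, with labels far below the true regression line.
X_poison = np.full((5, 1), 1.0)
y_poison = np.full(5, -3.0)
X_all = np.vstack([X_clean, X_poison])
y_all = np.concatenate([y_clean, y_poison])

poisoned_slope = fit_slope(X_all, y_all)

print(f"slope on clean data:   {clean_slope:.2f}")
print(f"slope after poisoning: {poisoned_slope:.2f}")
```

Because ordinary least squares minimizes squared error, a handful of high-leverage outliers dominate the loss and drag the slope well away from the true value of 2; robust estimators of the kind explored in the regression papers below aim to bound exactly this effect.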



    Manipulating Machine Learning: Poisoning Attacks and Countermeasures for Regression Learning. Matthew Jagielski, Alina Oprea, Battista Biggio, Chang Liu, Cristina Nita-Rotaru, and Bo Li. In Proceedings of the IEEE Symposium on Security and Privacy, 2018. [PDF]
    Robust Linear Regression Against Training Data Poisoning. Chang Liu, Bo Li, Yevgeniy Vorobeychik, and Alina Oprea. In Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security (AISec), 2017. Best paper award.
    On the Practicality of Integrity Attacks on Document-Level Sentiment Analysis. A. Newell, R. Potharaju, L. Xiang, and C. Nita-Rotaru. In the 7th ACM Workshop on Artificial Intelligence and Security (AISec) with the 21st ACM CCS, Nov. 2014. [PDF]
    Securing Application-Level Topology Estimation Networks: Facing the Frog-Boiling Attack. S. Becker, J. Seibert, C. Nita-Rotaru, and R. State. In International Symposium on Recent Advances in Intrusion Detection (RAID) 2011. [PDF] [BIBTEX]
    Applying Game Theory to Analyze Attacks and Defenses in Virtual Coordinate Systems. S. Becker, J. Seibert, D. Zage, C. Nita-Rotaru, and R. State. In International Conference on Dependable Systems and Networks (DSN) 2011. [PDF] [BIBTEX]

    Technical reports

    Robust High-Dimensional Linear Regression. Chang Liu, Bo Li, Yevgeniy Vorobeychik, and Alina Oprea. arXiv:1608.02257


    Talks

  • Applications of Automated Text Interpretation to Fault-tolerance and Security. C. Nita-Rotaru, Boston University, January 2016.


      Current Members

      • Matthew Jagielski
      • Alesia Chernikova

      Previous Members

      • Wei Kong, Undergraduate student
      • Sheila Becker, Ph.D. Dec. 2011, University of Luxembourg
      • Andrew Newell, Ph.D. Aug. 2014
      • Rahul Potharaju, Ph.D. May 2014
      • Jeffrey Seibert, Ph.D. May 2012
      • David Zage, Ph.D. May 2010
      • Luojie Xiang, M.S. May 2014


    This project was partially funded by Verisign. Collaborators: Radu State, University of Luxembourg.