Jugal Parikh has worked in security and machine learning for seven years. He enjoys solving complex security problems such as targeted attack detection, static and behavioral file/script-based detection, and detecting adversarial attacks using machine learning. He is currently a senior data scientist on Microsoft's Windows Defender research team.
Hardening Machine Learning Defenses Against Adversarial Attacks
9:10 - 10:00 a.m. CDT
In today's threat landscape, it's not unusual for attackers to circumvent traditional machine learning based detections by constantly scanning their malware samples against security products and modifying them until they are no longer detected. More recently, we've seen a rise in attackers attempting to compromise these machine learning models directly by poisoning incoming telemetry, trying to fool the classifier into believing that a given set of malware samples is actually benign. Simply deploying a set of malware classifiers to protect users is not enough: we need to constantly monitor the performance of deployed models and have sensors in place to alert on anomalous incoming traffic.
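To make the monitoring idea concrete, here is a minimal sketch of one possible telemetry sensor: flag days whose incoming detection rate deviates sharply from a rolling baseline, which could indicate a poisoning attempt. The function name, window size, threshold, and data are all hypothetical, not the production system described in the talk.

```python
# Hypothetical telemetry sensor: alert when a day's detection rate is far
# outside the rolling baseline of the preceding days. All parameters and
# data are illustrative, not the speaker's actual system.
import statistics

def anomalous_days(daily_rates, window=7, z_threshold=3.0):
    """Return indices of days whose rate is more than z_threshold
    standard deviations from the mean of the preceding `window` days."""
    alerts = []
    for i in range(window, len(daily_rates)):
        baseline = daily_rates[i - window:i]
        mean = statistics.mean(baseline)
        stdev = statistics.stdev(baseline)
        if stdev > 0 and abs(daily_rates[i] - mean) / stdev > z_threshold:
            alerts.append(i)
    return alerts

# Stable traffic with one suspicious spike on day 10 (synthetic data).
rates = [0.010, 0.011, 0.009, 0.010, 0.012, 0.010, 0.011,
         0.010, 0.009, 0.011, 0.050, 0.010, 0.011]
print(anomalous_days(rates))  # day 10 should be flagged
```

A real sensor would of course segment by region, file family, and model version rather than using a single global rate, but the z-score-against-baseline pattern is the same.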
In this talk, we discuss several strategies to make machine learning models more robust to such attacks. We'll present research showing how singular models are susceptible to tampering, and how techniques like stacked ensemble models can make them more resilient. We also cover the importance of diversity among base ML models and the technical details of optimizing them for different threat scenarios. Lastly, we'll describe suspected tampering activity we've witnessed in protection telemetry from over half a billion computers, and whether our mitigations worked.
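For readers unfamiliar with the technique, a stacked ensemble trains diverse base classifiers and then a meta-learner that combines their predictions; tampering that fools one family of learners is less likely to fool all of them at once. The sketch below uses scikit-learn's `StackingClassifier` on synthetic data as a stand-in for real file/script feature vectors; it illustrates the concept only, not the talk's production architecture.

```python
# Illustrative stacked ensemble (not the production system): diverse base
# classifiers combined by a logistic-regression meta-learner.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

# Synthetic stand-in for malware/benign feature vectors (hypothetical data).
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Deliberately diverse base models: tree ensemble, generative, and linear.
base_models = [
    ("forest", RandomForestClassifier(n_estimators=50, random_state=0)),
    ("nb", GaussianNB()),
    ("linear", LogisticRegression(max_iter=1000)),
]
stack = StackingClassifier(estimators=base_models,
                           final_estimator=LogisticRegression())
stack.fit(X_train, y_train)
print(round(stack.score(X_test, y_test), 2))
```

The meta-learner sees each base model's out-of-fold predictions during fitting, so it learns how much to trust each one rather than simply averaging them.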
Overall, the presentation describes guidelines for creating reliable, scalable, and robust production-level machine learning models and systems in an actively adversarial, noisy, temporally biased (concept shift) security domain. It focuses on identifying vectors of attack across the data collection, model training, and deployment processes for malware classification, and then proposes mitigations across the ML attack surface.
- Pros and cons of deploying client vs. cloud-based ML models for malware detection
- Real-world case studies of past adversarial attacks that we’ve observed
- Journey of taking stacked ensembles from a concept to a mature system running in production and protecting over half a billion customers from first-seen malware attacks
- Different techniques for black box model interpretability and ways to avoid unnecessary biases
- How stacked ensembles compare against simulated adversarial ML attack techniques and some real-world case studies on their benefits
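One of the black-box interpretability techniques in the family the bullets allude to is permutation importance: shuffle a single feature and measure how much the fitted model's score degrades. Features the model leans on heavily stand out, and unexpected reliances can expose unwanted bias. This sketch uses scikit-learn's `permutation_importance` on synthetic data and is purely illustrative of the method, not the speaker's tooling.

```python
# Permutation importance as a black-box interpretability probe.
# Model choice, feature counts, and data are all illustrative.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=8,
                           n_informative=3, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_tr, y_tr)

# Shuffle each feature 10 times on held-out data and record the mean
# drop in accuracy; larger drops mean the model depends on that feature.
result = permutation_importance(model, X_te, y_te,
                                n_repeats=10, random_state=0)
for i in result.importances_mean.argsort()[::-1][:3]:
    print(f"feature {i}: {result.importances_mean[i]:.3f}")
```

Because it only needs predictions on held-out data, the same probe works for any black-box model, including an ensemble treated as a single unit.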