Electrical Engineering and Computer Sciences

COLLEGE OF ENGINEERING

UC Berkeley

2009 Research Summary

Security of Adaptive Systems (SecML)

Blaine Alan Nelson, Benjamin I. P. Rubinstein, Anthony D. Joseph, Doug Tygar, Jack Chi, Satish Rao, Nina Taft1, Ling Huang1, Shing-hon Lau and Anthony Tran

Department of Homeland Security 022412, Air Force Office of Scientific Research FA9550-07-01-0501, MICRO, Hewlett-Packard and Siemens

Machine learning is becoming prevalent in the systems domain as a detection and analysis tool for problems amenable to adaptive techniques. However, the adaptivity and flexibility that are machine learning's biggest assets are also qualities that an attacker can exploit. Thus, it is important to study the security of learning systems [1-3].

We are investigating the vulnerabilities of real-world learning systems. In the context of SpamBayes, an open-source spam filter built on a naive Bayes classifier, we are exploring how various avenues of attack exploit the learning algorithm's adaptability to compromise the system [4].
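To make the mechanism concrete, here is a minimal Python sketch of a dictionary-style poisoning attack on a token-based filter. The scoring rule is a simple Laplace-smoothed frequency ratio, not SpamBayes's actual combining rule, and the tokens, message counts, and 0.5 threshold are all illustrative assumptions.

    from collections import Counter

    class ToyFilter:
        """Tiny token-presence filter; scores are smoothed frequency ratios."""
        def __init__(self):
            self.spam, self.ham = Counter(), Counter()
            self.n_spam = self.n_ham = 0

        def train(self, tokens, is_spam):
            if is_spam:
                self.spam.update(set(tokens))
                self.n_spam += 1
            else:
                self.ham.update(set(tokens))
                self.n_ham += 1

        def score(self, tokens):
            # Average per-token spam probability with Laplace smoothing.
            def p(t):
                ps = (self.spam[t] + 1) / (self.n_spam + 2)
                ph = (self.ham[t] + 1) / (self.n_ham + 2)
                return ps / (ps + ph)
            return sum(p(t) for t in tokens) / len(tokens)

    f = ToyFilter()
    f.train(["meeting", "budget", "report"], is_spam=False)
    f.train(["winner", "prize", "claim"], is_spam=True)
    print(f.score(["meeting", "budget"]))   # ~0.33: looks like ham

    # Dictionary attack: spam laced with ordinary business vocabulary
    # teaches the filter that those tokens are spam-indicative.
    for _ in range(20):
        f.train(["meeting", "budget", "report", "winner"], is_spam=True)
    print(f.score(["meeting", "budget"]))   # ~0.58: now past a 0.5 threshold

Because the filter retrains on messages labeled as spam, the attacker wins even when the chaff-laden spam is correctly flagged: the filter learns to distrust the benign vocabulary it contains.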

Lakhina et al. use principal components analysis (PCA) to detect anomalous point-to-point flows from link volume data. We are investigating the ability of an adversary who poisons PCA's training data to evade detection, under various realistic models of adversarial control [5].
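The sketch below illustrates the underlying mechanism on synthetic data, assuming a simple residual-based detector: traffic is anomalous when its component outside the top principal subspace is large, and high-variance chaff injected along the attacker's chosen link rotates that subspace so the eventual attack's residual shrinks. The dimensions, scales, and link index are arbitrary illustration choices, not the models analyzed in [5].

    import numpy as np

    rng = np.random.default_rng(0)
    n_links, n_obs, k = 20, 500, 3

    def residual_norm(X, x, k):
        """Norm of x's component outside the top-k principal subspace of X."""
        mu = X.mean(axis=0)
        _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
        top = Vt[:k]                       # top-k principal directions
        x0 = x - mu
        return np.linalg.norm(x0 - top.T @ (top @ x0))

    # Normal traffic: a few latent flows mixed across links, plus noise.
    clean = rng.normal(size=(n_obs, k)) @ rng.normal(size=(k, n_links)) \
            + 0.1 * rng.normal(size=(n_obs, n_links))

    attack = np.zeros(n_links)
    attack[7] = 10.0                       # eventual DoS-style spike on one link

    print("residual, clean training:", residual_norm(clean, attack, k))

    # Poisoning: inject high-variance chaff along the attack direction so
    # PCA's principal subspace rotates to absorb that direction.
    chaff = np.zeros((100, n_links))
    chaff[:, 7] = rng.normal(scale=50.0, size=100)
    poisoned = np.vstack([clean, clean[:100] + chaff])
    print("residual, poisoned training:", residual_norm(poisoned, attack, k))

The attacker trades stealth for effect: larger chaff variance rotates the subspace faster, but the poisoning itself becomes easier to notice.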

Universal sequence prediction considers the loss of a learner in the presence of an adversary. By explicitly modeling prediction as a game between an adversary and a learner, this approach provides the worst-case guarantees desirable in a security setting. For secure learning it is also important to consider methods that are robust to specific threat models; here robust statistics is an appropriate framework, since it quantifies the effect of outliers on statistical estimators.
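As a concrete instance of the game-theoretic view, the exponential-weights (Hedge) learner guarantees regret of O(sqrt(T log n)) against any sequence of expert losses, even an adversarially chosen one. The sketch below uses a textbook learning rate and random losses as a stand-in for the adversary's choices.

    import numpy as np

    def hedge(losses, eta):
        """Run exponential weights over experts; losses lie in [0, 1]."""
        T, n = losses.shape
        w = np.ones(n)
        total = 0.0
        for loss in losses:
            p = w / w.sum()            # play the weighted mixture of experts
            total += p @ loss          # learner suffers the expected loss
            w *= np.exp(-eta * loss)   # exponentially downweight bad experts
        return total, losses.sum(axis=0).min()

    rng = np.random.default_rng(1)
    T, n = 2000, 10
    losses = rng.uniform(size=(T, n))  # stand-in for an adversarial sequence
    L_alg, L_best = hedge(losses, eta=np.sqrt(2 * np.log(n) / T))
    print("per-round regret:", (L_alg - L_best) / T)   # vanishes as T grows

On the robust-statistics side, the analogous elementary fact is that a single unbounded outlier can move the sample mean arbitrarily far, while the sample median tolerates corruption of up to half the data.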

[1] M. Barreno, P. L. Bartlett, F. J. Chi, A. D. Joseph, B. Nelson, B. I. P. Rubinstein, U. Saini, and J. D. Tygar, "Open Problems in the Security of Learning," Proceedings of the First ACM Workshop on AISec, 2008.
[2] M. Barreno, B. Nelson, R. Sears, A. D. Joseph, and J. D. Tygar, "Can Machine Learning Be Secure?" Proceedings of the ACM Symposium on Information, Computer and Communications Security, 2006.
[3] B. Nelson and A. D. Joseph, "Bounding an Attack's Complexity for a Simple Learning Model," Proceedings of the First Workshop on Tackling Computer Systems Problems with Machine Learning Techniques, 2006.
[4] B. Nelson, M. Barreno, F. J. Chi, A. D. Joseph, B. I. P. Rubinstein, U. Saini, C. Sutton, J. D. Tygar, and K. Xia, "Exploiting Machine Learning to Subvert Your Spam Filter," Proceedings of the First USENIX Workshop on Large-Scale Exploits and Emergent Threats (LEET), 2008.
[5] B. I. P. Rubinstein, B. Nelson, L. Huang, A. D. Joseph, S. Lau, N. Taft, and J. D. Tygar, "Evading Anomaly Detection through Variance Injection Attacks on PCA (Extended Abstract)," Proceedings of the International Symposium on Recent Advances in Intrusion Detection (RAID), 2008.

1Intel Research, Berkeley