Security Analysis of Online Centroid Anomaly Detection

Marius Kloft and Pavel Laskov

EECS Department, University of California, Berkeley

Technical Report No. UCB/EECS-2010-22

February 24, 2010

http://www2.eecs.berkeley.edu/Pubs/TechRpts/2010/EECS-2010-22.pdf

Security issues are crucial in a number of machine learning applications, especially in scenarios dealing with human activity rather than natural phenomena (e.g., information ranking, spam detection, malware detection). In such cases, learning algorithms must be expected to cope with manipulated data aimed at hampering decision making. Although some previous work has addressed the handling of malicious data in the context of supervised learning, very little is known about the behavior of anomaly detection methods in such scenarios. In this contribution we analyze the performance of a particular method, online centroid anomaly detection, in the presence of adversarial noise. Our analysis addresses the following security-related issues: formalization of the learning and attack processes, derivation of an optimal attack, and analysis of its efficiency and constraints. We derive bounds on the effectiveness of a poisoning attack against centroid anomaly detection under different conditions: a bounded and an unbounded percentage of attacker-controlled traffic, and a bounded false positive rate. Our bounds show that whereas a poisoning attack can be staged effectively in the unconstrained case, it can be made arbitrarily difficult (with a strict upper bound on the attacker's gain) if external constraints are properly enforced. Our experimental evaluation, carried out on real HTTP and exploit traces, confirms the tightness of our theoretical bounds and the practicality of our protection mechanisms.
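To make the setting concrete, the following is a minimal Python sketch of a centroid detector with a fixed-radius acceptance region and an oldest-out sliding-window update, together with a greedy poisoning loop. The update rule, window size, radius, and all names here are illustrative assumptions for exposition, not the report's actual algorithm or code.

import numpy as np

class OnlineCentroidDetector:
    """Sketch: hypersphere of fixed radius around the mean of a sliding
    window of training points; points outside the sphere are anomalous.
    (Assumed oldest-out update rule; names are illustrative.)"""

    def __init__(self, X_init, radius):
        self.X = np.asarray(X_init, dtype=float)  # current working set
        self.center = self.X.mean(axis=0)
        self.radius = radius

    def is_anomalous(self, x):
        # Flag a point if it falls outside the acceptance hypersphere.
        return np.linalg.norm(x - self.center) > self.radius

    def update(self, x):
        # Online step: accept only points deemed normal, then evict the
        # oldest point so the window size stays fixed.
        if not self.is_anomalous(x):
            self.X = np.vstack([self.X[1:], x])
            self.center = self.X.mean(axis=0)

# Greedy poisoning: the attacker submits points just inside the decision
# boundary, in the direction of the attack point `a`, so each accepted
# point drags the centroid a little further toward `a`.
rng = np.random.default_rng(0)
det = OnlineCentroidDetector(rng.normal(size=(100, 2)), radius=3.0)
a = np.array([10.0, 0.0])  # point the attacker wants accepted as normal

steps = 0
while det.is_anomalous(a) and steps < 2000:
    d = a - det.center
    x_adv = det.center + 0.99 * det.radius * d / np.linalg.norm(d)
    det.update(x_adv)  # lies just inside the boundary, so it is accepted
    steps += 1
print(f"attack point accepted after {steps} poisoned inputs")

In this unconstrained setting the attack succeeds after a modest number of injections; intuitively, capping the fraction of traffic the attacker controls or bounding the false positive rate limits how far each accepted point can move the centroid, which is what makes the attack arbitrarily difficult in the constrained cases the report analyzes.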


BibTeX citation:

@techreport{Kloft:EECS-2010-22,
    Author= {Kloft, Marius and Laskov, Pavel},
    Title= {Security Analysis of Online Centroid Anomaly Detection},
    Year= {2010},
    Month= {Feb},
    Url= {http://www2.eecs.berkeley.edu/Pubs/TechRpts/2010/EECS-2010-22.html},
    Number= {UCB/EECS-2010-22},
    Abstract= {Security issues are crucial in a number of machine learning applications, especially in scenarios dealing with human activity rather than natural phenomena (e.g., information ranking, spam detection, malware detection, etc.). It is to be expected in such cases that learning algorithms will have to deal with manipulated data aimed at hampering decision making. Although some previous work addressed the handling of malicious data in the context of supervised learning, very little is known about the behavior of anomaly detection methods in such scenarios. In this contribution we analyze the performance of a particular method -- online centroid anomaly detection -- in the presence of adversarial noise. Our analysis addresses the following security-related issues: formalization of learning and attack processes, derivation of an optimal attack, analysis of its efficiency and constraints. We derive bounds on the effectiveness of a poisoning attack against centroid anomaly under different conditions: bounded and unbounded percentage of traffic, and bounded false positive rate. Our bounds show that whereas a poisoning attack can be effectively staged in the unconstrained case, it can be made arbitrarily difficult (a strict upper bound on the attacker's gain) if external constraints are properly used. Our experimental evaluation carried out on real HTTP and exploit traces confirms the tightness of our theoretical bounds and practicality of our protection mechanisms.},
}

EndNote citation:

%0 Report
%A Kloft, Marius 
%A Laskov, Pavel 
%T Security Analysis of Online Centroid Anomaly Detection
%I EECS Department, University of California, Berkeley
%D 2010
%8 February 24
%@ UCB/EECS-2010-22
%U http://www2.eecs.berkeley.edu/Pubs/TechRpts/2010/EECS-2010-22.html
%F Kloft:EECS-2010-22