This page reproduces the content of http://www.slideshare.net/kisa12012/poisoningattacksvm.


- Poisoning Attacks against

Support Vector Machines

ICML reading group, 2012/07/28

Hidekazu Oiwa (@kisa12012)

oiwa (at) r.dl.itc.u-tokyo.ac.jp

Saturday, July 28, 2012

1 - The Paper

• Poisoning Attacks against Support Vector Machines

• Battista Biggio (Italy), Blaine Nelson, Pavel Laskov (Germany)

• http://icml.cc/2012/papers/880.pdf (paper)

• http://www.slideshare.net/pragroup/battista-biggio-icml2012-poisoning-attacks-against-support-vector-machines (slides)

• Written while the first author was visiting Dr. Laskov's group for about half a year

• Researchers in adversarial classification

2 - Outline

• Overview of the work

• What are poisoning attacks?

• Problem setting

• Proposed algorithm

• Poisoning attacks against SVMs

• Extension to kernel SVMs

• Experiments

• Synthetic data experiment

• Handwritten digit recognition experiment

3 - Research Overview

4 - Background

• (Large-scale) machine learning is booming

• Malicious behavior: actions taken by malicious agents

• They act so as to confuse classifiers and anomaly detectors

• Ex. spam filtering, malware analysis

• Goal: algorithms that are robust to malicious behavior

• To get there…

• we first need to analyze the properties of malicious behavior

5 - A Taxonomy of Malicious Behavior

[Barreno+ ML10]

• Causative attack

• Directly manipulates or rewrites the training data held by the designer

• Exploratory attack

• Directly manipulates or rewrites the classifier held by the designer

• Several algorithms that cope with these have already been proposed

• Poisoning attack

• Injects new malicious data into the designer's training data

• A more realistic attack than the others, since it does not require direct access to the designer's database

• Prior work exists only for anomaly detection [Kloft+ AISTATS10]+

6 - Poisoning Attack

[Diagram: training data → SVM. A malicious point is injected into the training data, and the learned SVM's performance degrades.]

7 - Problem Setting

Designer:

• training set Dtr = {x_i, y_i}_{i=1}^{n}

• validation set Dval = {x_k, y_k}_{k=1}^{m}

• learns the SVM from the training set, including the malicious data

Attacker:

• malicious data point (x_c, y_c), with the label y_c fixed in advance

• generates the x_c that most degrades the SVM's classification performance on the validation set
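To make the setting concrete, the training/validation sets Dtr and Dval can be instantiated with the paper's synthetic data: two Gaussian classes with means [−1.5, 0] and [1.5, 0] and covariance 0.6·I, with 25 and 500 points per class respectively. This is an illustrative sketch; the function name is my own.

```python
import numpy as np

def make_gaussian_data(n_per_class, rng):
    # Two Gaussian classes as in the paper's synthetic experiment:
    # mu- = [-1.5, 0], mu+ = [1.5, 0], covariance 0.6 * I
    # (so the per-coordinate standard deviation is sqrt(0.6)).
    std = np.sqrt(0.6)
    X_neg = rng.normal([-1.5, 0.0], std, size=(n_per_class, 2))
    X_pos = rng.normal([1.5, 0.0], std, size=(n_per_class, 2))
    X = np.vstack([X_neg, X_pos])
    y = np.concatenate([-np.ones(n_per_class), np.ones(n_per_class)])
    return X, y

# Training and validation sets: 25 and 500 points per class.
X_tr, y_tr = make_gaussian_data(25, np.random.default_rng(0))
X_val, y_val = make_gaussian_data(500, np.random.default_rng(1))
```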

8 - Overview of This Work

• Analysis of poisoning attacks on SVMs

• A proposed algorithm for crafting malicious data

• Incremental SVM

• Kernel extension

• Synthetic data experiment

• Handwritten digit recognition experiment

9 - Proposed Algorithm

10 - The Optimization Problem

max_{x_c} L(x_c) = Σ_k [1 − y_k f_{x_c}(x_k)]_+ = Σ_k g_k(x_c)

• Maximize the hinge loss on the validation set

• f_{x_c}(·): the SVM learned with the malicious data point included

• A non-convex optimization problem

• Solution: gradient ascent

x_c′ = x_c + t · u, with u ∝ ∇L(x_c)

• Alternate between updating the SVM and updating the malicious point

• Converges to a local optimum if the step size is chosen appropriately
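The alternating loop above (retrain the SVM, then take a gradient-ascent step on the attack point) can be sketched in Python. This is a minimal illustration under stated simplifications, not the paper's method: it retrains a small primal linear SVM from scratch at each step instead of using the incremental SVM, and approximates ∇L(x_c) by finite differences instead of the closed-form gradient; all function names are my own.

```python
import numpy as np

def train_linear_svm(X, y, C=1.0, epochs=300):
    # Primal linear SVM via batch subgradient descent on
    #   0.5*||w||^2 + C * sum_i max(0, 1 - y_i*(w @ x_i + b)).
    # A crude stand-in for the incremental SVM solver used in the paper.
    w, b = np.zeros(X.shape[1]), 0.0
    for k in range(epochs):
        step = 0.01 / np.sqrt(k + 1.0)
        viol = y * (X @ w + b) < 1.0                  # margin violators
        gw = w - C * (y[viol, None] * X[viol]).sum(axis=0)
        gb = -C * y[viol].sum()
        w, b = w - step * gw, b - step * gb
    return w, b

def attack_loss(X_tr, y_tr, x_c, y_c, X_val, y_val):
    # L(x_c): total hinge loss on the validation set of the SVM
    # trained on the training data with (x_c, y_c) injected.
    w, b = train_linear_svm(np.vstack([X_tr, x_c]), np.append(y_tr, y_c))
    return np.maximum(0.0, 1.0 - y_val * (X_val @ w + b)).sum()

def poison(X_tr, y_tr, X_val, y_val, x0, y_c, t=0.2, eps=1e-3,
           max_iter=20, h=1e-2):
    # Gradient ascent on L(x_c) with a fixed step size t; the gradient
    # is approximated by central finite differences.
    x_c = np.asarray(x0, dtype=float).copy()
    loss = attack_loss(X_tr, y_tr, x_c, y_c, X_val, y_val)
    for _ in range(max_iter):
        g = np.zeros_like(x_c)
        for j in range(x_c.size):
            e = np.zeros_like(x_c); e[j] = h
            g[j] = (attack_loss(X_tr, y_tr, x_c + e, y_c, X_val, y_val)
                    - attack_loss(X_tr, y_tr, x_c - e, y_c, X_val, y_val)) / (2 * h)
        if not np.any(g):                             # flat region: stop
            break
        x_c = x_c + t * g / np.linalg.norm(g)         # unit direction, fixed step
        new_loss = attack_loss(X_tr, y_tr, x_c, y_c, X_val, y_val)
        if new_loss - loss < eps:                     # L no longer increasing
            break
        loss = new_loss
    return x_c, loss
```

The attack point is initialized by cloning a point of the attacked class and flipping its label, as in the paper.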

11 - Algorithm Overview

Algorithm 1: Poisoning attack against SVM (from [Biggio+ 12])

Input: Dtr, the training data; Dval, the validation data; y_c, the class label of the attack point; x_c^(0), the initial attack point; t, the step size.
Output: x_c, the final attack point.

1: {α_i, b} ← learn an SVM on Dtr.
2: p ← 0.
3: repeat
4:   Re-compute the SVM solution on Dtr ∪ {x_c^(p), y_c} using incremental SVM (e.g., Cauwenberghs & Poggio, 2001). This step requires {α_i, b}.   [SVM update]
5:   Compute ∂L/∂u on Dval according to Eq. (10).   [compute the gradient]
6:   Set u to a unit vector aligned with ∂L/∂u.
7:   p ← p + 1 and x_c^(p) ← x_c^(p−1) + t·u.   [update the malicious point]
8: until L(x_c^(p)) − L(x_c^(p−1)) < ε
9: return: x_c = x_c^(p)

• The initial point is created by flipping the label of an existing data point of the attacked class

12 - SVM Update

• Incremental SVM [Cauwenberghs+ NIPS00]

• Learns the SVM while adding data points one at a time

• Iteratively solves the optimization problem within the range where every point's role stays fixed

• Roles: reserve point (g_i > 0) / support vector (g_i = 0) / error vector (g_i < 0)

• If it cannot converge without breaking these conditions, a point's role is changed

• In each optimization step, only the support vectors' parameters are updated

• If the parameters are optimized to convergence every time a point is added, this converges to the exact SVM solution

[Figure from [Cauwenberghs+ NIPS00]: soft-margin SVM training, with margin support vectors (g_i = 0, 0 < α_i < C), error vectors (g_i < 0, α_i = C), and ignored/reserve vectors (g_i > 0, α_i = 0).]
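The three point roles can be made concrete with a small check (my own illustrative code, not the paper's) that partitions training points by the sign of g_i = y_i·f(x_i) − 1 for a linear decision function f(x) = w·x + b:

```python
import numpy as np

def partition_points(X, y, w, b, tol=1e-6):
    # Partition training points by the sign of g_i = y_i*(w @ x_i + b) - 1:
    #   g_i = 0 -> margin support vector (set S)
    #   g_i < 0 -> error vector (set E, exceeds the margin)
    #   g_i > 0 -> reserve point (set R, strictly inside its side of the margin)
    g = y * (X @ w + b) - 1.0
    S = np.where(np.abs(g) <= tol)[0]
    E = np.where(g < -tol)[0]
    R = np.where(g > tol)[0]
    return S, E, R
```

Incremental SVM keeps these sets fixed while it adiabatically adjusts the coefficients, and migrates a point between sets only when its KT condition would otherwise break.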

13 - Gradient Computation for the Optimization Problem

• Uses the idea of incremental SVM

• Assume that no data point's role (the sets S, E, R) changes during the update

• Then only the support vectors need to be considered

• KKT conditions of the SVM solution on the training set (Eqs. (4)–(5) in the paper):

g_i = Σ_{j∈Dtr} Q_ij α_j + y_i b − 1  { > 0 if i ∈ R; = 0 if i ∈ S; < 0 if i ∈ E }

h = Σ_{j∈Dtr} y_j α_j = 0

• The resulting gradient (Eq. (10) in the paper) is

∂L/∂u = Σ_{k=1}^{m} ( M_k ∂Q_sc/∂u + ∂Q_kc/∂u ) α_c

where M_k collects the terms arising from the implicit dependence of (α, b) on x_c

• The update rule depends on the kernel function:

• Linear: ∂K_ic/∂u = ∂(x_i · x_c^(p))/∂u = t x_i

• Polynomial: ∂K_ic/∂u = ∂(x_i · x_c^(p) + R)^d/∂u = d (x_i · x_c^(p) + R)^{d−1} t x_i

• RBF: ∂K_ic/∂u = ∂ exp(−γ‖x_i − x_c‖²)/∂u = 2γ K(x_i, x_c^(p)) t (x_i − x_c^(p))

• An exact computation would require deriving the largest step size that does not break the role conditions

• This work instead updates with a constant step size, trading exactness for cheaper computation

(equations from [Biggio+ 12])

14 - Experiments
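The kernel-dependent derivatives ∂K_ic/∂u can be written as gradients with respect to the attack point x_c (the ∂/∂u versions are these multiplied by the step size t). A small sketch under that convention, with my own function names; the RBF form assumes K = exp(−γ‖x_i − x_c‖²):

```python
import numpy as np

# Gradients d K(x_i, x_c) / d x_c for the three kernels on slide 13.

def grad_linear(xi, xc):
    # K = xi . xc
    return xi

def grad_poly(xi, xc, R=1.0, d=3):
    # K = (xi . xc + R)^d
    return d * (xi @ xc + R) ** (d - 1) * xi

def grad_rbf(xi, xc, gamma=0.5):
    # K = exp(-gamma * ||xi - xc||^2)
    K = np.exp(-gamma * np.sum((xi - xc) ** 2))
    return 2.0 * gamma * K * (xi - xc)
```

Each analytic gradient can be verified against a central finite-difference approximation of the corresponding kernel value.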

15 - Synthetic Data Experiment

[Figure 1 from [Biggio+ 12]: mean hinge loss Σ_i ξ_i and classification error surfaces over x ∈ [−5, 5]², for the linear kernel (top row) and the RBF kernel (bottom row, γ = 0.5); C = 1 in both cases. The solid black line traces the gradual shift of the attack point x_c^(p) toward a local maximum.]

16 - Handwritten Digit Recognition: Setup

• Data: MNIST, two-class problems (7 vs. 1; 9 vs. 8; 4 vs. 0)

• SVM: linear kernel, C = 1

• Training set: 100 samples; validation set: 500 samples

17 - Handwritten Digit Recognition: Results (7 vs. 1)

[Figure 2 from [Biggio+ 12]: the initial (mislabeled) attack point before the attack and the final attack point after it, together with the validation and testing errors over the number of iterations. The attack point's label is 1.]

18 - Handwritten Digit Recognition: Results (9 vs. 8)

[Figure 2 from [Biggio+ 12]: before/after images of the attack point and the validation/testing errors over the number of iterations. The attack point's label is 8.]

19 - Handwritten Digit Recognition: Results (4 vs. 0)

[Figure 2 from [Biggio+ 12]: before/after images of the attack point and the validation/testing errors over the number of iterations. The attack point's label is 0.]

20 - Takeaways from the Results

• The malicious point morphs into data that takes on properties of its label class

• e.g., the bottom segment of the 7 straightens to resemble a 1

• A single data point raised the error rate to 15–20%

• from 2–5% at the initial point

• the error rate above is reached by iteratively refining the point

• this demonstrates the effectiveness of the algorithm

• though the impact would presumably be smaller with a larger training set…

21 - Multi-Point Attack Experiment

• Performance as malicious points are injected one at a time

[Figure 3 from [Biggio+ 12]: classification error due to poisoning as a function of the percentage of attack points in the training data, for the 7 vs. 1, 9 vs. 8, and 4 vs. 0 classifiers on MNIST; validation error (red solid line) and testing error (black dashed line), averaged over multiple runs.]

• If the initial point is placed too close to the decision boundary, the attack point can become a reserve point, at which point the updates stop

22 - Summary

• Poisoning the SVM!

• Inject new data that makes the SVM's accuracy plummet

• Proposes a method for crafting performance-degrading data

• The optimization problem is non-convex, but is solved with gradient ascent

• Also applicable to kernel SVMs

• Experiments on a handwritten digit recognition task

• A single data point sufficed to drop accuracy by roughly 20%

23 - Future Work

• More efficient, robust, and faster optimization methods

• Evaluating each kernel's resistance to poisoning attacks

• The case where multiple malicious points can be injected simultaneously

• The case where the attacker cannot fix the labels of the data

• e.g., when labels are assigned manually by the designer

• to steer the labeling, the input data must satisfy certain constraints

• in practice, artificially generating such inputs is hard

• if the input vector is a bag of words, the attack must produce results that can be converted back into plausible-looking text