This notebook demonstrates how to find adversarial examples for an example network pre-trained on the MNIST dataset.
We suggest installing the Gurobi solver, since it is significantly faster. If that is not possible, the HiGHS solver is an alternative.
The Images package is only necessary for visualizing the sample images.
using MIPVerify
# If you have an academic license, use `Gurobi` instead.
using HiGHS
using Images
mnist = MIPVerify.read_datasets("MNIST")
mnist:
  `train`: {LabelledImageDataset}
    `images`: 60000 images of size (28, 28, 1), with pixels in [0.0, 1.0].
    `labels`: 60000 corresponding labels, with 10 unique labels in [0, 9].
  `test`: {LabelledImageDataset}
    `images`: 10000 images of size (28, 28, 1), with pixels in [0.0, 1.0].
    `labels`: 10000 corresponding labels, with 10 unique labels in [0, 9].
mnist.train
{LabelledImageDataset}
  `images`: 60000 images of size (28, 28, 1), with pixels in [0.0, 1.0].
  `labels`: 60000 corresponding labels, with 10 unique labels in [0, 9].
size(mnist.train.images)
(60000, 28, 28, 1)
mnist.train.labels
60000-element Vector{UInt8}:
 0x05
 0x00
 0x04
 0x01
 0x09
 0x02
 0x01
 0x03
 0x01
 0x04
    ⋮
 0x02
 0x09
 0x05
 0x01
 0x08
 0x03
 0x05
 0x06
 0x08
We import a sample pre-trained neural network.
n1 = MIPVerify.get_example_network_params("MNIST.n1")
sequential net MNIST.n1
  (1) Flatten(): flattens 4 dimensional input, with dimensions permuted according to the order [4, 3, 2, 1]
  (2) Linear(784 -> 40)
  (3) ReLU()
  (4) Linear(40 -> 20)
  (5) ReLU()
  (6) Linear(20 -> 10)
MIPVerify.frac_correct allows us to verify that the network achieves a reasonable accuracy of 96.95% on the test set. (This step is crucial when working with your own neural net parameters: since training is done outside of Julia, a common mistake is to transfer the parameters incorrectly.)
MIPVerify.frac_correct(n1, mnist.test, 10000)
0.9695
We feed the first image into the neural net, obtaining the activations of the final softmax layer.
Note that the image must be specified as a 4-dimensional array with size (1, height, width, num_channels). We provide a helper function MIPVerify.get_image that extracts the image from the dataset while preserving all four dimensions.
sample_image = MIPVerify.get_image(mnist.test.images, 1)
1×28×28×1 Array{Float64, 4}:
[:, :, 1, 1] =
 0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  …  0.0  0.0  0.0  0.0  0.0  0.0  0.0

[:, :, 2, 1] =
 0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  …  0.0  0.0  0.0  0.0  0.0  0.0  0.0

[:, :, 3, 1] =
 0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  …  0.0  0.0  0.0  0.0  0.0  0.0  0.0

;;; …

[:, :, 26, 1] =
 0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  …  0.0  0.0  0.0  0.0  0.0  0.0  0.0

[:, :, 27, 1] =
 0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  …  0.0  0.0  0.0  0.0  0.0  0.0  0.0

[:, :, 28, 1] =
 0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  …  0.0  0.0  0.0  0.0  0.0  0.0  0.0
output_activations = sample_image |> n1
10-element Vector{Float64}:
 -0.02074390040759505
 -0.017499541361042703
  0.16707187742051954
 -0.05323712887827292
 -0.019291011852467455
 -0.07951546424946399
  0.06191130931372918
  4.833970937815984
  0.46706000134294867
  0.40145201599055125
The category that has the largest activation is category 8, corresponding to a label of 7.
(output_activations |> MIPVerify.get_max_index) - 1
7
This matches the true label.
MIPVerify.get_label(mnist.test.labels, 1)
7
We now try to find the closest adversarial example to the first image under the $L_{\infty}$ norm, setting the target category as index 10 (corresponding to a label of 9). Note that we restrict the search space to a distance of 0.05 around the original image via the specified perturbation family pp.
target_label_index = 10
d = MIPVerify.find_adversarial_example(
    n1,
    sample_image,
    target_label_index,
    HiGHS.Optimizer,
    Dict(),  # optimizer attributes; empty here, so the solver defaults are used
    norm_order = Inf,
    pp = MIPVerify.LInfNormBoundedPerturbationFamily(0.05),
)
[notice | MIPVerify]: Attempting to find adversarial example. Neural net predicted label is 8, target labels are [10]
[notice | MIPVerify]: Determining upper and lower bounds for the input to each non-linear unit.
Running HiGHS 1.4.2 [date: 1970-01-01, git hash: f797c1ab6]
Copyright (c) 2022 ERGO-Code under MIT licence terms
Presolving model
1770 rows, 1621 cols, 51874 nonzeros
1770 rows, 1621 cols, 51874 nonzeros

Solving MIP model with:
   1770 rows
   1621 cols (26 binary, 0 integer, 0 implied int., 1595 continuous)
   51874 nonzeros

        Nodes      |    B&B Tree     |            Objective Bounds              |  Dynamic Constraints |       Work
     Proc. InQueue |  Leaves   Expl. | BestBound       BestSol              Gap |   Cuts   InLp Confl. | LpIters     Time

         0       0         0   0.00%   0               inf                  inf        0      0      0         0     0.0s
         0       0         0   0.00%   0.000619835635  inf                  inf        0      0      8      2021     0.1s
         0       0         0   0.00%   0.00338385518   inf                  inf     5652     14     30      2097     5.2s
         0       0         0   0.00%   0.00343896617   inf                  inf     5989     12     38     40946    25.4s
 B      15       0         3  12.55%   0.00343896617   0.0460846816      92.54%     5994     12     47     55284    28.3s

Solving report
  Status            Optimal
  Primal bound      0.0460846815889
  Dual bound        0.0460846815889
  Gap               0% (tolerance: 0.01%)
  Solution status   feasible
                    0.0460846815889 (objective)
                    0 (bound viol.)
                    0 (int. viol.)
                    0 (row viol.)
  Timing            32.80 (total)
                    0.02 (presolve)
                    0.00 (postsolve)
  Nodes             27
  LP iterations     60751 (total)
                    14654 (strong br.)
                    751 (separation)
                    38835 (heuristics)
Dict{Any, Any} with 11 entries:
  :TargetIndexes      => [10]
  :SolveTime          => 32.8068
  :TotalTime          => 60.2054
  :Perturbation       => [_[1] _[2] … _[27] _[28];;; _[29] _[30] … _[55] _[56];…
  :PerturbedInput     => [_[785] _[786] … _[811] _[812];;; _[813] _[814] … _[83…
  :TighteningApproach => "mip"
  :PerturbationFamily => linf-norm-bounded-0.05
  :SolveStatus        => OPTIMAL
  :Model              => A JuMP Model…
  :Output             => JuMP.AffExpr[-0.012063867412507534 _[1601] + 0.0758192…
  :PredictedIndex     => 8
using JuMP
perturbed_sample_image = JuMP.value.(d[:PerturbedInput])
1×28×28×1 Array{Float64, 4}:
[:, :, 1, 1] =
 0.0460847  0.0  0.0  0.0460847  0.0  …  0.0  0.0  0.0  0.0  0.0  0.0460847

[:, :, 2, 1] =
 0.0  0.0  0.0  0.0460847  0.0460847  0.0  …  0.0  0.0  0.0  0.0  0.0460847

[:, :, 3, 1] =
 0.0  0.0  0.0  0.0  0.0  0.0  0.0460847  …  0.0  0.0  0.0460847  0.0  0.0

;;; …

[:, :, 26, 1] =
 0.0  0.0  0.0460847  0.0  0.0  0.0  0.0  …  0.0460847  0.0460847  0.0  0.0

[:, :, 27, 1] =
 0.0460847  0.0460847  0.0  0.0  0.0  0.0  …  0.0460847  0.0  0.0460847

[:, :, 28, 1] =
 0.0460847  0.0  0.0460847  0.0  0.0  0.0  …  0.0  0.0  0.0  0.0460847  0.0
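As an additional check (this cell is an extra illustration, not part of the original walkthrough), we can confirm that the largest absolute pixel change matches the objective value reported by the solver, roughly 0.0461:

```julia
# Maximum absolute per-pixel change: this is the L∞ distance that the solver
# minimized, so it should match the reported primal bound (≈ 0.0460847).
maximum(abs.(perturbed_sample_image - sample_image))
```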
As a sanity check, we feed the perturbed image into the neural net and inspect the activations of the final layer. We verify that the activation of the target label index, 10, is now maximal. (Since the perturbation is minimal, the perturbed input lies on the decision boundary, so this activation is tied with that of the originally predicted label.)
perturbed_sample_image |> n1
10-element Vector{Float64}:
  0.6749450628745558
  0.6179790360668576
  0.3930321598089386
  0.29656185967035986
  0.2410105349548307
  0.1606002120357421
  0.5428526100447275
  4.288351484573889
 -0.2264301823307634
  4.288351484573882
We visualize the perturbed image and compare it to the original image. Since we are minimizing the $L_{\infty}$ norm, changes are made to many pixels, but the change to each individual pixel is barely noticeable.
colorview(Gray, perturbed_sample_image[1, :, :, 1])
colorview(Gray, sample_image[1, :, :, 1])
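To make the small changes easier to see, we can also visualize the perturbation itself, rescaled to full contrast. (This cell is an extra illustration; colorview and Gray come from the Images package loaded earlier.)

```julia
# Absolute per-pixel difference between the perturbed and original images,
# rescaled so that the largest change maps to white.
diff = abs.(perturbed_sample_image - sample_image)
colorview(Gray, diff[1, :, :, 1] ./ maximum(diff))
```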
That concludes this quickstart! The next tutorial will introduce you to each of the layers, and show how you can import your own neural network parameters.