Quickstart

This notebook demonstrates how you can find adversarial examples for a pre-trained example network on the MNIST dataset.

We suggest having the Gurobi solver installed, since it is significantly faster than open-source alternatives. If this is not possible, the open-source Cbc solver is another option.

The Images package is only necessary for visualizing the sample images.
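If you do not have a Gurobi license, the setup with Cbc is analogous; a minimal sketch (assuming the Cbc package is installed) looks like this, with Cbc.Optimizer passed wherever this tutorial passes Gurobi.Optimizer:

# Hypothetical alternative setup using the open-source Cbc solver.
# Wherever this tutorial passes Gurobi.Optimizer, pass Cbc.Optimizer instead.
using MIPVerify
using Cbc
using Images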

In [1]:
using MIPVerify
using Gurobi
using Images
┌ Info: Precompiling MIPVerify [e5e5f8be-2a6a-5994-adbb-5afbd0e30425]
└ @ Base loading.jl:1260
┌ Info: Precompiling Images [916415d5-f1e6-5110-898d-aaa5f9f070e0]
└ @ Base loading.jl:1260

Setup

MNIST dataset

We begin by loading the MNIST dataset. The data is provided as a Julia struct for easy access. The training images and test images are provided as 4-dimensional arrays of size (num_samples, height, width, num_channels).

In [2]:
mnist = MIPVerify.read_datasets("MNIST")
Out[2]:
mnist:
  `train`: {LabelledImageDataset}
    `images`: 60000 images of size (28, 28, 1), with pixels in [0.0, 1.0].
    `labels`: 60000 corresponding labels, with 10 unique labels in [0, 9].
  `test`: {LabelledImageDataset}
    `images`: 10000 images of size (28, 28, 1), with pixels in [0.0, 1.0].
    `labels`: 10000 corresponding labels, with 10 unique labels in [0, 9].
In [3]:
mnist.train
Out[3]:
{LabelledImageDataset}
    `images`: 60000 images of size (28, 28, 1), with pixels in [0.0, 1.0].
    `labels`: 60000 corresponding labels, with 10 unique labels in [0, 9].
In [4]:
size(mnist.train.images)
Out[4]:
(60000, 28, 28, 1)
In [5]:
mnist.train.labels
Out[5]:
60000-element Array{UInt8,1}:
 0x05
 0x00
 0x04
 0x01
 0x09
 0x02
 0x01
 0x03
 0x01
 0x04
 0x03
 0x05
 0x03
    ⋮
 0x07
 0x08
 0x09
 0x02
 0x09
 0x05
 0x01
 0x08
 0x03
 0x05
 0x06
 0x08

Sample Neural Network

We import a sample pre-trained neural network.

In [6]:
n1 = MIPVerify.get_example_network_params("MNIST.n1")
Out[6]:
sequential net MNIST.n1
  (1) Flatten(): flattens 4 dimensional input, with dimensions permuted according to the order [4, 3, 2, 1]
  (2) Linear(784 -> 40)
  (3) ReLU()
  (4) Linear(40 -> 20)
  (5) ReLU()
  (6) Linear(20 -> 10)

MIPVerify.frac_correct allows us to verify that the network achieves a reasonable accuracy of 96.95% on the test set. (This step is crucial when working with your own neural net parameters: since training is typically done outside of Julia, a common mistake is transferring the parameters incorrectly.)

In [7]:
MIPVerify.frac_correct(n1, mnist.test, 10000)
Computing fraction correct...100%|██████████████████████| Time: 0:00:02
Out[7]:
0.9695
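The same check can be run on the training set, or on a smaller sample for a quicker (though noisier) estimate; for instance, a quick sketch not part of the original notebook:

# Evaluate accuracy on 1000 training images only: faster, but a noisier
# estimate than the full test-set evaluation above.
MIPVerify.frac_correct(n1, mnist.train, 1000)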

We feed the first image into the neural net, obtaining the activations of the final (output) layer.

Note that the image must be specified as a 4-dimensional array with size (1, height, width, num_channels). We provide a helper function MIPVerify.get_image that extracts the image from the dataset while preserving all four dimensions.

In [8]:
sample_image = MIPVerify.get_image(mnist.test.images, 1)
Out[8]:
1×28×28×1 Array{Float64,4}:
[:, :, 1, 1] =
 0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  …  0.0  0.0  0.0  0.0  0.0  0.0  0.0

[:, :, 2, 1] =
 0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  …  0.0  0.0  0.0  0.0  0.0  0.0  0.0

[:, :, 3, 1] =
 0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  …  0.0  0.0  0.0  0.0  0.0  0.0  0.0

...

[:, :, 26, 1] =
 0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  …  0.0  0.0  0.0  0.0  0.0  0.0  0.0

[:, :, 27, 1] =
 0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  …  0.0  0.0  0.0  0.0  0.0  0.0  0.0

[:, :, 28, 1] =
 0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  …  0.0  0.0  0.0  0.0  0.0  0.0  0.0
In [9]:
output_activations = sample_image |> n1
Out[9]:
10-element Array{Float64,1}:
 -0.02074390040759505
 -0.017499541361042703
  0.16707187742051954
 -0.05323712887827292
 -0.019291011852467455
 -0.07951546424946399
  0.06191130931372918
  4.833970937815984
  0.46706000134294867
  0.40145201599055125

The category with the largest activation is category 8. Since Julia arrays are one-indexed while MNIST labels run from 0 to 9, this corresponds to a label of 7.

In [10]:
(output_activations |> MIPVerify.get_max_index) - 1
Out[10]:
7

This matches the true label.

In [11]:
MIPVerify.get_label(mnist.test.labels, 1)
Out[11]:
7

Finding an Adversarial Example

We now try to find the closest adversarial example to the first image with respect to the $L_\infty$ norm, setting the target category as index 10 (corresponding to a label of 9). Note that we restrict the search space to perturbations within an $L_\infty$ distance of 0.05 of the original image via the specified perturbation family pp.

In [12]:
target_label_index = 10
d = MIPVerify.find_adversarial_example(
    n1, 
    sample_image, 
    target_label_index, 
    Gurobi.Optimizer, 
    Dict(),
    norm_order = Inf,
    pp=MIPVerify.LInfNormBoundedPerturbationFamily(0.05)
)
Academic license - for non-commercial use only
[notice | MIPVerify]: Attempting to find adversarial example. Neural net predicted label is 8, target labels are [10]
[notice | MIPVerify]: Determining upper and lower bounds for the input to each non-linear unit.
  Calculating upper bounds: 100%|███████████████████████| Time: 0:00:00
Academic license - for non-commercial use only
  Calculating lower bounds: 100%|███████████████████████| Time: 0:00:00
  Imposing relu constraint: 100%|███████████████████████| Time: 0:00:00
  Calculating upper bounds:  10%|██▎                    |  ETA: 0:02:41
Academic license - for non-commercial use only
  Calculating upper bounds: 100%|███████████████████████| Time: 0:00:26
  Calculating lower bounds: 100%|███████████████████████| Time: 0:00:08
  Imposing relu constraint: 100%|███████████████████████| Time: 0:00:00
Academic license - for non-commercial use only
Academic license - for non-commercial use only
Gurobi Optimizer version 9.0.2 build v9.0.2rc0 (linux64)
Optimize a model with 3249 rows, 2405 columns and 54806 nonzeros
Model fingerprint: 0x64c1ce1f
Variable types: 2379 continuous, 26 integer (26 binary)
Coefficient statistics:
  Matrix range     [1e-05, 3e+01]
  Objective range  [1e+00, 1e+00]
  Bounds range     [5e-03, 1e+01]
  RHS range        [4e-03, 3e+01]
Presolve removed 2263 rows and 1564 columns
Presolve time: 0.16s
Presolved: 986 rows, 841 columns, 45508 nonzeros
Variable types: 815 continuous, 26 integer (26 binary)

Root relaxation: objective 6.198262e-04, 1141 iterations, 0.07 seconds

    Nodes    |    Current Node    |     Objective Bounds      |     Work
 Expl Unexpl |  Obj  Depth IntInf | Incumbent    BestBd   Gap | It/Node Time

     0     0    0.00062    0   16          -    0.00062      -     -    0s
     0     0    0.00524    0   18          -    0.00524      -     -    0s
     0     0    0.00545    0   18          -    0.00545      -     -    0s
     0     0    0.00568    0   18          -    0.00568      -     -    0s
     0     0    0.00569    0   18          -    0.00569      -     -    0s
     0     0    0.00570    0   18          -    0.00570      -     -    0s
     0     0    0.00570    0   18          -    0.00570      -     -    0s
     0     0    0.00572    0   18          -    0.00572      -     -    0s
     0     0    0.00572    0   18          -    0.00572      -     -    0s
     0     0    0.00572    0   18          -    0.00572      -     -    0s
H    0     0                       0.0462748    0.00572  87.6%     -    1s
     0     2    0.00573    0   17    0.04627    0.00573  87.6%     -    1s
H   59     2                       0.0460847    0.04270  7.35%  71.8    2s

Cutting planes:
  MIR: 10
  RLT: 3

Explored 70 nodes (5782 simplex iterations) in 2.53 seconds
Thread count was 4 (of 4 available processors)

Solution count 2: 0.0460847 0.0462748 

Optimal solution found (tolerance 1.00e-04)
Best objective 4.608468158892e-02, best bound 4.608468158892e-02, gap 0.0000%
Out[12]:
Dict{Any,Any} with 11 entries:
  :TargetIndexes      => [10]
  :SolveTime          => 2.52689
  :TotalTime          => 55.0765
  :Perturbation       => JuMP.VariableRef[noname noname … noname noname]…
  :PerturbedInput     => JuMP.VariableRef[noname noname … noname noname]…
  :TighteningApproach => "mip"
  :PerturbationFamily => linf-norm-bounded-0.05
  :SolveStatus        => OPTIMAL
  :Model              => A JuMP Model…
  :Output             => JuMP.GenericAffExpr{Float64,JuMP.VariableRef}[-0.01206…
  :PredictedIndex     => 8
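The dictionary d summarizes the result: d[:SolveStatus] reports OPTIMAL, and the objective value of the underlying JuMP model equals the minimal $L_\infty$ perturbation found. A quick check (a sketch, not part of the original notebook):

using JuMP

# The optimal objective matches the "Best objective" reported in the
# solver log above (~0.0460847), i.e. the minimal L-infinity perturbation.
JuMP.objective_value(d[:Model])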
In [13]:
using JuMP

perturbed_sample_image = JuMP.value.(d[:PerturbedInput])
Out[13]:
1×28×28×1 Array{Float64,4}:
[:, :, 1, 1] =
 0.0460847  0.0  0.0  0.0460847  0.0  …  0.0  0.0  0.0  0.0  0.0  0.0460847

[:, :, 2, 1] =
 0.0  0.0  0.0  0.0460847  0.0460847  0.0  …  0.0  0.0  0.0  0.0  0.0460847

[:, :, 3, 1] =
 0.0  0.0  0.0  0.0  0.0  0.0  0.0460847  …  0.0  0.0  0.0460847  0.0  0.0

...

[:, :, 26, 1] =
 0.0  0.0  0.0460847  0.0  0.0  0.0  0.0  …  0.0460847  0.0460847  0.0  0.0

[:, :, 27, 1] =
 0.0460847  0.0460847  0.0  0.0  0.0  0.0  …  0.0460847  0.0  0.0460847

[:, :, 28, 1] =
 0.0460847  0.0  0.0460847  0.0  0.0  0.0  …  0.0  0.0  0.0  0.0460847  0.0

As a sanity check, we feed the perturbed image into the neural net and inspect the activations in the final layer. We verify that the activation at the target label index 10 is now as large as any other (up to numerical tolerance): at the optimum it ties with the originally predicted index 8, since the solver only needs the target activation to match the largest competing activation.

In [14]:
perturbed_sample_image |> n1
Out[14]:
10-element Array{Float64,1}:
  0.6749450628745557
  0.6179790360668576
  0.3930321598089386
  0.29656185967035986
  0.2410105349548306
  0.16060021203574193
  0.5428526100447275
  4.288351484573889
 -0.22643018233076273
  4.288351484573882

We visualize the perturbed image and compare it to the original image. Since we are minimizing the $L_\infty$ norm, changes are made to many pixels, but the change to each pixel is barely noticeable.

In [15]:
colorview(Gray, perturbed_sample_image[1, :, :, 1])
Out[15]: (grayscale rendering of the perturbed image)
In [16]:
colorview(Gray, sample_image[1, :, :, 1])
Out[16]: (grayscale rendering of the original image)
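We can also confirm the perturbation magnitude numerically; the largest per-pixel change should match the objective value reported by the solver. A quick check (a sketch, not part of the original notebook):

# Maximum absolute per-pixel change; should equal the reported
# objective of ~0.0460847.
maximum(abs.(perturbed_sample_image - sample_image))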

That concludes this quickstart! The next tutorial will introduce you to each of the layers, and show how you can import your own neural network parameters.