In this notebook, we're following the TensorFlow for Poets tutorial, using Python to make an archaeological image classifier. You need TensorFlow 1.15. This notebook binder already has TensorFlow installed; if you were working on your own machine, you'd install it with

pip install --upgrade "tensorflow==1.15.*"

We need to get some training data, organized like so (imagining we were building a classifier that recognized Roman pottery types):

|
|-training-images
  |
  |-Roman_pottery
      |-terrasig
      |-african_red_slip
      |-vernice_nera

That is, each category gets its own directory, and the category label is the directory name.

If you examine the directory structure in this notebook binder, you'll see in the tf_files/gallery folder that we've already provided two categories with some images in each. We grabbed this data by scraping the images at the Atlas of Roman Pottery and by downloading data from the Portable Antiquities Scheme.

This notebook is only a demonstration; for your own project you'd want several thousand images overall (and, as a rule of thumb, at least twenty per category), placed in the tf_files folder.
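The retraining script derives the category labels from the subdirectory names, so getting the folder layout right is the whole job of data preparation. Here is a minimal sketch of that idea (the categories are the hypothetical ones from the example above, built in a temporary directory):

```python
from pathlib import Path
import tempfile

# Build a toy version of the layout described above (hypothetical categories).
root = Path(tempfile.mkdtemp()) / "training-images" / "Roman_pottery"
for category in ["terrasig", "african_red_slip", "vernice_nera"]:
    (root / category).mkdir(parents=True)

# Each subdirectory name becomes a category label.
labels = sorted(p.name for p in root.iterdir() if p.is_dir())
print(labels)  # ['african_red_slip', 'terrasig', 'vernice_nera']
```

In a real project, the image files would sit inside each of those category folders; the script never needs a separate labels file.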

So let's get started.

The Retraining Script

The script we're going to use is called retrain. You can look at the script here (don't make any changes to it!). We're going to call the script in the codeblock below, but let's take a look at it first:

python -m scripts.retrain \
  --bottleneck_dir=tf_files/bottlenecks \
  --how_many_training_steps=500 \
  --model_dir=tf_files/models/ \
  --summaries_dir=tf_files/training_summaries/"${ARCHITECTURE}" \
  --output_graph=tf_files/retrained_graph.pb \
  --output_labels=tf_files/retrained_labels.txt \
  --architecture="${ARCHITECTURE}" \
  --image_dir=tf_files/{training_images}

The last flag, --image_dir, is simply the path to the training images, relative to the script.

See that ${ARCHITECTURE} bit? That refers to an already-trained model that we are going to extend with our images. What we are doing is adding another layer to the model, one that takes what the model has already learned about the world and gives it just a bit more information about our particular use case. If we use the Inception-v3 model, that means we're adding (2048 x n categories) model parameters corresponding to weights in the network. If we use the MobileNet architecture, which 'sees' only 1001 dimensions, we're adding (1001 x n categories) model parameters. We're going to use MobileNet, which has

... 32 different Mobilenet models to choose from, with a variety of file size and latency options. The first number can be '1.0', '0.75', '0.50', or '0.25' to control the size, and the second controls the input image size, either '224', '192', '160', or '128', with smaller sizes running faster

We're going to use mobilenet_0.50_224 for now.
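So the architecture string is just mobilenet, a size multiplier, and an input image size, joined with underscores. As a quick illustration (this helper is hypothetical, not part of the retrain script), we can check a candidate string against the options quoted above:

```python
# Hypothetical helper: check that an architecture string follows the
# mobilenet_<size>_<input> naming described in the quote above.
SIZES = {"1.0", "0.75", "0.50", "0.25"}
INPUTS = {"224", "192", "160", "128"}

def is_valid_mobilenet(name):
    parts = name.split("_")
    return (len(parts) == 3 and parts[0] == "mobilenet"
            and parts[1] in SIZES and parts[2] in INPUTS)

print(is_valid_mobilenet("mobilenet_0.50_224"))  # True
print(is_valid_mobilenet("mobilenet_0.60_224"))  # False: 0.60 isn't an option
```

Smaller multipliers and smaller input sizes train and run faster, at some cost in accuracy.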

Because our training data is very small in this demo (anything less than 10,000 images counts as small!), we have to add another flag to our command setting the validation batch size; otherwise we'll get an error message. A value of -1 tells the script to use the entire validation set in a single batch. Note also that we're only training for 500 steps; more steps will generally get better results (though with diminishing returns). So we'll add this flag:

--validation_batch_size=-1

The codeblock below has our finished command. When you run it, it might look like there have been some errors, but just scroll down the result pane. Eventually you'll start seeing data on how well the training is going. This might take a few minutes, depending on how much data we're processing. You'll know it's finished when you see INFO:tensorflow:Final test accuracy and the asterisk to the top left of the codeblock disappears.

In [ ]:
!python -m scripts.retrain \
  --bottleneck_dir=tf_files/bottlenecks \
  --how_many_training_steps=500 \
  --model_dir=tf_files/models/ \
  --summaries_dir=tf_files/training_summaries/mobilenet_0.50_224 \
  --output_graph=tf_files/retrained_graph.pb \
  --output_labels=tf_files/retrained_labels.txt \
  --architecture mobilenet_0.50_224 \
  --validation_batch_size=-1 \
  --image_dir=tf_files/gallery

This will run for a while. Elements to explore: different mobilenet architectures, increasing iterations, more data. Once it’s finished training, let’s test it:

In [ ]:
!python -m scripts.label_image \
    --graph=tf_files/retrained_graph.pb  \
    --image=tf_files/testing/B4-1-f.jpg

The result indicates the probability that the image belongs to each category. Find an image of an amphora or a Roman fibula, load it into the testing subfolder, and modify the code above to try it out. Or just change the path to one of the other sample photos.
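Those per-label scores come from a softmax over the network's final layer, which is why they always sum to 1: a high score for one category necessarily pushes the others down. A quick illustration with made-up final-layer scores for two hypothetical categories:

```python
import math

# Made-up final-layer scores for two hypothetical categories.
scores = {"terrasig": 2.3, "african_red_slip": 0.4}

# Softmax: exponentiate each score, then normalize so the values sum to 1.
total = sum(math.exp(s) for s in scores.values())
probs = {label: math.exp(s) / total for label, s in scores.items()}

for label, p in sorted(probs.items(), key=lambda kv: -kv[1]):
    print(f"{label}: {p:.3f}")
```

So a classifier that has only ever seen pottery will still assign every image, pottery or not, to one of its pottery categories; the probabilities tell you relative confidence among the labels it knows, nothing more.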

Congratulations - you've used transfer learning to build a (very small) image classifier for archaeology!