In another notebook, I constructed a heatmap displaying the places of publication of articles returned by a search in Trove's newspapers zone.
I suggested that it would be interesting to visualise changes over time. This notebook does just that by creating an animated heatmap.
The key difference here is that instead of just getting and processing a single Trove API request, we'll need to fire off a series of API requests — one for each time interval.
You can use this notebook to visualise your own search queries, just edit the search parameters where indicated.
If you haven't used one of these notebooks before, they're basically web pages in which you can write, edit, and run live code. They're meant to encourage experimentation, so don't feel nervous. Just try running a few cells and see what happens!
# This creates a variable called 'api_key', paste your key between the quotes
# <-- Then click the run icon
api_key = 'YOUR API KEY'
# This displays a message with your key
print('Your API key is: {}'.format(api_key))
You don't need to edit anything here. Just run the cells to load the bits and pieces we need.
# Import the libraries we need
# <-- Click the run icon
import requests
import pandas as pd
import os
import altair as alt
import json
import folium
from folium.plugins import HeatMapWithTime
import numpy as np
from tqdm.auto import tqdm
# Set up default parameters for our API query
# <-- Click the run icon
params = {
    'zone': 'newspaper',
    'encoding': 'json',
    'facet': 'title',
    'n': '1',
    'key': api_key
}
api_url = 'http://api.trove.nla.gov.au/v2/result'
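If you're curious about what a finished query looks like, you can build a request without sending it and inspect the URL. This is just an illustration; the query and key below are placeholders:

```python
import requests

# Example parameters (the query and key here are placeholders)
example_params = {
    'q': 'weather',
    'zone': 'newspaper',
    'encoding': 'json',
    'facet': 'title',
    'n': '1',
    'key': 'YOUR API KEY'
}

# Prepare (but don't send) the request to see the URL it would generate
req = requests.Request('GET', 'http://api.trove.nla.gov.au/v2/result', params=example_params).prepare()
print(req.url)
```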
# <-- Click the run icon
def format_facets(data):
    '''
    Extract and normalise the facet data.
    '''
    # Check to make sure we have results
    try:
        facets = data['response']['zone'][0]['facets']['facet']['term']
    except TypeError:
        # No results!
        raise
    else:
        # Convert to a DataFrame
        df = pd.DataFrame(facets)
        # Select the columns we want
        df = df[['display', 'count']]
        # Rename the columns
        df.columns = ['title_id', 'total']
        # Make sure the total column is a number
        df['total'] = pd.to_numeric(df['total'], errors='coerce')
        return df
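To see what format_facets is unpacking, here's a small standalone sketch that walks the same path through a mock API response. The mock contains only the fields the function uses; real responses include many more:

```python
import pandas as pd

# A mock of the nested JSON the Trove API returns (structure trimmed
# to just the fields used above; values are invented for illustration)
mock_data = {
    'response': {
        'zone': [{
            'facets': {
                'facet': {
                    'term': [
                        {'display': '35', 'count': '12'},
                        {'display': '11', 'count': '3'}
                    ]
                }
            }
        }]
    }
}

# The same steps as format_facets(), inlined
facets = mock_data['response']['zone'][0]['facets']['facet']['term']
df = pd.DataFrame(facets)[['display', 'count']]
df.columns = ['title_id', 'total']
df['total'] = pd.to_numeric(df['total'], errors='coerce')
print(df)
```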
def prepare_data(data):
    '''
    Reformat the facet data, merge with locations, and then generate a list of locations.
    '''
    # Check for results
    try:
        df = format_facets(data)
    except TypeError:
        # If there are no results just return an empty list
        hm_data = []
    else:
        # Merge facets data with geolocated list of titles
        df_located = pd.merge(df, locations, on='title_id', how='left')
        # Group results by place, and calculate the total results for each
        df_totals = df_located.groupby(['place', 'latitude', 'longitude']).sum()
        hm_data = []
        for place in df_totals.index:
            # Get the total
            total = df_totals.loc[place]['total']
            # Add the coordinates of the place to the list of locations as many times as there are articles
            hm_data += ([[place[1], place[2]]] * int(total))
    return hm_data
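The list that prepare_data builds is just coordinate pairs, with each place repeated once per matching article. A standalone sketch with invented places and totals shows the shape of the output:

```python
# Hypothetical totals for two places (coordinates invented for illustration)
totals = {
    ('Sydney', -33.87, 151.21): 2,
    ('Perth', -31.95, 115.86): 1
}

hm_data = []
for (place, lat, lng), total in totals.items():
    # Repeat each coordinate pair once per article, as prepare_data() does
    hm_data += [[lat, lng]] * total

print(hm_data)
```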
# Get the geolocated titles data
locations = pd.read_csv('data/trove-newspaper-titles-locations.csv', dtype={'title_id': 'int64'})
# Only keep the first instance of each title
locations.drop_duplicates(subset=['title_id'], keep='first', inplace=True)
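Some newspapers are associated with more than one place, so the titles file can contain several rows with the same title_id. A toy example (invented data) shows what drop_duplicates does here:

```python
import pandas as pd

# A toy version of the locations file: one title listed against two places
locations = pd.DataFrame({
    'title_id': [35, 35, 11],
    'place': ['Sydney', 'Parramatta', 'Melbourne']
})

# Keep only the first place listed for each title, as the cell above does
locations = locations.drop_duplicates(subset=['title_id'], keep='first')
print(locations)
```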
This is where you set your search keywords. Change the value assigned to params['q'] in the cell below to anything you might enter in the Trove simple search box. Don't include a date range, as we'll be handling that separately. For example:
params['q'] = 'weather AND wragge'
params['q'] = '"Clement Wragge"'
params['q'] = 'text:"White Australia Policy"'
You can also limit the results to specific categories. To only search for articles, include this line:
params['l-category'] = 'Article'
# Enter your search parameters
# This can be anything you'd enter in the Trove simple search box
params['q'] = 'text:"White Australia"'
# Remove the "#" symbol from the line below to limit the results to the article category
#params['l-category'] = 'Article'
In this example we'll use years as our time interval. We could easily change this to months, or even individual days for a fine-grained analysis.
start_year = 1880
end_year = 1950
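If you wanted monthly intervals instead, you could generate the time index with pandas. This is just a sketch of the index itself; you'd also need to filter each API request by month, and the exact facet parameter for that isn't shown here (check the Trove API documentation):

```python
import pandas as pd

start_year = 1880
end_year = 1882

# One label per month in the range, e.g. '1880-01', '1880-02', ...
months = pd.period_range(start=f'{start_year}-01', end=f'{end_year}-12', freq='M')
time_index = [str(m) for m in months]
print(len(time_index), time_index[:3])
```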
We need to make an API request for each year in our date range, so we'll construct a loop.
The cell below generates two lists. The first, hm_series, is a list containing the data from each API request. The second, time_index, is a list of the years we're getting data for. These two lists should be the same length: one dataset for each year.
# <-- Click the run icon
hm_series = []
time_index = []
for year in tqdm(range(start_year, end_year + 1)):
    time_index.append(year)
    # The year facet needs the decade facet to be set as well --
    # the decade is just the first three digits of the year
    decade = str(year)[:3]
    params['l-decade'] = decade
    params['l-year'] = year
    response = requests.get(api_url, params=params)
    data = response.json()
    hm_data = prepare_data(data)
    hm_series.append(hm_data)
To create an animated heatmap we just need to feed HeatMapWithTime the hm_series data and the time_index.
# <-- Click the run icon
# Create the map
m = folium.Map(
    location=[-30, 135],
    zoom_start=4
)
# Add the heatmap data!
HeatMapWithTime(
    hm_series,
    index=time_index,
    auto_play=True
).add_to(m)
# <-- Click the run icon
display(m)
Created by Tim Sherratt for the GLAM Workbench.
Support this project by becoming a GitHub sponsor.