ANU Archives

Current version: v1.1.2

This repository contains Jupyter notebooks that help you work with data from the ANU Archives. The notebooks currently focus on the Sydney Stock Exchange stock and share lists. As the content note indicates:

These are large format bound volumes of the official lists that were posted up for the public to see - 3 times a day - forenoon, noon and afternoon - at the close of the trading session in the call room at the Sydney Stock Exchange. The closing prices of stocks and shares were entered in by hand on pre-printed sheets.

There are 199 volumes covering the period from 1901 to 1950, containing more than 70,000 pages. Each page is divided into columns. The number of columns varies across the collection. Each column is divided into rows labelled with printed company or stock names. The prices are written alongside the company names.

We're currently working on ways of extracting company and share names, as well as the handwritten prices, from the digitised images. For more information, see this repository. The notebooks below provide ways of navigating, visualising, and using the digitised pages.

See the GLAM Workbench for more details.

Notebooks

Data files

  • CSV-formatted list of all 70,000+ pages in the bound volumes, including their date and session (Morning, Noon, Afternoon). Duplicate images are excluded.
  • CSV-formatted list of all dates within the period of the volumes. Includes the number of pages available for each date, and the number of pages expected (the number of pages produced each day changes across the collection). On dates with no pages, the reason field is used to record details of holidays or other interruptions to trading (some with links to Trove).
  • CSV-formatted list of holidays in NSW from 1901 to 1950.
  • Full data about missing, misplaced, and duplicated pages is saved in page_data_master.py. This data is combined with the holiday data to generate the complete page and date lists above.
  • Print and handwritten data extracted from the images using Amazon Textract has been saved in a series of CSV files available from Cloudstor. There's one file per year, and each row in the CSV represents a single row from one column of a page. This data is in the process of being checked and cleaned, and is likely to change. The easiest way to explore this data is through the Datasette interface, which provides full-text and structured searching.
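As a rough sketch of how the pages list described above might be explored with pandas, the snippet below counts the pages available for each date, the same figure recorded in the dates list. The column names (`date`, `session`, `page`) and the sample rows are illustrative assumptions, not the actual CSV headers; check the data files for the real names.

```python
import pandas as pd
from io import StringIO

# Hypothetical sample mimicking the pages CSV described above.
# Column names are assumptions -- check the real file's headers.
pages_csv = StringIO(
    "date,session,page\n"
    "1901-01-02,Morning,1\n"
    "1901-01-02,Noon,2\n"
    "1901-01-02,Afternoon,3\n"
    "1901-01-03,Morning,4\n"
)
pages = pd.read_csv(pages_csv, parse_dates=["date"])

# Count pages available per date (the dates list records this
# alongside the number of pages expected for that date).
pages_per_date = pages.groupby("date").size()
print(pages_per_date)
```

The same grouping, run against the full pages CSV, could be compared with the expected-pages column in the dates list to spot missing pages.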

Cite as

See the GLAM Workbench or Zenodo for up-to-date citation details.

This repository is part of the GLAM Workbench.

If you think this project is worthwhile, you might like to support me on GitHub.