In [6]:
from datetime import datetime


In this notebook, we use the miditime package to represent time-series data in sound. We use timidity to play the resulting midi file.

In [1]:
# A demo script that creates two notes

from miditime.miditime import MIDITime
# NOTE: this import works at least as of v1.1.3; for older versions or forks of miditime, you may need to use
# from miditime.MIDITime import MIDITime

# Instantiate the class with a tempo (120bpm is the default) and an output file destination.
mymidi = MIDITime(120, 'myfile.mid')

# Create a list of notes. Each note is a list: [time, pitch, attack, duration]
midinotes = [
    [0, 60, 200, 3],   # at 0 beats (the start), Middle C with attack 200, for 3 beats
    [10, 61, 200, 4]   # at 10 beats (5 seconds from the start at 120bpm), C#5 with attack 200, for 4 beats
]

# Add a track with those notes
mymidi.add_track(midinotes)

# Output the .mid file
mymidi.save_midi()
60 0 3 200
61 10 4 200

Your music file - with just two notes! - is now available for download. From the 'home' page for this notebook, select and then download the file. Double-click it on your machine. Whatever program you use for music on your own computer should be able to play it. A mid file is not, in itself, music in the sense that an mp3 is a compressed representation of the music. It is more akin to a digital representation of a score, which the computer plays with a default instrument (often a piano).

Writing music

Play with that script now, and add more notes. Try something simple - the notes for ‘Baa Baa Black Sheep’ are:

D, D, A, A, B, B, B, B, A
Baa, Baa, black, sheep, have, you, any, wool?

Middle C is '60' - this chart shows the numerical representation of each note on the 88-key keyboard.
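To turn that tune into data, each note name needs a MIDI number. As a quick sketch (assuming miditime's convention that middle C, C5, is pitch 60, so D5 = 62, A5 = 69 and B5 = 71), the opening phrase might look like this:

```python
# Note numbers assume miditime's convention that middle C (C5) is
# MIDI pitch 60, so D5 = 62, A5 = 69 and B5 = 71.
pitches = {'D': 62, 'A': 69, 'B': 71}
tune = ['D', 'D', 'A', 'A', 'B', 'B', 'B', 'B', 'A']

# One note per beat: each entry is [time, pitch, attack, duration]
midinotes = [[beat, pitches[note], 127, 1] for beat, note in enumerate(tune)]
```

Passing this midinotes list to add_track() and calling save_midi(), as in the demo script above, would produce the tune.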

Let's represent some data

The miditime package is written with time series data in mind:

my_data = [
    {'event_date': <datetime object>, 'magnitude': 3.4},
    {'event_date': <datetime object>, 'magnitude': 3.2},
    {'event_date': <datetime object>, 'magnitude': 3.6},
    {'event_date': <datetime object>, 'magnitude': 3.0},
    {'event_date': <datetime object>, 'magnitude': 5.6},
    {'event_date': <datetime object>, 'magnitude': 4.0}
]

Here we're dealing with earthquake data, expressed as a list of Python dictionaries (a structure very similar to JSON notation). The datetime object is a particular way of formatting the date: datetime(1753,6,8) would be June 8 1753. Following the example that miditime provides, we can build up a sonification of this earthquake data like this:

In [46]:
# Instantiate the class with a tempo (120bpm is the default), an output file destination, 
# the number of seconds you want to represent a year in the final song (default is 5 sec/year),
# the base octave (C5 is middle C, so the default is 5),
# and how many octaves you want your output to range over (default is 1)
from miditime.miditime import MIDITime
mymidi = MIDITime(120, 'second-example.mid', 0.1, 5, 1)
In [47]:
my_data = [
    {'event_date': datetime(1792,6,8), 'magnitude': 3.4},
    {'event_date': datetime(1800,3,4), 'magnitude': 3.2},
    {'event_date': datetime(1810,1,16), 'magnitude': 3.6},
    {'event_date': datetime(1812,8,23), 'magnitude': 3.0},
    {'event_date': datetime(1813,10,10), 'magnitude': 5.6},
    {'event_date': datetime(1824,1,5), 'magnitude': 4.0}
]

Now we convert those dates into an integer. Oddly enough, this is done by counting time since the 'epoch'. This epoch date is January 1 1970; the reasons why have to do with the evolution of Unix. For dates before 1970 we end up with a negative number, but this is not a problem.
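The arithmetic behind this conversion is just date subtraction. A quick sketch using only the standard library, checked against the first earthquake in our data:

```python
from datetime import datetime

epoch = datetime(1970, 1, 1)

# Days between the first earthquake and the Unix epoch;
# the date falls before 1970, so the result is negative.
days = (datetime(1792, 6, 8) - epoch).days
print(days)  # -64854
```

This matches the -64854.0 that miditime's days_since_epoch() helper produces below.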

First we convert the date so that it is expressed with reference to the epoch:

In [48]:
my_data_epoched = [{'days_since_epoch': mymidi.days_since_epoch(d['event_date']), 'magnitude': d['magnitude']} for d in my_data]
In [49]:
my_data_epoched
[{'days_since_epoch': -64854.0, 'magnitude': 3.4},
 {'days_since_epoch': -62029.0, 'magnitude': 3.2},
 {'days_since_epoch': -58424.0, 'magnitude': 3.6},
 {'days_since_epoch': -57474.0, 'magnitude': 3.0},
 {'days_since_epoch': -57061.0, 'magnitude': 5.6},
 {'days_since_epoch': -53322.0, 'magnitude': 4.0}]

And then we convert that integer into something reasonable for music. Per the miditime package:

Convert your integer date/time to something reasonable for a song. For example, at 120 beats per minute, you'll need to scale the data down a lot to avoid a very long song if your data spans years. This uses the seconds_per_year attribute you set at the top, so if your date is converted to something other than days you may need to do your own conversion. But if your dataset spans years and your dates are in days (with fractions is fine), use the beat() helper method.
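Under the hood this scaling is simple arithmetic: days become years, years become seconds (using the seconds-per-year setting), and seconds become beats (two per second at 120bpm). A rough reimplementation of what beat() computes with our settings - a sketch, not miditime's actual code - looks like this:

```python
SECONDS_PER_YEAR = 0.1   # the third argument we passed to MIDITime
TEMPO = 120              # beats per minute, i.e. 2 beats per second

def beat_sketch(days_since_epoch):
    """Sketch of miditime's beat() scaling: days -> years -> seconds -> beats."""
    years = days_since_epoch / 365.25
    seconds = years * SECONDS_PER_YEAR
    return round(seconds * (TEMPO / 60), 2)

print(beat_sketch(-64854.0))  # -35.51, matching the notebook output below
```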

In [50]:
my_data_timed = [{'beat': mymidi.beat(d['days_since_epoch']), 'magnitude': d['magnitude']} for d in my_data_epoched]
In [51]:
my_data_timed
[{'beat': -35.51, 'magnitude': 3.4},
 {'beat': -33.97, 'magnitude': 3.2},
 {'beat': -31.99, 'magnitude': 3.6},
 {'beat': -31.47, 'magnitude': 3.0},
 {'beat': -31.24, 'magnitude': 5.6},
 {'beat': -29.2, 'magnitude': 4.0}]
In [52]:
# Get the earliest date in your series so you can set that to 0 in the MIDI:
start_time = my_data_timed[0]['beat']

Finally, we define how the data get mapped against the pitch. If the data were percentages ranging from 0.01 (i.e. 1%) to 0.99 (99%), we would set scale_pct below to run between 0 and 1. If you weren't dealing with percentages, you'd use your lowest value and your highest value (if, for instance, your data were counts of pottery over time).
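The linear scaling itself is the familiar normalisation formula. Here is a minimal sketch of the computation - the name echoes miditime's linear_scale_pct helper, but this is our own arithmetic, not the library's code:

```python
def linear_scale_pct_sketch(domain_min, domain_max, value):
    """Where does value sit, as a fraction from 0 to 1, between domain_min and domain_max?"""
    return (value - domain_min) / (domain_max - domain_min)

# The weakest quake in our data (magnitude 3.0) sits at the bottom of the 3-5.7 domain:
print(linear_scale_pct_sketch(3, 5.7, 3.0))  # 0.0
# The strongest (5.6) sits near the top:
print(linear_scale_pct_sketch(3, 5.7, 5.6))
```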

In [53]:
# Set up some functions to scale your other variable (magnitude in our case) to match your desired mode/key and octave range. 
#There are helper methods to assist this scaling, very similar to a charting library like D3.
# You can choose a linear or logarithmic scale.

def mag_to_pitch_tuned(magnitude):
    # Where does this data point sit in the domain of your data? (i.e. the min magnitude is 3, the max is 5.6)
    scale_pct = mymidi.linear_scale_pct(3, 5.7, magnitude)

    # Another option: linear scale, reverse order (the optional 'True' reverses the scale,
    # so the highest value returns the lowest percentage)
    # scale_pct = mymidi.linear_scale_pct(3, 5.7, magnitude, True)

    # Another option: logarithmic scale, reverse order
    # scale_pct = mymidi.log_scale_pct(3, 5.7, magnitude, True)

    # Pick a range of notes. This allows you to play in a key.
    c_major = ['C', 'D', 'E', 'F', 'G', 'A', 'B']

    #Find the note that matches your data point
    note = mymidi.scale_to_note(scale_pct, c_major)

    #Translate that note to a MIDI pitch
    midi_pitch = mymidi.note_to_midi_pitch(note)

    return midi_pitch
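The last two steps can be sketched without the library: with a base octave of 5 (C5 = 60), an octave range of 1, and the seven notes of C major, the scaled percentage simply indexes into the scale. This is a back-of-the-envelope reimplementation, not miditime's actual code:

```python
# Semitone offsets of the C major scale above the base C
C_MAJOR_OFFSETS = [0, 2, 4, 5, 7, 9, 11]   # C D E F G A B
BASE_C = 60                                 # C5, our base octave

def pct_to_pitch_sketch(scale_pct):
    """Map a 0-1 percentage onto one octave of C major, as a MIDI pitch."""
    index = min(int(scale_pct * len(C_MAJOR_OFFSETS)), len(C_MAJOR_OFFSETS) - 1)
    return BASE_C + C_MAJOR_OFFSETS[index]

# Magnitude 3.4 scaled over the 3-5.7 domain:
print(pct_to_pitch_sketch((3.4 - 3) / 2.7))  # 62, i.e. D5, matching the output below
```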
In [54]:
# Now build the note list
note_list = []

for d in my_data_timed:
    note_list.append([
        d['beat'] - start_time,
        mag_to_pitch_tuned(d['magnitude']),
        100,  # velocity
        1  # duration, in beats
    ])
In [55]:
note_list
[[0.0, 62, 100, 1],
 [1.5399999999999991, 60, 100, 1],
 [3.5199999999999996, 62, 100, 1],
 [4.039999999999999, 60, 100, 1],
 [4.27, 71, 100, 1],
 [6.309999999999999, 64, 100, 1]]
In [56]:
# Add a track with those notes
mymidi.add_track(note_list)

# Output the .mid file
mymidi.save_midi()
62 0.0 1 100
60 1.5399999999999991 1 100
62 3.5199999999999996 1 100
60 4.039999999999999 1 100
71 4.27 1 100
64 6.309999999999999 1 100

Download the file, and open it in your computer's music program. How does it sound? It might not be 'music', but that's not the point.

Try feeding actual archaeological data that you've retrieved from other exercises into this program. Save each result under a different name; you can then begin to mix the data as unique voices using something like GarageBand.

For other approaches to sonification, please see this tutorial. For other creative uses of sonification by an undergraduate, see Daniel Ruten's Sonic Word Clouds.
