# 1. Queries

This is the first in a series of lessons about working with astronomical data.

As a running example, we will replicate parts of the analysis in a recent paper, "Off the beaten path: Gaia reveals GD-1 stars outside of the main stream" by Adrian Price-Whelan and Ana Bonaca.

## Outline

1. First we'll make a connection to the Gaia server,

2. We will explore information about the database and the tables it contains,

3. We will write a query and send it to the server, and finally

4. We will download the response from the server as an Astropy Table.

## Query Language

In order to select data from a database, you have to compose a query, which is a program written in a "query language". The query language we'll use is ADQL, which stands for "Astronomical Data Query Language".

ADQL is a dialect of SQL (Structured Query Language), which is by far the most commonly used query language. Almost everything you will learn about ADQL also works in SQL.

The reference manual for ADQL is here. But you might find it easier to learn from this ADQL Cookbook.

## Using Jupyter

If you have not worked with Jupyter notebooks before, you might start with the tutorial from Jupyter.org called "Try Classic Notebook", or this tutorial from DataQuest.

There are two environments you can use to write and run notebooks:

• "Jupyter Notebook" is the original, and

• "Jupyter Lab" is a newer environment with more features.

For these lessons, you can use either one.

If you are too impatient for the tutorials, here are the most important things to know:

1. Notebooks are made up of code cells and text cells (and a few other less common kinds). Code cells contain code; text cells, like this one, contain explanatory text written in Markdown.

2. To run a code cell, click the cell to select it and press Shift-Enter. The output of the code should appear below the cell.

3. In general, notebooks only run correctly if you run every code cell in order from top to bottom. If you run cells out of order, you are likely to get errors.

4. You can modify existing cells, but then you have to run them again to see the effect.

5. You can add new cells, but again, you have to be careful about the order you run them in.

6. If you have added or modified cells and the behavior of the notebook seems strange, you can restart the "kernel", which clears all of the variables and functions you have defined, and run the cells again from the beginning.

• If you are using Jupyter Notebook, open the Kernel menu and select "Restart and Run All".

• In Jupyter Lab, open the Kernel menu and select "Restart Kernel and Run All Cells".

• In Colab, open the Runtime menu and select "Restart and run all".

Before you go on, you might want to explore the other menus and the toolbar to see what else you can do.

## Installing libraries

If you are running this notebook on Colab, you should run the following cell to install the libraries we'll need.

If you are running this notebook on your own computer, you might have to install these libraries yourself.

In [1]:
# If we're running on Colab, install libraries

import sys
IN_COLAB = 'google.colab' in sys.modules

if IN_COLAB:
    !pip install astroquery


## Connecting to Gaia

The library we'll use to get Gaia data is Astroquery. Astroquery provides Gaia, which is an object that represents a connection to the Gaia database.

We can connect to the Gaia database like this:

In [1]:
from astroquery.gaia import Gaia


This import statement creates a TAP+ connection; TAP stands for "Table Access Protocol", which is a network protocol for sending queries to the database and getting back the results.

## Databases and Tables

What is a database, anyway? Most generally, it can be any collection of data, but when we are talking about ADQL or SQL:

• A database is a collection of one or more named tables.

• Each table is a 2-D array with one or more named columns of data.

We can use Gaia.load_tables to get the names of the tables in the Gaia database. With the option only_names=True, it loads information about the tables, called "metadata", not the data itself.

In [2]:
tables = Gaia.load_tables(only_names=True)


The following for loop prints the names of the tables.

In [3]:
for table in tables:
    print(table.name)


So that's a lot of tables. The ones we'll use are:

• gaiadr2.gaia_source, which contains Gaia data from data release 2,

• gaiadr2.panstarrs1_original_valid, which contains the photometry data we'll use from PanSTARRS, and

• gaiadr2.panstarrs1_best_neighbour, which we'll use to cross-match each star observed by Gaia with the same star observed by PanSTARRS.

We can use load_table (not load_tables) to get the metadata for a single table. The name of this function is misleading, because it only downloads metadata, not the contents of the table.

In [4]:
meta = Gaia.load_table('gaiadr2.gaia_source')
meta


Jupyter shows that the result is an object of type TapTableMeta, but it does not display the contents.

To see the metadata, we have to print the object.

In [5]:
print(meta)


## Columns

The following loop prints the names of the columns in the table.

In [6]:
for column in meta.columns:
    print(column.name)


You can probably infer what many of these columns are by looking at the names, but you should resist the temptation to guess. To find out what the columns mean, read the documentation.

If you want to know what can go wrong when you don't read the documentation, you might like this article.

### Exercise

One of the other tables we'll use is gaiadr2.panstarrs1_original_valid. Use load_table to get the metadata for this table. How many columns are there and what are their names?

In [8]:
# Solution goes here


## Writing queries

By now you might be wondering how we download these tables. With tables this big, you generally don't. Instead, you use queries to select only the data you want.

A query is a string written in a query language like SQL; for the Gaia database, the query language is a dialect of SQL called ADQL.

Here's an example of an ADQL query.

In [8]:
query1 = """SELECT
TOP 10
source_id, ra, dec, parallax
"""


Python note: We use a triple-quoted string here so we can include line breaks in the query, which makes it easier to read.

The words in uppercase are ADQL keywords:

• SELECT indicates that we are selecting data (as opposed to adding or modifying data).

• TOP indicates that we only want the first 10 rows of the table, which is useful for testing a query before asking for all of the data.

• FROM specifies which table we want data from.

The third line is a list of column names, indicating which columns we want.

In this example, the keywords are capitalized and the column names are lowercase. This is a common style, but it is not required. ADQL and SQL are not case-sensitive.

Also, the query is broken into multiple lines to make it more readable. This is a common style, but not required. Line breaks don't affect the behavior of the query.

To run this query, we use the Gaia object, which represents our connection to the Gaia database, and invoke launch_job:

In [9]:
job = Gaia.launch_job(query1)
job


The result is an object that represents the job running on a Gaia server.

If you print it, it displays metadata for the forthcoming results.

In [11]:
print(job)


Don't worry about Results: None. That does not actually mean there are no results.

However, Phase: COMPLETED indicates that the job is complete, so we can get the results like this:

In [16]:
results = job.get_results()
type(results)


The type function indicates that the result is an Astropy Table.

Optional detail: Why is table repeated three times? The first is the name of the module, the second is the name of the submodule, and the third is the name of the class. Most of the time we only care about the last one. It's like the Linnaean name for the gorilla, which is Gorilla gorilla gorilla.

An Astropy Table is similar to a table in an SQL database except:

• SQL databases are stored on disk drives, so they are persistent; that is, they "survive" even if you turn off the computer. An Astropy Table is stored in memory; it disappears when you turn off the computer (or shut down this Jupyter notebook).

• SQL databases are designed to process queries. An Astropy Table can perform some query-like operations, like selecting columns and rows. But these operations use Python syntax, not SQL.

Jupyter knows how to display the contents of a Table.

In [17]:
results


Each column has a name, units, and a data type.

For example, the units of ra and dec are degrees, and their data type is float64, which is a 64-bit floating-point number, used to store measurements with a fraction part.

This information comes from the Gaia database, and has been stored in the Astropy Table by Astroquery.

### Exercise

Read the documentation of this table and choose a column that looks interesting to you. Add the column name to the query and run it again. What are the units of the column you selected? What is its data type?

In [18]:
# Solution goes here


## Asynchronous queries

launch_job asks the server to run the job "synchronously", which normally means it runs immediately. But synchronous jobs are limited to 2000 rows. For queries that return more rows, you should run them "asynchronously", which means they might take longer to get started.

If you are not sure how many rows a query will return, you can use the SQL command COUNT to find out how many rows are in the result without actually returning them. We'll see an example in the next lesson.
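As a preview (the next lesson works through this in detail), a COUNT query might look something like this sketch; the string is only assembled here, not sent to the server:

```python
# Sketch of a COUNT query: it returns a single number, the count
# of matching rows, rather than the rows themselves.
count_query = """SELECT
COUNT(source_id)
FROM gaiadr2.gaia_source
WHERE parallax < 1
"""
```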

The results of an asynchronous query are stored in a file on the server, so you can start a query and come back later to get the results. For anonymous users, files are kept for three days.

As an example, let's try a query that's similar to query1, with these changes:

• It selects the first 3000 rows, which exceeds the limit for synchronous queries.

• It selects two additional columns, pmra and pmdec, which are proper motions along the axes of ra and dec.

• It uses a new keyword, WHERE.

In [15]:
query2 = """SELECT
TOP 3000
source_id, ra, dec, pmra, pmdec, parallax
WHERE parallax < 1
"""


A WHERE clause indicates which rows we want; in this case, the query selects only rows "where" parallax is less than 1. This has the effect of selecting stars with relatively low parallax, which are farther away. We'll use this clause to exclude nearby stars that are unlikely to be part of GD-1.

WHERE is one of the most common clauses in ADQL/SQL, and one of the most useful, because it allows us to download only the rows we need from the database.

We use launch_job_async to submit an asynchronous query.

In [19]:
job = Gaia.launch_job_async(query2)
job


And here are the results.

In [20]:
results = job.get_results()
results


You might notice that some values of parallax are negative. As this FAQ explains, "Negative parallaxes are caused by errors in the observations." They have "no physical meaning," but they can be a "useful diagnostic on the quality of the astrometric solution."

### Exercise

The clauses in a query have to be in the right order. Go back and change the order of the clauses in query2 and run it again. The modified query should fail, but notice that you don't get much useful debugging information.

For this reason, developing and debugging ADQL queries can be really hard. A few suggestions that might help:

• Whenever possible, start with a working query, either an example you find online or a query you have used in the past.

• Make small changes and test each change before you continue.

• While you are debugging, use TOP to limit the number of rows in the result. That will make each test run faster, which reduces your development time.

• Launching test queries synchronously might make them start faster, too.

In [21]:
# Solution goes here


## Operators

In a WHERE clause, you can use any of the SQL comparison operators; here are the most common ones:

| Symbol | Operation |
|--------|-----------|
| `>` | greater than |
| `<` | less than |
| `>=` | greater than or equal |
| `<=` | less than or equal |
| `=` | equal |
| `!=` or `<>` | not equal |

Most of these are the same as Python, but some are not. In particular, notice that the equality operator is =, not ==. Be careful to keep your Python out of your ADQL!

You can combine comparisons using the logical operators:

• AND: true if both comparisons are true
• OR: true if either or both comparisons are true

Finally, you can use NOT to invert the result of a comparison.
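As a sketch, here's how these operators might combine in a single WHERE clause; the column names come from gaiadr2.gaia_source, but the thresholds are arbitrary values chosen for illustration:

```python
# Combining comparisons with AND, OR, and NOT.
# Parentheses group the OR so it is evaluated before the ANDs.
query_ops = """SELECT
TOP 10
source_id, ra, dec, pmra, pmdec, parallax
FROM gaiadr2.gaia_source
WHERE parallax < 1
AND (pmra > 0 OR pmdec > 0)
AND NOT parallax <= 0
"""
```

Remember that the equality operator here would be `=`, not Python's `==`.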

### Exercise

Read about SQL operators here and then modify the previous query to select rows where bp_rp is between -0.75 and 2.

In [22]:
# Solution goes here


bp_rp contains BP-RP color, which is the difference between two other columns, phot_bp_mean_mag and phot_rp_mean_mag. You can read about this variable here.

This Hertzsprung-Russell diagram shows the BP-RP color and luminosity of stars in the Gaia catalog (Copyright: ESA/Gaia/DPAC, CC BY-SA 3.0 IGO).

Selecting stars with bp-rp less than 2 excludes many class M dwarf stars, which have low temperature and low luminosity. A star like that at GD-1's distance would be hard to detect, so if it is detected, it is more likely to be in the foreground.

## Formatting queries

The queries we have written so far are string "literals", meaning that the entire string is part of the program. But writing queries yourself can be slow, repetitive, and error-prone.

It is often better to write Python code that assembles a query for you. One useful tool for that is the string format method.

As an example, we'll divide the previous query into two parts; a list of column names and a "base" for the query that contains everything except the column names.

Here's the list of columns we'll select.

In [32]:
columns = 'source_id, ra, dec, pmra, pmdec, parallax'


And here's the base; it's a string that contains at least one format specifier in curly brackets (braces).

In [25]:
query3_base = """SELECT
TOP 10
{columns}
WHERE parallax < 1
AND bp_rp BETWEEN -0.75 AND 2
"""


This base query contains one format specifier, {columns}, which is a placeholder for the list of column names we will provide.

To assemble the query, we invoke format on the base string and provide a keyword argument that assigns a value to columns.

In [26]:
query3 = query3_base.format(columns=columns)


In this example, the variable that contains the column names and the variable in the format specifier have the same name. That's not required, but it is a common style.

The result is a string with line breaks. If you display it, the line breaks appear as \n.

In [27]:
query3


But if you print it, the line breaks appear as... line breaks.

In [28]:
print(query3)


Notice that the format specifier has been replaced with the value of columns.

Let's run it and see if it works:

In [29]:
job = Gaia.launch_job(query3)
print(job)

In [30]:
results = job.get_results()
results


Good so far.

### Exercise

This query always selects sources with parallax less than 1. But suppose you want to take that upper bound as an input.

Modify query3_base to replace 1 with a format specifier like {max_parallax}. Now, when you call format, add a keyword argument that assigns a value to max_parallax, and confirm that the format specifier gets replaced with the value you provide.

In [31]:
# Solution goes here


## Summary

This notebook demonstrates the following steps:

1. Making a connection to the Gaia server,

2. Exploring information about the database and the tables it contains,

3. Writing a query and sending it to the server, and finally

4. Downloading the response from the server as an Astropy Table.

In the next lesson we will extend these queries to select a particular region of the sky.

## Best practices

• If you can't download an entire dataset (or it's not practical), use queries to select the data you need.

• Read the metadata and the documentation to make sure you understand the tables, their columns, and what they mean.

• Develop queries incrementally: start with something simple, test it, and add a little bit at a time.

• Use ADQL features like TOP and COUNT to test before you run a query that might return a lot of data.

• If you know your query will return fewer than 2000 rows, you can run it synchronously, which might complete faster. If it might return more than 2000 rows, you should run it asynchronously.

• ADQL and SQL are not case-sensitive, so you don't have to capitalize the keywords, but you should.

• ADQL and SQL don't require you to break a query into multiple lines, but you should.

Jupyter notebooks can be good for developing and testing code, but they have some drawbacks. In particular, if you run the cells out of order, you might find that variables don't have the values you expect.

To mitigate these problems:

• Make each section of the notebook self-contained. Try not to use the same variable name in more than one section.

• Keep notebooks short. Look for places where you can break your analysis into phases with one notebook per phase.
