In [ ]:

```
import numpy as np
```

In [ ]:

```
# Low-level constructor: the memory is not initialised, so the values are arbitrary
rr = np.ndarray(shape=(2,4,3),dtype=float)
rr
```

In [ ]:

```
# From list or tuple
rr = np.array([[3.,4.],[5.,6.]])
print(rr)
```

In [ ]:

```
rr1 = np.array([['Claire','Paola'],['Scott','Danny']]) # It doesn't have to be a numerical type
print(rr1)
```

In [ ]:

```
rr2 = np.array([['Claire',10], ['Paola', 6]]) # It doesn't have to be only 1 type
print(rr2)
```
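One caveat worth knowing: an array still has a single dtype, so with mixed inputs NumPy silently coerces everything to a common type (strings here). A quick check:

In [ ]:

```
import numpy as np

# An array has a single dtype, so mixed inputs are coerced to a
# common type -- here everything has become a string
rr2 = np.array([['Claire', 10], ['Paola', 6]])
print(rr2.dtype)        # a Unicode string dtype such as <U6
print(type(rr2[0, 1]))  # the 10 is now the string '10'
```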

In [ ]:

```
# Initialise to 0 or 1.
rr = np.zeros((2,3),dtype=float)
print(rr)
rr1 = np.ones((2,4),dtype=np.int32)
print(rr1)
```

In [ ]:

```
# Same shape as an existing array. It's possible to choose the data-type with the dtype argument.
rr2 = np.zeros_like(rr1)
rr2
```
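Relatedly, `np.full` initialises to an arbitrary value, and `np.ones_like` / `np.full_like` mirror `np.zeros_like`:

In [ ]:

```
import numpy as np

print(np.full((2, 3), 7.))   # fill with an arbitrary value
# shape and dtype taken from an existing array
print(np.full_like(np.ones((2, 2), dtype=np.int32), 9))
```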

In [ ]:

```
# Evenly spaced values
rr2 = np.arange(5,45,2)
rr2
```
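`arange` fixes the step; if you would rather fix the number of points, `np.linspace` does that (endpoints included by default):

In [ ]:

```
import numpy as np

np.linspace(0., 1., 5)   # 5 evenly spaced points from 0 to 1 inclusive
```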

In [ ]:

```
# Reshaping an existing array
rr2 = rr2.reshape((5,2,2))
rr2
```

Do you remember the csv example from the last training? Here it is with numpy.

In [ ]:

```
li = np.loadtxt('test.txt',delimiter=',',skiprows=2)
print(li)
# For the third format example, simply take the transpose
print(li.T)
# You want the columns in separate arrays?
c1,c2,c3 = np.loadtxt('test.txt', delimiter=',',skiprows=2,unpack=True)
print(c1,c2,c3)
```

Indexing works the same as for lists etc., except for the multi-dimensional part:

In [ ]:

```
print(f"First element {rr2[0,0]}\n")
print(f"First index of the second dimension \n {rr2[:,0,:]}\n")
print(f"First 2 indexes along the 1st dimension and all other indexes along other dimensions\n {rr2[:2,:,:]}\n")
print(f"Stride {rr2[0:5:2,0,1]}\n")
```

There is a generic form to say "all other indexes along all other dimensions", i.e. "everything else", without specifying the number of dimensions in your array. It can be used to indicate all dimensions before or after the specified slice:

In [ ]:

```
print(f"Specify slices in all dimensions: \n{rr2[:2,:,:]}\n")
print(f"Generic form:\n{rr2[:2,...]}\n")
print(f"Any number of dimensions specified before:\n{rr2[:2,0,...]}\n")
print(f"Works for the start of the array as well:\n{rr2[...,0]}\n")
```

Obviously, `numpy` has a lot of handy functions for common operations. For example, if you want the mean of an array:

In [ ]:

```
rr2.mean()
```

That's handy, but what is even handier is the ability to calculate the mean over a given dimension only. For example, `rr2` is 3D. Let's say the dimensions are time, latitude and longitude respectively, and you want to calculate the time average at each spatial point:

In [ ]:

```
rr2.mean(axis=0) # Remember indexes start at 0
```

There is already a lot out there. You probably won't need to develop much yourself.
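For instance, other reductions work exactly the same way, and `axis` can be a tuple to reduce over several dimensions at once:

In [ ]:

```
import numpy as np

rr2 = np.arange(5, 45, 2).reshape((5, 2, 2))  # same array as above
print(rr2.sum(), rr2.std(), rr2.argmax())
# axis can be a tuple: average over the last two dimensions at once,
# e.g. a spatial mean at each time for (time, lat, lon) data
print(rr2.mean(axis=(1, 2)))
```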

Numpy has a date data type: `datetime64`. Do not confuse Numpy's `datetime64` with Python's `datetime`! They do not have the same methods or abilities. Both can be useful.

`datetime64` is relatively simple; it doesn't have a lot of built-in capabilities. For fancy date calculations in Python, the best option is probably `pandas`. `pandas` is built upon numpy, so it can readily convert `datetime64` to its own date and time objects.

Note `xarray` and `pandas` are also very compatible with each other.

In [ ]:

```
print(np.datetime64('2020-04-15T05:00','ms'))
print(np.datetime64('2020-04-10','M'))
```
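The two types can be converted into each other. As a sketch, `.item()` on a `datetime64` scalar returns the corresponding standard-library `datetime` object:

In [ ]:

```
import datetime
import numpy as np

t = np.datetime64('2020-04-15T05:00')
py_t = t.item()          # back to a datetime.datetime
print(type(py_t), py_t)
```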

You can do simple calculations. For example, the number of days in February 2036:

In [ ]:

```
print(np.datetime64('2036-03','D')-np.datetime64('2036-02','D'))
```

Be careful of the unit:

In [ ]:

```
print(np.datetime64('2036-03','M')-np.datetime64('2036-02','M'))
```

It's possible to convert units. So if you want both the number of months and the number of days:

In [ ]:

```
timeA = np.datetime64('1988-05','M')
timeB = np.datetime64('1990-03','M')
delta = timeB - timeA
print(delta)
deltaD = timeB.astype('datetime64[D]') - timeA.astype('datetime64[D]')
print(deltaD)
```

Note, `delta` and `deltaD` are not the strings printed out above. They are `numpy.timedelta64` objects. The `print()` function gives a pretty output because of the way the object was designed.

In [ ]:

```
deltaD
```
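To get a plain number out of a `timedelta64`, one option is to divide by a one-unit `timedelta64` (a sketch using the same dates as above, redefined here so the cell is standalone):

In [ ]:

```
import numpy as np

timeA = np.datetime64('1988-05', 'M')
timeB = np.datetime64('1990-03', 'M')
delta = timeB - timeA
deltaD = timeB.astype('datetime64[D]') - timeA.astype('datetime64[D]')
print(delta / np.timedelta64(1, 'M'))   # number of months as a float
print(deltaD / np.timedelta64(1, 'D'))  # number of days as a float
```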

Numpy is great, but it is very generic and it only gives you the raw data. The coder has to keep track of the additional information: is there a time dimension? What field does this data represent? And so on.

Xarray introduces labelled arrays, which means you get self-describing arrays: name of the field, names of the dimensions, coordinates for the dimensions, etc.

As such it works very well with the netCDF format, since this is also a self-describing format.

In [ ]:

```
import xarray as xr
```

In [ ]:

```
# open netcdf file
ds = xr.open_dataset("http://dapds00.nci.org.au/thredds/dodsC/ua8/ARCCSS_Data-10/v1-0/A/A_ACCESS1-0_N48_mon_198001.nc")
# see how all the info is there
ds
# print just a variable to see the variable level attributes.
```

In [ ]:

```
# Variables are stored in the Dataset in a dictionary so you can refer to them by name
rls = ds['rls']
rls
```

Just like numpy, arrays have common functions as methods. But unlike numpy, you can identify dimensions by name instead of index position.

Xarray arrays work with most numpy functions. If not, you can access the underlying numpy array.
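As a sketch with an invented array (the names `time`, `lon` and `demo` are made up for illustration), you can also build a labelled array by hand and pull out the raw numpy data with `.values`:

In [ ]:

```
import numpy as np
import xarray as xr

da = xr.DataArray(np.arange(6.).reshape(2, 3),
                  dims=('time', 'lon'),
                  coords={'time': [2000, 2001], 'lon': [0., 120., 240.]},
                  name='demo')
print(da.mean(dim='lon'))   # reduce over a named dimension
print(da.values)            # the underlying numpy array
```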

In [ ]:

```
print('Global mean\n',rls.mean(),'\n')
print('Latitudinal mean\n',rls.mean(dim='lat'))
```

Remember how the variables have the dimension names and the coordinate arrays attached to them? This means you can select data using the coordinate names and values rather than indexes.

In [ ]:

```
rls.sel(lat=-85)
```

Xarray can even look up values for you with nearest-neighbour matching. You don't have to know the exact coordinate values in your array: for example, you could ask for the point nearest to 87°S in latitude. I would not use this to actually interpolate a whole field! Just use it to save on typing, or if you have a projected grid.

In [ ]:

```
rls.sel(lat=-87, method='nearest')
```

It is very easy to do a quick plot of your data. It isn't a plot ready for publication, but it lets you quickly visualise your fields. And the nice touch is that Xarray will automatically use the metadata to label the plot.

Before plotting with matplotlib in a Jupyter notebook, you need to add a special line so the plots appear in the notebook. This only needs to be done once per notebook, not for each plot.

In [ ]:

```
%matplotlib inline
```

In [ ]:

```
rls.plot()
```

Note we haven't imported matplotlib ourselves; because xarray uses it, it is loaded for us.

There is a very simple way to save data back to a netCDF file. It isn't necessarily the fastest way, but you should only need to write out analysed fields from Python, which means relatively small amounts of data.

Note, netCDF supports inline compression, so you should ALWAYS save your data compressed. With inline compression the file looks the same and access is the same: you don't need to uncompress it before reading the data.

In [ ]:

```
# Compression
encod = {}
for var in ds.data_vars: # data_vars stores the names of the variables in a dataset, as strings
    encod[var] = {'zlib': True}
# Write to file
ds.to_netcdf('test.nc',encoding=encod)
```

This was a very, very quick presentation of `xarray`. We were simply trying to give you the very basics, and especially to highlight the philosophy of the package. We will run an `xarray` training soon. In the meantime, if you want to know more, you can always run through the xarray quick overview.

In addition, the CMS team has a blog with quite a few posts using `xarray` and Python in general. Feel free to check those as well.
