import os
import sqlite3
import sys
sys.path.append("..")
import nivapy3 as nivapy
import numpy as np
import pandas as pd
import utils
This notebook explores the 2021 Q4 data from Eurofins, sent by Marianne Isebakke on 21.01.2022 at 15:21. The code performs initial data exploration and cleaning, with the aim of creating a tidy dataset in SQLite that can be used for further analysis.
Note: This file uses an updated export from Vannmiljø reflecting corrections made to two station codes in August 2021 - see the e-mail from Kjetil received 27.05.2021 at 17:15 for details.
# Choose dataset to process
lab = "Eurofins"
year = 2021
qtr = 4
version = 2
Using a database will provide basic checks on data integrity and consistency. For this project, three tables will be sufficient:

- stations: station locations and metadata
- parameters_units: the mapping between lab and Vannmiljø parameter names and units, plus plausible value ranges
- water_chemistry: the measured values themselves

The code below creates this basic database structure, which will be populated later.
# Create database
fold_path = f"../../output/{lab.lower()}_{year}_q{qtr}_v{version}"
if not os.path.exists(fold_path):
os.makedirs(fold_path)
db_path = os.path.join(fold_path, "kalk_data.db")
if os.path.exists(db_path):
os.remove(db_path)
eng = sqlite3.connect(db_path, detect_types=sqlite3.PARSE_DECLTYPES)
# Turn off synchronous writes and journaling for performance
# (safe here, since the database is rebuilt from scratch on each run)
eng.execute("PRAGMA synchronous = OFF")
eng.execute("PRAGMA journal_mode = OFF")
eng.execute("PRAGMA foreign_keys = ON")
# Create stations table
sql = (
"CREATE TABLE stations "
"( "
" fylke text NOT NULL, "
" vassdrag text NOT NULL, "
" station_name text NOT NULL, "
" station_number text, "
" vannmiljo_code text NOT NULL, "
" vannmiljo_name text, "
" utm_east real NOT NULL, "
" utm_north real NOT NULL, "
" utm_zone integer NOT NULL, "
" lon real NOT NULL, "
" lat real NOT NULL, "
" liming_status text NOT NULL, "
" comment text, "
" PRIMARY KEY (vannmiljo_code) "
")"
)
eng.execute(sql)
# Create parameters table
sql = (
"CREATE TABLE parameters_units "
"( "
" vannmiljo_name text NOT NULL UNIQUE, "
" vannmiljo_id text NOT NULL UNIQUE, "
" vannmiljo_unit text NOT NULL, "
" vestfoldlab_name text NOT NULL UNIQUE, "
" vestfoldlab_unit text NOT NULL, "
" vestfoldlab_to_vm_conv_fac real NOT NULL, "
" eurofins_name text NOT NULL UNIQUE, "
" eurofins_unit text NOT NULL, "
" eurofins_to_vm_conv_fac real NOT NULL, "
" min real NOT NULL, "
" max real NOT NULL, "
" PRIMARY KEY (vannmiljo_id) "
")"
)
eng.execute(sql)
# Create chemistry table
sql = (
"CREATE TABLE water_chemistry "
"( "
" vannmiljo_code text NOT NULL, "
" sample_date datetime NOT NULL, "
" lab text NOT NULL, "
" period text NOT NULL, "
" depth1 real, "
" depth2 real, "
" parameter text NOT NULL, "
" flag text, "
" value real NOT NULL, "
" unit text NOT NULL, "
" PRIMARY KEY (vannmiljo_code, sample_date, depth1, depth2, parameter), "
" CONSTRAINT vannmiljo_code_fkey FOREIGN KEY (vannmiljo_code) "
" REFERENCES stations (vannmiljo_code) "
" ON UPDATE NO ACTION ON DELETE NO ACTION, "
" CONSTRAINT parameter_fkey FOREIGN KEY (parameter) "
" REFERENCES parameters_units (vannmiljo_id) "
" ON UPDATE NO ACTION ON DELETE NO ACTION "
")"
)
eng.execute(sql)
<sqlite3.Cursor at 0x7fd39bc59180>
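With foreign keys enabled, the schema itself now rejects inconsistent data. As a quick illustration (hypothetical, not part of the original workflow), inserting a chemistry record for a station code that is not in the stations table raises an error:

# Hypothetical illustration of the integrity checks the schema enforces:
# inserting a chemistry record for an unknown station violates the
# station (and parameter) foreign keys and raises an IntegrityError.
try:
    eng.execute(
        "INSERT INTO water_chemistry "
        "(vannmiljo_code, sample_date, lab, period, parameter, value, unit) "
        "VALUES ('NO-SUCH-STN', '2021-10-01', 'Eurofins', 'new', 'PH', 7.0, '<ubenevnt>')"
    )
except sqlite3.IntegrityError as e:
    print(f"Rejected, as expected: {e}")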
Station details are stored in ../../data/active_stations_2020.xlsx, which is a tidied version of Øyvind's original file here:

K:\Prosjekter\langtransporterte forurensninger\Kalk Tiltaksovervåking\12 KS vannkjemi\Vannlokaliteter koordinater_kun aktive stasj 2020.xlsx

Note that corrections (e.g. adjusted station co-ordinates) have been made to the tidied file, but not to the original on K:. The version in this repository should therefore be used as the "master" copy.
# Read station data
stn_df = pd.read_excel(r"../../data/active_stations_2020.xlsx", sheet_name="data")
stn_df = nivapy.spatial.utm_to_wgs84_dd(stn_df)
print("The following stations are missing spatial co-ordinates:")
# 'lat != lat' is only True for NaN, so this selects rows with missing co-ordinates
stn_df.query("lat != lat")
The following stations are missing spatial co-ordinates:
| fylke | vassdrag | station_name | station_number | vannmiljo_code | vannmiljo_name | utm_east | utm_north | utm_zone | liming_status | comment | lat | lon |
|---|---|---|---|---|---|---|---|---|---|---|---|---|

(no rows)
print("The following stations do not have a code in Vannmiljø:")
# NaN != NaN, so this selects rows with no Vannmiljø code
stn_df.query("vannmiljo_code != vannmiljo_code")
The following stations do not have a code in Vannmiljø:
| fylke | vassdrag | station_name | station_number | vannmiljo_code | vannmiljo_name | utm_east | utm_north | utm_zone | liming_status | comment | lat | lon |
|---|---|---|---|---|---|---|---|---|---|---|---|---|

(no rows)
# Map
stn_map = nivapy.spatial.quickmap(
stn_df.dropna(subset=["lat"]),
lat_col="lat",
lon_col="lon",
popup="station_name",
cluster=True,
kartverket=True,
aerial_imagery=True,
)
stn_map.save("../../pages/stn_map.html")
stn_map
# Add to database
# Drop stations lacking a Vannmiljø code or co-ordinates,
# since these columns are NOT NULL in the 'stations' schema
stn_df.dropna(subset=["vannmiljo_code", "lat"], inplace=True)
stn_df.to_sql(name="stations", con=eng, if_exists="append", index=False)
220
The file ../../data/parameter_unit_mapping.xlsx provides a lookup between parameter names & units used by the labs and those in Vannmiljø. It also contains plausible ranges (in Vannmiljø units) for each parameter. These ranges have been chosen using the values already in Vannmiljø as a reference. However, some of the data in Vannmiljø might itself be spurious, so it would be good to refine these ranges based on domain knowledge, if possible.
Note: Concentrations reported as exactly zero are likely to be errors, because most (all?) lab methods should report an LOQ instead.
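As a quick illustration (not part of the pipeline), exact zeros could be flagged in the combined long-format frame df built later in this notebook. TEMP and ANC are excluded because, per the ranges in the mapping table, zero or negative values are physically plausible for them:

# Hypothetical check for exact-zero concentrations, assuming the combined
# long-format frame 'df' built later in this notebook. TEMP and ANC are
# excluded, since zero (or negative) values are plausible for them.
zero_mask = (df["value"] == 0) & ~df["parameter"].isin(["TEMP", "ANC"])
print(f"{zero_mask.sum()} records report a value of exactly zero.")
df.loc[zero_mask].head()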
# Read parameter mappings
par_df = utils.get_par_unit_mappings()
# Add to database
par_df.to_sql(name="parameters_units", con=eng, if_exists="append", index=False)
par_df
| | vannmiljo_name | vannmiljo_id | vannmiljo_unit | vestfoldlab_name | vestfoldlab_unit | vestfoldlab_to_vm_conv_fac | eurofins_name | eurofins_unit | eurofins_to_vm_conv_fac | min | max |
|---|---|---|---|---|---|---|---|---|---|---|---|
0 | Temperatur | TEMP | °C | Temp | °C | 1.000000 | Temp | °C | 1.000000 | -10 | 30 |
1 | pH | PH | <ubenevnt> | pH | enh | 1.000000 | pH | enh | 1.000000 | 1 | 10 |
2 | Konduktivitet | KOND | mS/m | Kond | mS/m | 1.000000 | Kond | ms/m | 1.000000 | 0 | 100 |
3 | Total alkalitet | ALK | mmol/l | Alk | mmol/l | 1.000000 | Alk | mmol/l | 1.000000 | 0 | 2 |
4 | Totalfosfor | P-TOT | µg/l P | Tot-P | µg/l | 1.000000 | Tot-P | µg/l | 1.000000 | 0 | 500 |
5 | Totalnitrogen | N-TOT | µg/l N | Tot-N | µg/l | 1.000000 | Tot-N | µg/l | 1.000000 | 0 | 4000 |
6 | Nitrat | N-NO3 | µg/l N | NO3 | µg/l | 1.000000 | NO3 | µg/l | 1.000000 | 0 | 2000 |
7 | Totalt organisk karbon (TOC) | TOC | mg/l C | TOC | mg/l | 1.000000 | TOC | mg/l | 1.000000 | 0 | 100 |
8 | Reaktivt aluminium | RAL | µg/l Al | RAl | µg/l | 1.000000 | RAl | µg/l | 1.000000 | 0 | 500 |
9 | Ikke-labilt aluminium | ILAL | µg/l Al | ILAl | µg/l | 1.000000 | ILAl | µg/l | 1.000000 | 0 | 500 |
10 | Labilt aluminium | LAL | µg/l Al | LAl | µg/l | 1.000000 | LAl | µg/l | 1.000000 | 0 | 500 |
11 | Klorid | CL | mg/l | Cl | mg/l | 1.000000 | Cl | mg/l | 1.000000 | 0 | 100 |
12 | Sulfat | SO4 | mg/l | SO4 | mg/l | 1.000000 | SO4 | mg/l | 1.000000 | 0 | 20 |
13 | Kalsium | CA | mg/l | Ca | mg/l | 1.000000 | Ca | mg/l | 1.000000 | 0 | 500 |
14 | Kalium | K | mg/l | K | mg/l | 1.000000 | K | mg/l | 1.000000 | 0 | 10 |
15 | Magnesium | MG | mg/l | Mg | mg/l | 1.000000 | Mg | mg/l | 1.000000 | 0 | 100 |
16 | Natrium | NA | mg/l | Na | mg/l | 1.000000 | Na | mg/l | 1.000000 | 0 | 50 |
17 | Totalt silikat | SIO2 | µg/l Si | SIO2 | mg/l | 467.543276 | SIO2 | µg/l | 0.467543 | 0 | 7000 |
18 | Syrenøytraliserende kapasitet (ANC) | ANC | µekv/l | ANC | µekv/l | 1.000000 | ANC | µekv/l | 1.000000 | -1000 | 6000 |
The Vannmiljø dataset is large and reading from Excel is slow; the code below takes a couple of minutes to run.
Note from the output below that there are more than 1600 "duplicated" samples in the Vannmiljø dataset, i.e. rows where the station code, sample date, sample depth, lab and parameter name are all the same, but a different value is reported. It would be helpful to know why these duplicates exist: are they reanalysis values, where only one of the duplicates should be used, or are they genuine (in which case, should they be averaged or kept separate)? For the moment, I will ignore these values.
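The de-duplication itself happens inside utils.handle_duplicates, whose implementation is not shown here. A minimal sketch of the assumed behaviour, using the his_df frame built in the next cell and a hypothetical output path:

# Sketch of the duplicate handling assumed to happen inside
# utils.handle_duplicates (the real implementation is in utils.py).
key_cols = ["vannmiljo_code", "sample_date", "lab", "depth1", "depth2", "par_unit"]

# keep=False marks every member of each duplicated key group
dup_mask = his_df.duplicated(subset=key_cols, keep=False)

# Save the conflicting rows for manual inspection, then drop them all
his_df.loc[dup_mask].sort_values(key_cols).to_csv("duplicates.csv", index=False)
his_df = his_df.loc[~dup_mask]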
# Read historic data from Vannmiljø
his_df = utils.read_historic_data(
r"../../data/vannmiljo_export_2012-19_2021-08-16.xlsx"
)
# Tidy lab names for clarity
his_df["lab"].replace(
{"NIVA": "NIVA (historic)", "VestfoldLAB AS": "VestfoldLAB (historic)"},
inplace=True,
)
# Add label for data period
his_df["period"] = "historic"
# Print summary
n_stns = len(his_df["vannmiljo_code"].unique())
print(f"The number of unique stations with data is: {n_stns}.\n")
# Handle duplicates
his_dup_csv = r"../../output/vannmiljo_historic/vannmiljo_duplicates.csv"
his_df = utils.handle_duplicates(his_df, his_dup_csv, action="drop")
his_df.head()
The number of unique stations with data is: 211.

There are 1620 duplicated records (same station_code-date-depth-parameter, but different value). These will be dropped.
| | vannmiljo_code | sample_date | lab | depth1 | depth2 | par_unit | flag | value | period |
|---|---|---|---|---|---|---|---|---|---|
0 | 027-28435 | 2012-01-02 | NIVA (historic) | 0.0 | 0.0 | ALK_mmol/l | = | 0.06 | historic |
1 | 027-28435 | 2012-01-02 | NIVA (historic) | 0.0 | 0.0 | CA_mg/l | = | 1.23 | historic |
2 | 027-28435 | 2012-01-02 | NIVA (historic) | 0.0 | 0.0 | ILAL_µg/l Al | = | 18.00 | historic |
3 | 027-28435 | 2012-01-02 | NIVA (historic) | 0.0 | 0.0 | KOND_mS/m | = | 3.80 | historic |
4 | 027-28435 | 2012-01-02 | NIVA (historic) | 0.0 | 0.0 | PH_<ubenevnt> | = | 6.24 | historic |
The code below reads the Excel template provided by Eurofins and reformats it to the same structure (parameter names, units etc.) as the data in Vannmiljø.
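utils.read_data_template_to_wide and utils.wide_to_long are project helpers whose internals are not shown here. A rough sketch of the reshaping step under assumed behaviour (wide_df is hypothetical; the real helpers also rename parameters and convert units):

# Rough sketch of the reshape assumed to happen in utils.wide_to_long.
# Sample identifiers stay as id_vars; every other column is treated as
# a parameter_unit pair.
id_cols = ["vannmiljo_code", "sample_date", "lab", "depth1", "depth2"]
long_df = wide_df.melt(id_vars=id_cols, var_name="par_unit", value_name="value")
long_df = long_df.dropna(subset=["value"])

# Split '<' LOD flags from the numeric values, e.g. '<0.5' -> ('<', 0.5)
long_df["flag"] = long_df["value"].astype(str).str.extract(r"^(<)", expand=False)
long_df["value"] = long_df["value"].astype(str).str.lstrip("<").astype(float)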
# Read new data
new_df = utils.read_data_template_to_wide(
f"../../data/{lab.lower()}_data_{year}_q{qtr}_v{version}.xlsx",
sheet_name="results",
lab=lab,
)
utils.perform_basic_checks(new_df)
new_df = utils.wide_to_long(new_df, lab)
# Add label for data period
new_df["period"] = "new"
# Handle duplicates
dup_csv = os.path.join(
fold_path, f"{lab.lower()}_{year}_q{qtr}_v{version}_duplicates.csv"
)
new_df = utils.handle_duplicates(new_df, dup_csv, action="drop")
new_df.head()
Checking stations:
    The following location IDs have inconsistent names within this template:
    The following location names have multiple IDs within this template:
Checking sample dates:
    Done.
Checking for non-numeric data:
    Done.
Checking for values less than zero:
    Done.
Checking for consistent LOD values:
    Done.
Checking NO3 and TOTN:
    Done.
Checking Al fractions:
    The following samples have LAl != RAl - ILAl:

| | vannmiljo_code | sample_date | depth1 | depth2 | RAl_µg/l | ILAl_µg/l | LAl_µg/l | LAl_Calc_µg/l |
|---|---|---|---|---|---|---|---|---|
| 525 | 026-30849 | 2021-12-07 07:10:00 | 0 | 0 | 16.0 | 5.0 | 0.0 | 11.0 |
| 607 | 036-58749 | 2021-12-08 07:10:18 | 0 | 0 | 9.4 | 9.2 | 59.0 | 0.2 |
| 826 | 079-58880 | 2021-10-05 07:10:00 | 0 | 0 | 7.4 | 5.0 | 0.0 | 2.4 |
| 831 | 079-58880 | 2021-12-07 07:10:00 | 0 | 0 | 9.4 | 5.0 | 0.0 | 4.4 |
| 845 | 079-58882 | 2021-12-07 07:10:00 | 0 | 0 | 9.6 | 5.0 | 0.0 | 4.6 |
| 852 | 079-58883 | 2021-12-07 07:10:00 | 0 | 0 | 8.7 | 5.0 | 0.0 | 3.7 |
| 857 | 067-82879 | 2021-10-06 07:10:30 | 0 | 0 | 19.0 | 8.8 | 10.0 | 10.2 |
| 892 | 064-81557 | 2021-12-07 07:10:29 | 0 | 0 | 18.0 | 9.6 | 8.1 | 8.4 |
| 905 | 064-62171 | 2021-12-07 07:10:29 | 0 | 0 | 24.0 | 9.8 | 14.0 | 14.2 |
| 911 | 045-58814 | 2021-12-10 07:10:45 | 0 | 0 | 7.7 | 5.0 | 0.0 | 2.7 |
| 919 | 045-58815 | 2021-12-10 07:10:45 | 0 | 0 | 13.0 | 5.0 | 0.0 | 8.0 |
| 957 | 062-58820 | 2021-12-08 07:10:18 | 0 | 0 | 6.3 | 5.0 | 0.0 | 1.3 |

There are 158 duplicated records (same station_code-date-depth-parameter, but different value). These will be dropped.
| | vannmiljo_code | sample_date | lab | depth1 | depth2 | par_unit | flag | value | period |
|---|---|---|---|---|---|---|---|---|---|
0 | 027-79278 | 2021-10-05 07:10:00 | Eurofins | 0.0 | 0.0 | TEMP_°C | nan | 11.0 | new |
1 | 027-79278 | 2021-11-02 07:05:44 | Eurofins | 0.0 | 0.0 | TEMP_°C | nan | 8.0 | new |
2 | 027-79278 | 2021-12-07 07:10:12 | Eurofins | 0.0 | 0.0 | TEMP_°C | nan | 0.0 | new |
3 | 019-58793 | 2021-11-02 07:05:37 | Eurofins | 0.0 | 0.0 | TEMP_°C | nan | 10.0 | new |
4 | 021-45780 | 2021-12-15 07:15:50 | Eurofins | 0.0 | 0.0 | TEMP_°C | nan | 0.5 | new |
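The Al-fraction check in the output above compares reported labile aluminium against the value implied by the other two fractions. A sketch of that consistency test (assumed behaviour of utils.perform_basic_checks; new_wide is a hypothetical wide frame with the column names shown in the output):

# Sketch of the Al mass-balance test: labile Al should equal reactive
# minus non-labile Al, i.e. LAl = RAl - ILAl.
df_al = new_wide[["vannmiljo_code", "sample_date", "RAl_µg/l", "ILAl_µg/l", "LAl_µg/l"]].copy()
df_al["LAl_Calc_µg/l"] = df_al["RAl_µg/l"] - df_al["ILAl_µg/l"]

# Allow a small tolerance for rounding in the reported values
bad = df_al.loc[~np.isclose(df_al["LAl_µg/l"], df_al["LAl_Calc_µg/l"], atol=0.5)]
print(f"{len(bad)} samples have LAl != RAl - ILAl.")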
Combine the historic and new datasets into a single dataframe in "long" format.
# Combine
df = pd.concat([his_df, new_df], axis="rows")
# Separate par and unit
df[["parameter", "unit"]] = df["par_unit"].str.split("_", n=1, expand=True)
del df["par_unit"]
df.reset_index(drop=True, inplace=True)
df.head()
| | vannmiljo_code | sample_date | lab | depth1 | depth2 | flag | value | period | parameter | unit |
|---|---|---|---|---|---|---|---|---|---|---|
0 | 027-28435 | 2012-01-02 | NIVA (historic) | 0.0 | 0.0 | = | 0.06 | historic | ALK | mmol/l |
1 | 027-28435 | 2012-01-02 | NIVA (historic) | 0.0 | 0.0 | = | 1.23 | historic | CA | mg/l |
2 | 027-28435 | 2012-01-02 | NIVA (historic) | 0.0 | 0.0 | = | 18.00 | historic | ILAL | µg/l Al |
3 | 027-28435 | 2012-01-02 | NIVA (historic) | 0.0 | 0.0 | = | 3.80 | historic | KOND | mS/m |
4 | 027-28435 | 2012-01-02 | NIVA (historic) | 0.0 | 0.0 | = | 6.24 | historic | PH | <ubenevnt> |
# Apply correction to historic SIO2 from VestfoldLAB
# (mg/l SiO2 to µg/l Si; see the parameters_units table)
df["value"] = np.where(
(df["lab"] == "VestfoldLAB (historic)") & (df["parameter"] == "SIO2"),
df["value"] * 467.5432,
df["value"],
)
# Reclassify (nitrate + nitrite) to nitrate
df["parameter"].replace({"N-SNOX": "N-NO3"}, inplace=True)
A simple method for preliminary quality control is to check whether parameter values are within sensible ranges (as defined in the parameters_units table; see Section 4 above). I believe this screening should be implemented differently for the historic (i.e. Vannmiljø) and new datasets, as follows:

- For the historic data in Vannmiljø, values outside the plausible ranges should be removed from the dataset entirely. This is because we intend to use the Vannmiljø data as a reference against which new values will be compared, so it is important that the dataset does not contain anything too strange. Ideally, the reference dataset would be carefully manually curated to make it as good as possible, but I'm not sure we have the resources in this project to thoroughly quality-assess the data already in Vannmiljø. Dealing with any obvious issues is a good start, though.
- For the new data, values outside the plausible ranges should be highlighted and checked with the reporting lab. A sketch of this screening logic follows the note below.
Note: At present, my code will remove any concentration values of exactly zero from the historic dataset. Check with Øyvind whether this is too strict.
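The screening itself is implemented in utils.check_data_ranges; the following is a simplified sketch of the assumed logic, using the ranges from par_df (Section 4). Note how a '<=' comparison on the lower bound also catches the exact zeros mentioned above:

# Simplified sketch of the range screening (assumed; the real logic is
# in utils.check_data_ranges). Plausible ranges come from the
# parameters_units mapping.
ranges = par_df.set_index("vannmiljo_id")[["min", "max"]]
chk = df.join(ranges, on="parameter")
out_of_range = (chk["value"] <= chk["min"]) | (chk["value"] >= chk["max"])

# Historic (Vannmiljø) data: drop out-of-range rows outright
df = df.loc[~(out_of_range & (chk["period"] == "historic"))]

# New data: list out-of-range rows so they can be queried with the lab
print(chk.loc[out_of_range & (chk["period"] == "new")])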
# Check ranges
df = utils.check_data_ranges(df)
Checking data ranges for the 'historic' period.
    KOND: Maximum value of 309.00 is greater than or equal to upper limit (100.00).
    LAL: Minimum value of 0.00 is less than or equal to lower limit (0.00).
    SO4: Minimum value of 0.00 is less than or equal to lower limit (0.00).
    CA: Minimum value of 0.00 is less than or equal to lower limit (0.00).
    SIO2: Maximum value of 51897.30 is greater than or equal to upper limit (7000.00).
Checking data ranges for the 'new' period.
    KOND: Maximum value of 246.00 is greater than or equal to upper limit (100.00).
    N-NO3: Maximum value of 3400.00 is greater than or equal to upper limit (2000.00).
    CL: Maximum value of 530.00 is greater than or equal to upper limit (100.00).
    SO4: Maximum value of 93.80 is greater than or equal to upper limit (20.00).
    K: Maximum value of 14.00 is greater than or equal to upper limit (10.00).
    NA: Maximum value of 360.00 is greater than or equal to upper limit (50.00).
Dropping problem rows from historic data.
    Dropping rows for KOND.
    Dropping rows for LAL.
    Dropping rows for SO4.
    Dropping rows for CA.
    Dropping rows for SIO2.
# Add to database
df.to_sql(
name="water_chemistry",
con=eng,
if_exists="append",
index=False,
method="multi",
chunksize=1000,
)
171401
eng.close()
Points to note from the initial data exploration:

- The new dataset contains duplicated records (see ./output/eurofins_2021_q4_{version}/eurofins_2021_q4_{version}_duplicates.csv for a list). Based on previous experience, these are probably genuine duplicates (e.g. flood samples), but check with Eurofins.
- All the samples listed below have already been reanalysed by Eurofins and the original results confirmed. However, these values are very high. I suspect the sample from Boenfossen, at least, may be contaminated?
I have re-run this notebook with the cleaned "v2" dataset from Marianne (received 20.02.2022 at 15:30). Values in the v2 spreadsheet are identical to those in v1 (and the outliers etc. identified are therefore also the same), with the following exceptions:

- The Ca sample from 036-47959 (Markosvatnet utløp) on 21.12.2021 is more reasonable after reanalysis.
- The v2 spreadsheet says that the TP sample from 045-58816 (Uskedalselva utløp) on 16.12.2021 has changed following reanalysis, but the value entered is still the same as in v1 (which is very high). I think the spreadsheet needs updating?

Otherwise, all issues are the same as for version 1, above.