import numpy as np
import modelskill as ms
fn = '../tests/testdata/SW/HKZN_local_2017_DutchCoast.dfsu'
mr = ms.model_result(fn, name='HKZN_local', item=0)
mr.data
Dfsu2D
number of elements: 958
number of nodes: 570
projection: LONG/LAT
number of items: 15
time: 23 steps with dt=10800.0s
      2017-10-27 00:00:00 -- 2017-10-29 18:00:00
Configuration of the comparison; see SW_DutchCoast.ipynb for more details.
o1 = ms.PointObservation('../tests/testdata/SW/HKNA_Hm0.dfs0', item=0, x=4.2420, y=52.6887, name="HKNA")
o2 = ms.PointObservation("../tests/testdata/SW/eur_Hm0.dfs0", item=0, x=3.2760, y=51.9990, name="EPL")
o3 = ms.TrackObservation("../tests/testdata/SW/Alti_c2_Dutch.dfs0", item=3, name="c2")
cc = ms.match([o1, o2, o3], mr)
cc
<ComparerCollection>
Comparer: HKNA
Comparer: EPL
Comparer: c2
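A single comparer can also be pulled out of the collection and inspected on its own. A minimal sketch, assuming the collection supports indexing by observation name:

cmp = cc["HKNA"]   # the Comparer for the HKNA point observation
cmp.skill()        # skill table for this observation only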
Standard set of metrics
cc.skill()
| observation | n | bias | rmse | urmse | mae | cc | si | r2 |
|---|---|---|---|---|---|---|---|---|
| HKNA | 386 | -0.315380 | 0.447311 | 0.317210 | 0.341344 | 0.968323 | 0.102122 | 0.847042 |
| EPL | 67 | -0.077520 | 0.227927 | 0.214339 | 0.192689 | 0.969454 | 0.082866 | 0.929960 |
| c2 | 113 | -0.004701 | 0.352470 | 0.352439 | 0.294758 | 0.975050 | 0.128010 | 0.899121 |
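Each column in the table above corresponds to a metric function that can also be called directly on plain arrays. A minimal sketch with made-up numbers; the exact exported names are an assumption, but hit_ratio is imported from modelskill.metrics later on this page and functions such as bias, rmse and mae live in the same module:

import numpy as np
import modelskill.metrics as mtr

obs = np.array([1.2, 1.5, 1.9])   # hypothetical observed values
mod = np.array([1.1, 1.6, 2.1])   # hypothetical modelled values

mtr.bias(obs, mod)   # mean deviation of the model from the observations
mtr.rmse(obs, mod)   # root mean squared error
mtr.mae(obs, mod)    # mean absolute error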
Select a specific metric
cc.skill(metrics="mean_absolute_error")
| observation | n | mean_absolute_error |
|---|---|---|
| HKNA | 386 | 0.341344 |
| EPL | 67 | 0.192689 |
| c2 | 113 | 0.294758 |
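Several metrics can be requested at once by passing a list. A minimal sketch, assuming skill() accepts a list of metric names just as it accepts a single name:

cc.skill(metrics=["bias", "rmse", "mae"])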
Some metrics have parameters, which require a bit of special treatment.
import modelskill.metrics as mtr
from modelskill.metrics import hit_ratio
def hit_ratio_05_pct(obs, model):
    # hit ratio with an acceptance tolerance of 0.5, expressed in percent
    return hit_ratio(obs, model, 0.5) * 100

def hit_ratio_01_pct(obs, model):
    # hit ratio with an acceptance tolerance of 0.1, expressed in percent
    return hit_ratio(obs, model, 0.1) * 100
cc.skill(metrics=[hit_ratio_05_pct, hit_ratio_01_pct])
| observation | n | hit_ratio_05_pct | hit_ratio_01_pct |
|---|---|---|---|
| HKNA | 386 | 80.051813 | 17.098446 |
| EPL | 67 | 98.507463 | 28.358209 |
| c2 | 113 | 85.840708 | 17.699115 |
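If the same parametrized metric is needed at many tolerances, writing one wrapper per value quickly gets repetitive. A small factory can generate the wrappers instead; this is a sketch under the assumption that skill() uses each function's __name__ as the column label, which is consistent with the wrapper functions above:

def hit_ratio_pct(tolerance):
    # build a hit-ratio metric (in percent) for a given acceptance tolerance
    def metric(obs, model):
        return hit_ratio(obs, model, tolerance) * 100
    metric.__name__ = f"hit_ratio_{tolerance}_pct".replace(".", "")
    return metric

cc.skill(metrics=[hit_ratio_pct(0.5), hit_ratio_pct(0.1)])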
And you are, of course, always free to define your own special metric, or to import metrics from other libraries such as scikit-learn (see the sketch at the end of this section).
def my_special_metric_with_long_descriptive_name(obs, model):
    # penalize only under-prediction: keep the residuals where obs > model
    res = obs - model
    res_clipped = np.clip(res, 0, np.inf)
    return np.mean(np.abs(res_clipped))
# short alias to avoid long column names in output
def mcae(obs, model):
    return my_special_metric_with_long_descriptive_name(obs, model)
cc.skill(metrics=mcae)
| observation | n | mcae |
|---|---|---|
| HKNA | 386 | 0.328362 |
| EPL | 67 | 0.135104 |
| c2 | 113 | 0.149729 |
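As an example of reusing a metric from another library, a scikit-learn regression metric can be wrapped in exactly the same way. A minimal sketch, assuming scikit-learn is installed (it is not a modelskill dependency); mean_absolute_percentage_error is a standard function in sklearn.metrics with the signature (y_true, y_pred):

from sklearn.metrics import mean_absolute_percentage_error

# wrap the scikit-learn metric so the skill table gets a short column name
def mape(obs, model):
    return mean_absolute_percentage_error(obs, model) * 100

cc.skill(metrics=mape)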