For a more elementary introduction to MLJ, see Getting Started.
Note. Be sure this file has not been separated from the accompanying Project.toml and Manifest.toml files, which should not be altered unless you know what you are doing. Using them, the following code block instantiates a Julia environment with a tested bundle of packages known to work with the rest of the script:
using Pkg
Pkg.activate(@__DIR__)
Pkg.instantiate()
Activating project at `~/GoogleDrive/Julia/MLJ/MLJ/examples/lightning_tour`
Assuming Julia 1.7
In MLJ a model is just a container for hyperparameters, and that's all. Here we will apply several kinds of model composition before binding the resulting "meta-model" to data in a machine for evaluation, using cross-validation.
Loading and instantiating a gradient tree-boosting model:
using MLJ
MLJ.color_off()
Booster = @load EvoTreeRegressor # loads code defining a model type
booster = Booster(max_depth=2) # specify hyperparameter at construction
[ Info: For silent loading, specify `verbosity=0`. import EvoTrees ✔
EvoTreeRegressor(
    loss = EvoTrees.Linear(),
    nrounds = 10,
    λ = 0.0,
    γ = 0.0,
    η = 0.1,
    max_depth = 2,
    min_weight = 1.0,
    rowsample = 1.0,
    colsample = 1.0,
    nbins = 64,
    α = 0.5,
    metric = :mse,
    rng = Random.MersenneTwister(123),
    device = "cpu")
booster.nrounds=50 # or mutate post facto
booster
EvoTreeRegressor(
    loss = EvoTrees.Linear(),
    nrounds = 50,
    λ = 0.0,
    γ = 0.0,
    η = 0.1,
    max_depth = 2,
    min_weight = 1.0,
    rowsample = 1.0,
    colsample = 1.0,
    nbins = 64,
    α = 0.5,
    metric = :mse,
    rng = Random.MersenneTwister(123),
    device = "cpu")
This model is an example of an iterative model. As it stands, the number of iterations nrounds is fixed.
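For instance, in this minimal sketch (not part of the original script; Xtoy, ytoy and mach_toy are illustrative names) the booster trains for exactly booster.nrounds rounds, no more and no less:
Xtoy, ytoy = make_regression(200, 3)    # synthetic data from MLJ's make_regression utility
mach_toy = machine(booster, Xtoy, ytoy)
fit!(mach_toy, verbosity=0)             # trains for exactly booster.nrounds boosting rounds
predict(mach_toy, Xtoy)[1:3]            # point predictions for the first three rows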
Let's create a new model that automatically learns the number of iterations, using the NumberSinceBest(3) criterion, as applied to an out-of-sample l1 loss:
using MLJIteration
iterated_booster = IteratedModel(model=booster,
resampling=Holdout(fraction_train=0.8),
controls=[Step(2), NumberSinceBest(3), NumberLimit(300)],
measure=l1,
retrain=true)
DeterministicIteratedModel(
    model = EvoTreeRegressor(
        loss = EvoTrees.Linear(),
        nrounds = 50,
        λ = 0.0,
        γ = 0.0,
        η = 0.1,
        max_depth = 2,
        min_weight = 1.0,
        rowsample = 1.0,
        colsample = 1.0,
        nbins = 64,
        α = 0.5,
        metric = :mse,
        rng = Random.MersenneTwister(123),
        device = "cpu"),
    controls = Any[Step(2), NumberSinceBest(3), NumberLimit(300)],
    resampling = Holdout(
        fraction_train = 0.8,
        shuffle = false,
        rng = Random._GLOBAL_RNG()),
    measure = LPLoss(p = 1),
    weights = nothing,
    class_weights = nothing,
    operation = MLJModelInterface.predict,
    retrain = true,
    check_measure = true,
    iteration_parameter = nothing,
    cache = true)
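Roughly, the controls mean: train in steps of two boosting rounds (Step(2)); stop once three consecutive steps bring no improvement in the out-of-sample l1 loss (NumberSinceBest(3)); and never exceed 300 steps (NumberLimit(300)). With retrain=true, the final model is retrained on all supplied data for the number of iterations so determined. As a sketch (not in the original script, reusing the toy data from the earlier sketch), one can fit this wrapped model directly and inspect its report:
mach_it = machine(iterated_booster, Xtoy, ytoy)   # illustrative names
fit!(mach_it, verbosity=0)                        # iterates until a stopping control fires
report(mach_it)                                   # expected to record the number of iterations used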
Combining the model with categorical feature encoding:
pipe = ContinuousEncoder |> iterated_booster
DeterministicPipeline(
    continuous_encoder = ContinuousEncoder(
        drop_last = false,
        one_hot_ordered_factors = false),
    deterministic_iterated_model = DeterministicIteratedModel(
        model = EvoTreeRegressor{Float64,…},
        controls = Any[Step(2), NumberSinceBest(3), NumberLimit(300)],
        resampling = Holdout,
        measure = LPLoss(p = 1),
        weights = nothing,
        class_weights = nothing,
        operation = MLJModelInterface.predict,
        retrain = true,
        check_measure = true,
        iteration_parameter = nothing,
        cache = true),
    cache = true)
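The ContinuousEncoder ensures the booster only ever sees Continuous features, one-hot encoding Multiclass columns and converting Count and OrderedFactor columns to Continuous. A minimal sketch of the transformation on a made-up two-column table (names are illustrative):
Xcat = (height = [1.85, 1.67, 1.50],
        gender = coerce(["m", "f", "m"], Multiclass))
mach_enc = machine(ContinuousEncoder(), Xcat)
fit!(mach_enc, verbosity=0)
transform(mach_enc, Xcat)    # all columns now have scitype Continuous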
First, we define a hyperparameter range for optimization of a (nested) hyperparameter:
max_depth_range = range(pipe,
:(deterministic_iterated_model.model.max_depth),
lower = 1,
upper = 10)
NumericRange(1 ≤ deterministic_iterated_model.model.max_depth ≤ 10; origin=5.5, unit=4.5)
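Ranges for other nested hyperparameters can be built the same way, for example a log-scaled range for the learning rate η (a sketch only; eta_range is not used below):
eta_range = range(pipe,
                  :(deterministic_iterated_model.model.η),
                  lower = 0.01,
                  upper = 0.5,
                  scale = :log)
Several ranges can be passed together to the tuning wrapper, as in ranges = [max_depth_range, eta_range].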
Now we can wrap the pipeline model in an optimization strategy to make it "self-tuning":
self_tuning_pipe = TunedModel(model=pipe,
tuning=RandomSearch(),
ranges = max_depth_range,
resampling=CV(nfolds=3, rng=456),
measure=l1,
acceleration=CPUThreads(),
n=50)
DeterministicTunedModel(
    model = DeterministicPipeline(
        continuous_encoder = ContinuousEncoder,
        deterministic_iterated_model = DeterministicIteratedModel{EvoTreeRegressor{Float64,…}},
        cache = true),
    tuning = RandomSearch(
        bounded = Distributions.Uniform,
        positive_unbounded = Distributions.Gamma,
        other = Distributions.Normal,
        rng = Random._GLOBAL_RNG()),
    resampling = CV(
        nfolds = 3,
        shuffle = true,
        rng = Random.MersenneTwister(456)),
    measure = LPLoss(p = 1),
    weights = nothing,
    operation = nothing,
    range = NumericRange(1 ≤ deterministic_iterated_model.model.max_depth ≤ 10; origin=5.5, unit=4.5),
    selection_heuristic = MLJTuning.NaiveSelection(nothing),
    train_best = true,
    repeats = 1,
    n = 50,
    acceleration = CPUThreads{Int64}(5),
    acceleration_resampling = CPU1{Nothing}(nothing),
    check_measure = true,
    cache = true)
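The acceleration=CPUThreads() option parallelizes evaluation of the candidate models across the threads available to Julia; it only helps if Julia was started with more than one thread, which can be checked as below (not part of the original script):
Threads.nthreads()   # number of threads Julia was started with; 1 means no speedup from CPUThreads()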
Loading a selection of features and labels from the Ames House Price dataset:
X, y = @load_reduced_ames;
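A quick way to see why the ContinuousEncoder is needed for this data is to inspect the scientific types of the features (a sketch; output not shown):
schema(X)   # shows Continuous, Count, Multiclass and OrderedFactor columns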
Binding the "self-tuning" pipeline model to data in a machine (which will additionally store learned parameters):
mach = machine(self_tuning_pipe, X, y)
Machine{DeterministicTunedModel{RandomSearch,…},…} trained 0 times; caches data
  model: MLJTuning.DeterministicTunedModel{RandomSearch, MLJBase.DeterministicPipeline{NamedTuple{(:continuous_encoder, :deterministic_iterated_model), Tuple{Unsupervised, Deterministic}}, MLJModelInterface.predict}}
  args:
    1: Source @512 ⏎ `Table{Union{AbstractVector{Continuous}, AbstractVector{Count}, AbstractVector{Multiclass{15}}, AbstractVector{Multiclass{25}}, AbstractVector{OrderedFactor{10}}}}`
    2: Source @129 ⏎ `AbstractVector{Continuous}`
Evaluating the "self-tuning" pipeline model's performance using 5-fold cross-validation (implies multiple layers of nested resampling):
evaluate!(mach,
measures=[l1, l2],
resampling=CV(nfolds=5, rng=123),
acceleration=CPUThreads())
[ Info: Performing evaluations using 5 threads.
Evaluating over 5 folds: 100%[=========================] Time: 0:07:23
PerformanceEvaluation object with these fields:
  measure, measurement, operation, per_fold,
  per_observation, fitted_params_per_fold,
  report_per_fold, train_test_pairs
Extract:
┌───────────────┬─────────────┬───────────┬───────────────────────────────────────────────┐
│ measure       │ measurement │ operation │ per_fold                                      │
├───────────────┼─────────────┼───────────┼───────────────────────────────────────────────┤
│ LPLoss(p = 1) │ 16800.0     │ predict   │ [16500.0, 16300.0, 16300.0, 16600.0, 18600.0] │
│ LPLoss(p = 2) │ 6.65e8      │ predict   │ [6.14e8, 6.3e8, 5.98e8, 6.17e8, 8.68e8]       │
└───────────────┴─────────────┴───────────┴───────────────────────────────────────────────┘
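To actually use the self-tuning pipeline, one can fit the machine on all the data and inspect the outcome of tuning. The following is a sketch only; it assumes the MLJTuning report exposes a best_model entry, which may differ between versions:
fit!(mach)                 # tune on all data, then train the best pipeline found
report(mach).best_model    # assumed report field recording the winning hyperparameters
predict(mach, X)[1:3]      # predictions from the trained self-tuning pipeline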
This notebook was generated using Literate.jl.