In this tutorial we introduce HEBO, while running a simple Ray Tune experiment. Tune's Search Algorithms integrate with HEBO and, as a result, allow you to seamlessly scale up a HEBO optimization process - without sacrificing performance.
Heteroscedastic Evolutionary Bayesian Optimization (HEBO) does not rely on the gradient of the objective function; instead, it learns from samples of the search space. It is suitable for optimizing functions that are nondifferentiable, have many local minima, or are even unknown and only testable by evaluation. This places the algorithm in the domain of "derivative-free optimization" and "black-box optimization".
In this example we minimize a simple objective to briefly demonstrate the usage of HEBO with Ray Tune via HEBOSearch
. It's useful to keep in mind that despite the emphasis on machine learning experiments, Ray Tune optimizes any implicit or explicit objective. Here we assume the HEBO==0.3.2
library is installed. To learn more, please refer to the HEBO website.
# !pip install ray[tune]
!pip install HEBO==0.3.2
Click below to see all the imports we need for this example. You can also launch directly into a Binder instance to run this notebook yourself. Just click on the rocket symbol at the top of the navigation.
import time
import ray
from ray import tune
from ray.tune.suggest.hebo import HEBOSearch
Let's start by defining a simple evaluation function.
We artificially sleep for a bit (0.1 seconds) to simulate a long-running ML experiment. This setup assumes that we're running multiple steps of an experiment while trying to tune three hyperparameters, namely width, height, and activation.
def evaluate(step, width, height, activation):
    time.sleep(0.1)  # artificial delay to simulate a long-running experiment
    activation_boost = 10 if activation == "relu" else 1  # "relu" incurs a higher loss here
    return (0.1 + width * step / 100) ** (-1) + height * 0.1 + activation_boost
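To get a feel for this function, we can evaluate it directly at a few illustrative points (these values are chosen for intuition only and play no role in the tuning run):
print(evaluate(step=0, width=10, height=0, activation="relu"))   # 1 / 0.1 + 0 + 10 = 20.0
print(evaluate(step=99, width=10, height=0, activation="tanh"))  # 1 / (0.1 + 9.9) + 0 + 1 = 1.1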
Next, our objective
function takes a Tune config
, evaluates the score
of your experiment in a training loop,
and uses tune.report
to report the score
back to Tune.
def objective(config):
for step in range(config["steps"]):
score = evaluate(step, config["width"], config["height"], config["activation"])
tune.report(iterations=step, mean_loss=score)
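For reference, Tune invokes objective with a plain configuration dictionary. A hypothetical example of what one sampled config might look like:
# Illustrative shape of a config that Tune passes to `objective`
# (values are made up; real ones are drawn from the search space defined below).
example_config = {"steps": 100, "width": 7.3, "height": -12.4, "activation": "tanh"}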
ray.init(configure_logging=False)
While defining the search algorithm, we may choose to provide an initial set of hyperparameters that we believe are especially promising or informative, and
pass this information as a helpful starting point for the HEBOSearch
object.
We also set the maximum concurrent trials to 8
.
previously_run_params = [
{"width": 10, "height": 0, "activation": "relu"},
{"width": 15, "height": -20, "activation": "tanh"},
]
known_rewards = [-189, -1144]
max_concurrent = 8
algo = HEBOSearch(
metric="mean_loss",
mode="min",
points_to_evaluate=previously_run_params,
evaluated_rewards=known_rewards,
random_state_seed=123,
max_concurrent=max_concurrent,
)
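HEBOSearch also supports the generic Searcher save/restore interface, which can be used to checkpoint the optimizer state between sessions. A minimal sketch (the checkpoint path here is hypothetical):
# Persist the searcher state to disk (hypothetical path).
algo.save("./hebo-checkpoint.pkl")

# In a later session, rebuild the searcher and restore its state.
restored_algo = HEBOSearch(metric="mean_loss", mode="min")
restored_algo.restore("./hebo-checkpoint.pkl")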
The number of samples is the number of hyperparameter combinations that will be tried out. This Tune run is set to 1000 samples (you can decrease this if it takes too long on your machine).
num_samples = 1000
# If 1000 samples take too long, you can reduce this number.
# We override this number here for our smoke tests.
num_samples = 10
Next we define a search space. The critical assumption is that the optimal hyperparameters live within this space. Yet, if the space is very large, then those hyperparameters may be difficult to find in a short amount of time.
search_config = {
"steps": 100,
"width": tune.uniform(0, 20),
"height": tune.uniform(-100, 100),
"activation": tune.choice(["relu, tanh"])
}
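Before launching the run, it can be handy to eyeball a few random draws from these domains; each Tune search space primitive exposes a sample() method. A quick sanity-check sketch (the printed values are random, so yours will differ):
# Draw one random value from each domain to sanity-check the ranges.
print(tune.uniform(0, 20).sample())            # e.g. 7.31
print(tune.uniform(-100, 100).sample())        # e.g. -42.15
print(tune.choice(["relu", "tanh"]).sample())  # e.g. "tanh"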
Finally, we run the experiment to "min"
imize the "mean_loss" of the objective
by searching search_config
via algo
, num_samples
times. The previous sentence fully characterizes the search problem we aim to solve. With this in mind, notice how efficient it is to execute tune.run()
.
analysis = tune.run(
objective,
metric="mean_loss",
mode="min",
name="hebo_exp_with_warmstart",
search_alg=algo,
num_samples=num_samples,
config=search_config
)
Here are the hyperparameters found to minimize the mean loss of the defined objective.
print("Best hyperparameters found were: ", analysis.best_config)
ray.shutdown()