How can we sample some of the variables in a pyll configuration space, while assigning values to the others?
Let's look at a simple example involving two variables, 'a' and 'b'. The 'a' variable controls whether our space returns -1 or a random number, 'b'.
If we just run the optimization normally, we'll find that 'a' should be 0 (the index of the choice that gives the lowest return value).
from hyperopt import hp, fmin, rand

# 'a' selects between the constant -1 and a uniform draw for 'b'.
space = hp.choice('a', [-1, hp.uniform('b', 0, 1)])

# Minimize the identity function; the -1 branch is the clear winner.
best = fmin(fn=lambda x: x, space=space, algo=rand.suggest, max_evals=100)
print(best)
100%|██████████| 100/100 [00:00<00:00, 1413.57trial/s, best loss: -1.0] {'a': 0}
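As a quick aside, we can also draw samples from the space directly with hyperopt.pyll.stochastic and see both behaviors without running any optimization. A small sketch (the printed values will vary from run to run):

import hyperopt.pyll.stochastic

# A few direct draws from the space: some should land on the -1 branch,
# the others on a uniform value for 'b'.
for _ in range(5):
    print(hyperopt.pyll.stochastic.sample(space))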
But what if someone else already set up the space, and we just want to run the search over the other part of the space, the part that corresponds to the uniform draw?
The easiest way to do this is probably to clone the search space, making some substitutions as we go. We can simply build a new search space in which 'a' is no longer a hyperparameter.
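Of course, if you do control the code that builds the space, the simplest fix is to rebuild it without the choice at all. A one-line sketch (space_only_b is just an illustrative name), assuming the uniform branch is the one we want to keep:

# Rebuild by hand: 'a' is resolved away and only 'b' remains a hyperparameter.
space_only_b = hp.uniform('b', 0, 1)

The rest of this walkthrough assumes we can't do that, and instead operates on the pyll graph behind the existing space.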
# print the configuration space
# so that we can see the graph we'll be working on.
print(space)
0 switch
1   hyperopt_param
2     Literal{a}
3     randint
4       Literal{2}
5   Literal{-1}
6   float
7     hyperopt_param
8       Literal{b}
9       uniform
10         Literal{0}
11         Literal{1}
The transformation we want to make on the search space is to replace the randint with a constant value of 1, corresponding to always choosing hyperparameter 'a' to be the second element of the list of choices.
Now, if you don't have access to the code that generated a search space, then you'll have to go digging around for the node you need to replace. There are two approaches you can use to do this: navigation and search.
from hyperopt import pyll
# The "navigation" approach to finding an internal
# search space node:
randint_node_nav = space.pos_args[0].pos_args[1]
print("by navigation:")
print(randint_node_nav)
# The "search" approach to finding an internal
# search space node:
randint_nodes = [node for node in pyll.dfs(space) if node.name == 'randint']
randint_node_srch, = randint_nodes
print("by search:")
print(randint_node_srch)
assert randint_node_nav == randint_node_srch
by navigation:
0 randint
1   Literal{2}
by search:
0 randint
1   Literal{2}
# Clone the graph, substituting the constant 1 for the randint node.
space_with_fixed_a = pyll.clone(space, memo={randint_node_nav: pyll.as_apply(1)})
print(space_with_fixed_a)
0 switch
1   hyperopt_param
2     Literal{a}
3     Literal{1}
4   Literal{-1}
5   float
6     hyperopt_param
7       Literal{b}
8       uniform
9         Literal{0}
10         Literal{1}
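As a quick sanity check of the clone (an extra step, not strictly required), sampling from space_with_fixed_a should now always take the uniform branch:

import hyperopt.pyll.stochastic

# Every draw should be a uniform value for 'b'; the -1 branch is
# unreachable now that the switch index is pinned to the constant 1.
for _ in range(3):
    print(hyperopt.pyll.stochastic.sample(space_with_fixed_a))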
Now, having cloned the space with a new term for the randint, we can search the new space. I wasn't sure this would work, because I haven't really tested hyperopt_param nodes that wrap non-random expressions (here we replaced the randint with a constant), but it works for random search:
best = fmin(fn=lambda x: x, space=space_with_fixed_a, algo=rand.suggest, max_evals=100)
print(best)
100%|██████████| 100/100 [00:00<00:00, 1149.65trial/s, best loss: 0.0020179230078352095] {'a': 1, 'b': 0.0020179230078352095}
TPE is another matter, though. Sure enough, the TPE implementation is broken by a hyperparameter that turns out to be a constant; at implementation time, that was not part of the plan:
from hyperopt import tpe
best = fmin(fn=lambda x: x, space=space_with_fixed_a, algo=tpe.suggest, max_evals=100)
print(best)
0%| | 0/100 [00:00<?, ?trial/s, best loss=?]
---------------------------------------------------------------------------
KeyError                                  Traceback (most recent call last)
<ipython-input-6-8b19efbb73ec> in <module>
      1 from hyperopt import tpe
----> 2 best = fmin(fn=lambda x: x, space=space_with_fixed_a, algo=tpe.suggest, max_evals=100)
      3 print(best)

~/anaconda3/lib/python3.7/site-packages/hyperopt/fmin.py in fmin(fn, space, algo, max_evals, timeout, loss_threshold, trials, rstate, allow_trials_fmin, pass_expr_memo_ctrl, catch_eval_exceptions, verbose, return_argmin, points_to_evaluate, max_queue_len, show_progressbar)
    507
    508     # next line is where the fmin is actually executed
--> 509     rval.exhaust()
    510
    511     if return_argmin:

~/anaconda3/lib/python3.7/site-packages/hyperopt/fmin.py in exhaust(self)
    328     def exhaust(self):
    329         n_done = len(self.trials)
--> 330         self.run(self.max_evals - n_done, block_until_done=self.asynchronous)
    331         self.trials.refresh()
    332         return self

~/anaconda3/lib/python3.7/site-packages/hyperopt/fmin.py in run(self, N, block_until_done)
    264                     # processes orchestration
    265                     new_trials = algo(
--> 266                         new_ids, self.domain, trials, self.rstate.randint(2 ** 31 - 1)
    267                     )
    268                     assert len(new_ids) >= len(new_trials)

~/anaconda3/lib/python3.7/site-packages/hyperopt/tpe.py in suggest(new_ids, domain, trials, seed, prior_weight, n_startup_jobs, n_EI_candidates, gamma, verbose)
    866     # use build_posterior_wrapper to create the pyll nodes
    867     observed, observed_loss, posterior = build_posterior_wrapper(
--> 868         domain, prior_weight, gamma
    869     )
    870     tt = time.time() - t0

~/anaconda3/lib/python3.7/site-packages/hyperopt/tpe.py in build_posterior_wrapper(domain, prior_weight, gamma)
    830         observed_loss["vals"],
    831         pyll.Literal(gamma),
--> 832         pyll.Literal(float(prior_weight)),
    833     )
    834

~/anaconda3/lib/python3.7/site-packages/hyperopt/tpe.py in build_posterior(specs, prior_idxs, prior_vals, obs_idxs, obs_vals, obs_loss_idxs, obs_loss_vals, oloss_gamma, prior_weight)
    708             obs_below, obs_above = obs_memo[node]
    709             aa = [memo[a] for a in node.pos_args]
--> 710             fn = adaptive_parzen_samplers[node.name]
    711             b_args = [obs_below, prior_weight] + aa
    712             named_args = {kw: memo[arg] for (kw, arg) in node.named_args}

KeyError: 'asarray'
The TPE algorithm works if we make a different replacement in the graph. The failure above happens because TPE looks up a posterior sampler for each hyperparameter node by name in adaptive_parzen_samplers (the KeyError in the traceback), and our constant is not one of the recognized stochastic expressions. If we instead replace the entire "hyperopt_param" node corresponding to hyperparameter "a", then it works fine.
# This time, replace the whole hyperopt_param node for 'a' with the constant 1.
space_with_no_a = pyll.clone(space, memo={space.pos_args[0]: pyll.as_apply(1)})
best = fmin(fn=lambda x: x, space=space_with_no_a, algo=tpe.suggest, max_evals=100)
print(best)
100%|██████████| 100/100 [00:00<00:00, 352.69trial/s, best loss: 0.00037832452435874396] {'b': 0.00037832452435874396}
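Finally, the replacement that TPE accepts can be packaged into a small helper. This is a hypothetical convenience function (fix_hyperparameters is my name, not part of hyperopt) built from the same pyll.dfs and pyll.clone calls used above; it assumes each hyperopt_param node carries its label as a Literal in its first positional argument, which matches the printed graphs we saw:

from hyperopt import pyll

def fix_hyperparameters(space, fixed):
    """Clone `space`, replacing each hyperopt_param node whose label
    appears in `fixed` with the corresponding constant value."""
    memo = {}
    for node in pyll.dfs(space):
        if node.name == 'hyperopt_param':
            label = node.pos_args[0].obj  # the Literal holding the label
            if label in fixed:
                memo[node] = pyll.as_apply(fixed[label])
    return pyll.clone(space, memo=memo)

# Equivalent to the space_with_no_a clone above:
space_with_no_a_again = fix_hyperparameters(space, {'a': 1})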