This tutorial includes an overview of the contextual bandits approach to reinforcement learning and describes how to approach a contextual bandit problem using Vowpal Wabbit. You will learn how to use Vowpal Wabbit in a contextual bandit setting with the Python tutorial—including when and how to work with different contextual bandits approaches, how to format data, and understand the results. No prior knowledge of contextual bandits or Vowpal Wabbit is required.
In the contextual bandit problem, a learner repeatedly observes a context, chooses an action, and observes a loss/cost/reward for the chosen action only. Contextual bandit algorithms use additional side information (or context) to aid real world decision-making. They work well for choosing actions in dynamic environments where options change rapidly, and the set of available actions is limited.
To introduce a Vowpal Wabbit approach to the contextual bandit problem and explore the capabilities of this approach to reinforcement learning, this guide uses a hypothetical application called APP.
APP interacts with the context of a user's behavior (search history, visited pages, or geolocation) in a dynamic environment, such as a news website or a cloud controller. APP differs from a multi-armed bandit (MAB) because we have some information available to APP, which is the context.
APP performs the following functions:
- APP observes a context (some combination of features).
- APP chooses an action from a set of available actions.
- APP observes the loss/cost/reward for the chosen action only.
For example:
APP news website:
- Decision to optimize: which articles to display to the user.
- Context: user data (for example, search history, visited pages, or geolocation).
- Actions: the set of available news articles.
- Reward: user engagement (for example, click or no click).
APP cloud controller:
- Decision to optimize: how long to wait before rebooting an unresponsive machine.
- Context: the machine's hardware and software characteristics.
- Actions: the candidate wait times.
- Reward: the negative cost of machine downtime.
You want APP to take actions that provide the highest possible reward. In machine learning parlance, we want a model that tells us which action to take.
We use the term policy many times in this tutorial. In reinforcement learning, a policy is roughly equivalent to a model. In machine learning, a model is a learned function. Policy is more specific than model because it indicates a model that acts in the world: it maps each observed context to an action.
Contexts and actions are typically represented as feature vectors in contextual bandit algorithms. For example, APP chooses actions by applying a policy π that takes a context as input and returns an action. The goal is to find a policy that maximizes the average reward over a sequence of interactions.
There are multiple policy evaluation approaches available to optimize a policy. Vowpal Wabbit offers four approaches, specified with --cb_type:
- --cb_type ips (inverse propensity score)
- --cb_type dr (doubly robust)
- --cb_type dm (direct method)
- --cb_type mtr (multi-task regression)
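For intuition on what these estimators do, the simplest of them, ips, can be sketched in a few lines: each logged cost is reweighted by the inverse of the probability with which the logging policy chose that action. This is an illustrative sketch only; the function and data names are made up and are not part of Vowpal Wabbit's API.

```python
def ips_estimate(logged, target_policy):
    """Estimate the average cost a target policy would incur,
    using data logged by a different (randomized) policy.

    logged: list of (context, action, cost, probability) tuples
    target_policy: function mapping a context to an action
    """
    total = 0.0
    for context, action, cost, prob in logged:
        # Only events where the target policy agrees with the logged
        # action contribute, reweighted by 1/probability to correct
        # for how often the logging policy chose that action.
        if target_policy(context) == action:
            total += cost / prob
    return total / len(logged)

# Hypothetical logged data: (context, chosen action, cost, probability)
logged = [
    ("x1", 1, 2.0, 0.5),
    ("x2", 2, 0.0, 0.25),
    ("x3", 1, 1.0, 0.5),
]

# A candidate policy that always picks action 1.
always_one = lambda context: 1
print(ips_estimate(logged, always_one))  # (2.0/0.5 + 1.0/0.5) / 3 = 2.0
```

The dm approach instead regresses cost on (context, action) directly, and dr combines the two estimators.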
Note: The focal point of contextual bandit learning research is efficient exploration algorithms. For more details, see the Contextual Bandit bake-off paper.
Vowpal Wabbit offers five exploration algorithms:
- --first (tau-first)
- --epsilon (epsilon-greedy)
- --bag (bagging / online bootstrap)
- --cover (online cover)
- --softmax (only supported for --cb_explore_adf)

Note: For more details on contextual bandit algorithms and Vowpal Wabbit, please refer to the Vowpal Wabbit GitHub wiki.
There are four main components to a contextual bandit problem:
- Context (x): the additional information that helps in choosing an action.
- Action (a): the action chosen from a set of possible actions.
- Probability (p): the probability of choosing action a from the set of possible actions.
- Cost/Reward (r): the reward received for the chosen action.
Vowpal Wabbit provides three contextual bandit algorithms:
1. --cb
The contextual bandit module, which allows you to optimize a predictor based on already collected data, i.e., contextual bandits without exploration.
2. --cb_explore
The contextual bandit learning algorithm for when the maximum number of actions is known ahead of time and the semantics of actions stay the same across examples.
3. --cb_explore_adf
The contextual bandit learning algorithm for when the set of actions changes over time or when you have rich information for each action.
--cb
--cb <number_of_actions>
The --cb 4
command specifies that we want to use the contextual bandit module and that our data has a total of four actions.
Each example is represented as a separate line in your data file and must use the following format:
action:cost:probability | features
Sample data file train.dat with five examples:
1:2:0.4 | a c
3:0.5:0.2 | b d
4:1.2:0.5 | a b c
2:1:0.3 | b c
3:1.5:0.7 | a d
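To make the label layout concrete, here is a small sketch that splits one line of this format into its parts (the helper function is purely illustrative; Vowpal Wabbit does its own parsing):

```python
def parse_cb_line(line):
    """Split 'action:cost:probability | features' into its parts."""
    label, features = line.split("|")
    action, cost, probability = label.strip().split(":")
    return int(action), float(cost), float(probability), features.split()

# The first sample line: action 1 was taken, it incurred cost 2,
# and the logging policy chose it with probability 0.4.
print(parse_cb_line("1:2:0.4 | a c"))  # (1, 2.0, 0.4, ['a', 'c'])
```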
Use the command
vw -d train.dat --cb 4
Note: This usage is for the Vowpal Wabbit command line. See below for a Python tutorial.
--cb_explore
--cb_explore <number_of_actions>
The command --cb_explore 4
specifies that our examples explore a total of four actions.
Note: This format explores the action space so you must specify which algorithm you want to use for exploration.
The following examples use the input format for the --cb
command example above:
vw -d train.dat --cb_explore 4 --first 2
In this case, on the first two examples, you take each of the four actions with probability 1/4.
vw -d train.dat --cb_explore 4 --epsilon 0.2
In this case, the prediction of the current learned policy is used with probability 1 - epsilon (80% of the time), and with the remaining probability epsilon (20% of the time), an action is chosen uniformly at random.
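The rule above can be written down as a probability distribution over actions. A minimal sketch, assuming 1-indexed actions (the sampling code is illustrative; Vowpal Wabbit implements this internally):

```python
import random

def epsilon_greedy_pmf(greedy_action, n_actions, epsilon):
    """Probability assigned to each (1-indexed) action by epsilon-greedy:
    every action gets epsilon / n_actions from the uniform draw, and the
    greedy action additionally gets the remaining 1 - epsilon mass."""
    pmf = [epsilon / n_actions] * n_actions
    pmf[greedy_action - 1] += 1.0 - epsilon
    return pmf

def sample_action(pmf, rng=random):
    """Draw a 1-indexed action from the probability distribution."""
    return rng.choices(range(1, len(pmf) + 1), weights=pmf)[0]

# With epsilon=0.2 and 4 actions, the greedy action gets ~0.85
# probability and each of the other actions gets ~0.05.
print(epsilon_greedy_pmf(3, 4, 0.2))
```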
vw -d train.dat --cb_explore 4 --bag 5
In this case, five different policies are trained on bootstrap samples of the data, and on each example an action is chosen using the prediction of one of these policies picked at random.
vw -d train.dat --cb_explore 4 --cover 3
This algorithm is a theoretically optimal exploration algorithm. Similar to the bagging example above, several different policies are trained in this case. Unlike bagging, the training of these policies is explicitly optimized to result in a diverse set of predictions: choosing all the actions which are not already learned to be bad in a given context.
For more information and research on this theoretically optimal exploration algorithm see this paper.
--cb_explore_adf
--cb_explore_adf
The command --cb_explore_adf
is different from the other two example cases because the action set changes over time (or we have rich information for each action):
It is best to create features for every (context, action) pair rather than features associated only with the context and shared across all actions.
Note: This format explores the action space so you must specify which algorithm you want to use for exploration.
You can specify contextual features shared by all actions at the beginning of a multiline example, on a line with the shared label, as in the second example below.
Since the shared line is not associated with any action, it should never contain label information.
Sample data file train.dat with two examples (multiline examples are separated by an empty line):

| a:1 b:0.5
0:0.1:0.75 | a:0.5 b:1 c:2

shared | s_1 s_2
0:1.0:0.5 | a:1 b:1 c:1
| a:0.5 b:2 c:1
In the first example, we have two actions, one line for each. The first line represents the first action, and it has two action dependent features a and b.
| a:1 b:0.5
The second line represents the second action, and it has three action dependent features a, b, and c.
0:0.1:0.75 | a:0.5 b:1 c:2
If the second action is the chosen action, its line follows the format:
action:cost:probability | features
0:0.1:0.75 | a:0.5 b:1 c:2
The leading action index (0 here) is ignored in the ADF format, since the line itself identifies the action; the action has cost 0.1 and a probability of 0.75.
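The multiline layout above can also be assembled programmatically. A minimal sketch, assuming 0-indexed action positions (the helper name is made up for illustration; it is not part of Vowpal Wabbit):

```python
def build_adf_example(shared_features, actions, chosen=None, cost=None, prob=None):
    """Build a multiline --cb_explore_adf example as a list of lines.

    shared_features: features common to all actions (may be empty)
    actions: list of per-action feature strings, one action per line
    chosen/cost/prob: position of the logged action and its cost/probability
                      (omit all three for a prediction-only example)
    """
    lines = []
    if shared_features:
        # The shared line carries context features and never a label.
        lines.append("shared | " + shared_features)
    for i, feats in enumerate(actions):
        label = ""
        if chosen is not None and i == chosen:
            # The leading action index is ignored in ADF, so 0 is conventional.
            label = "0:{}:{} ".format(cost, prob)
        lines.append(label + "| " + feats)
    return lines

# Reproduces the second sample example above.
for line in build_adf_example("s_1 s_2", ["a:1 b:1 c:1", "a:0.5 b:2 c:1"],
                              chosen=0, cost=1.0, prob=0.5):
    print(line)
```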
The softmax explorer uses the policy not only to predict an action but also to predict a score indicating the quality of each action. It then chooses action a with probability proportional to exp(lambda * score(x, a)).
vw -d train.dat --cb_explore_adf
vw -d train.dat --cb_explore_adf --first 2
vw -d train.dat --cb_explore_adf --epsilon 0.1
vw -d train.dat --cb_explore_adf --bag 5
vw -d train.dat --cb_explore_adf --softmax --lambda 10
Here lambda is a parameter, which leads to uniform exploration for lambda = 0, and stops exploring as lambda approaches infinity. In general, this provides an excellent knob for controlled exploration based on the uncertainty in the learned policy.
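The effect of lambda is easy to see by computing the distribution directly. A small sketch, where the scores stand in for the learned policy's per-action scores and higher is assumed to mean better:

```python
import math

def softmax_pmf(scores, lam):
    """Probability of each action, proportional to exp(lambda * score)."""
    weights = [math.exp(lam * s) for s in scores]
    total = sum(weights)
    return [w / total for w in weights]

scores = [0.1, 0.5, 0.2]
print(softmax_pmf(scores, 0))   # lambda = 0: uniform exploration
print(softmax_pmf(scores, 50))  # large lambda: mass concentrates on the best score
```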
Begin by loading the required Python packages:
import pandas as pd
Now, generate some sample training data that could originate from a previous randomized trial (for example, an A/B test) for the contextual bandit to explore:
train_data = [
{
"action": 1,
"cost": 2,
"probability": 0.4,
"feature1": "a",
"feature2": "c",
"feature3": "",
},
{
"action": 3,
"cost": 0,
"probability": 0.2,
"feature1": "b",
"feature2": "d",
"feature3": "",
},
{
"action": 4,
"cost": 1,
"probability": 0.5,
"feature1": "a",
"feature2": "b",
"feature3": "",
},
{
"action": 2,
"cost": 1,
"probability": 0.3,
"feature1": "a",
"feature2": "b",
"feature3": "c",
},
{
"action": 3,
"cost": 1,
"probability": 0.7,
"feature1": "a",
"feature2": "d",
"feature3": "",
},
]
train_df = pd.DataFrame(train_data)
# Add index to data frame
train_df["index"] = range(1, len(train_df) + 1)
train_df = train_df.set_index("index")
Note: The data here is equivalent to the VW wiki example.
Next, create data for the contextual bandit to exploit to make decisions (for example features describing new users):
test_data = [
{"feature1": "b", "feature2": "c", "feature3": ""},
{"feature1": "a", "feature2": "", "feature3": "b"},
{"feature1": "b", "feature2": "b", "feature3": ""},
{"feature1": "a", "feature2": "", "feature3": "b"},
]
test_df = pd.DataFrame(test_data)
# Add index to data frame
test_df["index"] = range(1, len(test_df) + 1)
test_df = test_df.set_index("index")
# Your dataframes are:
train_df.head()
test_df.head()
First, create the Python model and store the model parameters in the Python vw object.
Use the following command for a contextual bandit with four possible actions:
import vowpalwabbit
vw = vowpalwabbit.Workspace("--cb 4", quiet=True)
Note: Pass quiet=True (the equivalent of the --quiet command-line flag) to turn off diagnostic information in Vowpal Wabbit.
Now, call learn for each training example to update your Vowpal Wabbit model.
for i in train_df.index:
action = train_df.loc[i, "action"]
cost = train_df.loc[i, "cost"]
probability = train_df.loc[i, "probability"]
feature1 = train_df.loc[i, "feature1"]
feature2 = train_df.loc[i, "feature2"]
feature3 = train_df.loc[i, "feature3"]
# Construct the example in the required vw format.
learn_example = (
str(action)
+ ":"
+ str(cost)
+ ":"
+ str(probability)
+ " | "
+ str(feature1)
+ " "
+ str(feature2)
+ " "
+ str(feature3)
)
# Here we do the actual learning.
vw.learn(learn_example)
Use the model that was just trained on the train set to perform predictions on the test set. Construct the example like before but don't include the label and pass it into predict instead of learn. For example:
for j in test_df.index:
feature1 = test_df.loc[j, "feature1"]
feature2 = test_df.loc[j, "feature2"]
feature3 = test_df.loc[j, "feature3"]
test_example = "| " + str(feature1) + " " + str(feature2) + " " + str(feature3)
choice = vw.predict(test_example)
print(j, choice)
Note: The contextual bandit assigns every instance the third action, as it should given the cost structure of the training data. You can save the model you train to a file and load it again later.
Finally, experiment with the cost structure to see that the contextual bandit updates its predictions accordingly.
vw.save("cb.model")
del vw
vw = vowpalwabbit.Workspace("--cb 4 -i cb.model", quiet=True)
print(vw.predict("| a b"))
The -i
argument means input regressor, telling Vowpal Wabbit to load a model from that file instead of starting from scratch.