Supervised Learning Tutorial¶
Note
This tutorial covers how to use Rasa Core directly from Python. We will dive a bit deeper into the different concepts and overall structure of the library. You should already be familiar with the terms domain and stories, and have some knowledge of NLU (if not, head over to Building a Simple Bot first). Here, we’ll be using the Example Code on GitHub.
Goal¶
In this example we will create a restaurant search bot by training a
neural network on example conversations. A user can contact the bot with
something close to "I want a mexican restaurant!"
and the bot will ask for more details until it is ready to suggest a restaurant.
First Steps¶
Let’s start by heading over to the directory for our restaurant bot. All example code snippets assume you are running the code from within that project directory:
cd examples/restaurantbot
1. The Domain¶
Let’s inspect the domain definition in restaurant_domain.yml:
Compared to the previous example, there are two new sections: slots and entities. slots are used to store user preferences, like the cuisine and price range of a restaurant. entities are closely related to slots: entities are the raw information picked up from user messages, while slots are updated over time. Slots can also be used to store information about the outside world, like the results of API calls, or a user profile read from a database. Here we have a slot called matches which stores the matching restaurants returned by an API.
```yaml
slots:
  cuisine:
    type: text
  people:
    type: text
  location:
    type: text
  price:
    type: text
  info:
    type: text
  matches:
    type: unfeaturized

entities:
  - location
  - info
  - people
  - price
  - cuisine

intents:
  - greet
  - affirm
  - deny
  - inform
  - thankyou
  - request_info

templates:
  utter_greet:
    - "hey there!"
  utter_goodbye:
    - "goodbye :("
    - "Bye-bye"
  utter_default:
    - "default message"
  utter_ack_dosearch:
    - "ok let me see what I can find"
  utter_ack_findalternatives:
    - "ok let me see what else there is"
  utter_ack_makereservation:
    - text: "ok making a reservation"
      buttons:
        - title: "thank you"
          payload: "thank you"
  utter_ask_cuisine:
    - "what kind of cuisine would you like?"
  utter_ask_howcanhelp:
    - "how can I help you?"
  utter_ask_location:
    - "where?"
  utter_ask_moreupdates:
    - "if you'd like to modify anything else, please tell me what"
  utter_ask_numpeople:
    - "for how many people?"
  utter_ask_price:
    - text: "in which price range?"
      buttons:
        - title: "cheap"
          payload: "cheap"
        - title: "expensive"
          payload: "expensive"
  utter_on_it:
    - "I'm on it"

actions:
  - utter_greet
  - utter_goodbye
  - utter_default
  - utter_ack_dosearch
  - utter_ack_findalternatives
  - utter_ack_makereservation
  - utter_ask_cuisine
  - utter_ask_howcanhelp
  - utter_ask_location
  - utter_ask_moreupdates
  - utter_ask_numpeople
  - utter_ask_price
  - utter_on_it
  - bot.ActionSearchRestaurants
  - bot.ActionSuggest
```
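If you want to inspect the domain from Python, you can load it directly. This is a sketch: it assumes TemplateDomain.load is the YAML domain loader in this version of Rasa Core, which is worth verifying against your install.

```python
from rasa_core.domain import TemplateDomain

domain = TemplateDomain.load("restaurant_domain.yml")
# inspect what the bot knows about, e.g. the slots defined above
print([slot.name for slot in domain.slots])
```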
Custom Actions¶
In this example we also have custom actions: bot.ActionSearchRestaurants and bot.ActionSuggest, where the bot. prefix refers to the module where these actions are defined.
An action can do much more than just send a message.
Here’s a small example of a custom action which calls an API.
Notice that the run method can use the values of the slots, which are stored in the tracker.
```python
class ActionSearchRestaurants(Action):
    def name(self):
        return 'action_search_restaurants'

    def run(self, dispatcher, tracker, domain):
        dispatcher.utter_message("looking for restaurants")
        restaurant_api = RestaurantAPI()
        restaurants = restaurant_api.search(tracker.get_slot("cuisine"))
        return [SlotSet("matches", restaurants)]
```
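The RestaurantAPI used in this snippet is part of the example project. If you want to run the action without it, a minimal stand-in could look like this (a sketch, not the project’s actual implementation):

```python
class RestaurantAPI(object):
    """A hypothetical stand-in for the example project's restaurant API."""

    def search(self, cuisine):
        # a real implementation would query an external service;
        # this stub just returns a canned result
        return "papi's pizza place"
```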
But a domain alone doesn’t make a bot; we need training data to tell the bot which actions it should execute at which point in the conversation: the stories!
2. The Training Data¶
Take a look at data/babi_stories.md, where the training conversations for the restaurant bot are defined. One example story looks as follows:
```
## story_00914561
* greet
    - utter_ask_howcanhelp
* inform{"cuisine": "italian"}
    - utter_on_it
    - utter_ask_location
* inform{"location": "paris"}
    - utter_ask_numpeople
* inform{"people": "six"}
    - utter_ask_price
...
```
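To make the connection to the custom actions concrete, here is a hypothetical story (not taken from the dataset) sketching how a completed search might look. The action names action_search_restaurants and action_suggest are assumptions based on the name() methods of the custom action classes:

```
## hypothetical_search_story
* greet
    - utter_ask_howcanhelp
* inform{"cuisine": "mexican", "location": "paris"}
    - utter_on_it
    - utter_ask_numpeople
* inform{"people": "two"}
    - utter_ack_dosearch
    - action_search_restaurants
    - action_suggest
```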
See Training Data below for more information about this training data.
3. Training your bot¶
We can go directly from data to bot with only a few steps:
- Train a Rasa NLU model to extract intents and entities. Read more about that in the NLU docs.
- Train a dialogue policy which will learn to choose the correct actions.
- Set up an agent which combines the NLU model and the dialogue policy to go directly from user input to action.
We will go through these steps one by one.
NLU model¶
To train our Rasa NLU model, we need a configuration file, which you can find in nlu_model_config.yml:
```yaml
pipeline:
- name: "nlp_spacy"
- name: "tokenizer_spacy"
- name: "intent_featurizer_spacy"
- name: "intent_classifier_sklearn"
- name: "ner_crf"
- name: "ner_synonyms"
```
We also need training data, franken_data.json (see NLU Dataformat for details about the training data format).
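To give a feel for that format, a single made-up example would look roughly like this. This is an illustration of the rasa_nlu JSON structure, not an excerpt from franken_data.json:

```json
{
  "rasa_nlu_data": {
    "common_examples": [
      {
        "text": "I want a mexican restaurant",
        "intent": "inform",
        "entities": [
          {"start": 9, "end": 16, "value": "mexican", "entity": "cuisine"}
        ]
      }
    ]
  }
}
```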
We can train the NLU model using

```
python -m rasa_nlu.train -c nlu_model_config.yml --fixed_model_name current \
    --data ./data/franken_data.json --path ./models/nlu
```
or using python code
```python
def train_nlu():
    from rasa_nlu.training_data import load_data
    from rasa_nlu import config
    from rasa_nlu.model import Trainer

    training_data = load_data('data/franken_data.json')
    trainer = Trainer(config.load("nlu_model_config.yml"))
    trainer.train(training_data)
    model_directory = trainer.persist('models/nlu/',
                                      fixed_model_name="current")
    return model_directory
```
and calling
python bot.py train-nlu
Training NLU takes approximately 18 seconds on a 2014 MacBook Pro.
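Once trained, you can sanity-check the NLU model on its own before wiring it into the bot. Here is a quick sketch using the rasa_nlu Interpreter; the model path matches the fixed_model_name used above:

```python
from rasa_nlu.model import Interpreter

interpreter = Interpreter.load("models/nlu/default/current")
# should report the `inform` intent and a `cuisine` entity
print(interpreter.parse(u"I want a mexican restaurant"))
```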
A Custom Dialogue Policy¶
Now our bot needs to learn what to do in response to user messages. We do this by training one or multiple Rasa Core policies.
For this bot, we came up with our own policy, which extends the Keras policy by modifying the ML architecture of the underlying neural network. Check out the RestaurantPolicy class in policy.py for the gory details:
```python
class RestaurantPolicy(KerasPolicy):
    def model_architecture(self, input_shape, output_shape):
        """Build a Keras model and return a compiled model."""
        from keras.layers import \
            Masking, LSTM, Dense, TimeDistributed, Activation
        from keras.models import Sequential

        # Build Model
        model = Sequential()

        # the shape of the y vector of the labels
        # determines which output from the rnn will be used
        # to calculate the loss
        if len(output_shape) == 1:
            # y is (num examples, num features) so
            # only the last output from the rnn is used to
            # calculate the loss
            model.add(Masking(mask_value=-1, input_shape=input_shape))
            model.add(LSTM(self.rnn_size))
            model.add(Dense(input_dim=self.rnn_size,
                            units=output_shape[-1]))
        elif len(output_shape) == 2:
            # y is (num examples, max_dialogue_len, num features) so
            # all the outputs from the rnn are used to
            # calculate the loss, therefore a sequence is returned and
            # a time distributed layer is used

            # the first value in input_shape is max_dialogue_len,
            # it is set to None to allow dynamic_rnn creation
            # during prediction
            model.add(Masking(mask_value=-1,
                              input_shape=(None, input_shape[1])))
            model.add(LSTM(self.rnn_size, return_sequences=True))
            model.add(TimeDistributed(Dense(units=output_shape[-1])))
        else:
            raise ValueError("Cannot construct the model because "
                             "length of output_shape = {} "
                             "should be 1 or 2."
                             "".format(len(output_shape)))

        model.add(Activation('softmax'))

        model.compile(loss='categorical_crossentropy',
                      optimizer='adam',
                      metrics=['accuracy'])

        logger.debug(model.summary())
        return model
```
The parameters max_history_len and n_hidden may be altered depending on the complexity of the task and the amount of data available. max_history_len is important, as it is the number of previous story steps the network has access to when making a classification.
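If you want to see what the network looks like before committing to a full training run, you can build the model standalone. This is a sketch: the feature and action counts are made up, and calling RestaurantPolicy() with no arguments assumes KerasPolicy supplies a default rnn_size, as the training code below suggests:

```python
from policy import RestaurantPolicy

policy = RestaurantPolicy()
# (max_history_len, num_features) and (num_actions,) are illustrative
model = policy.model_architecture(input_shape=(3, 30),
                                  output_shape=(15,))
model.summary()
```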
Because we’ve created a custom policy, we can’t train the bot by running rasa_core.train as in Building a Simple Bot. The bot.py script shows how you can train a bot that uses a custom policy and custom actions.
Note
Remember, you do not need to create your own policy. The default policy setup, using a memoization policy and a Keras policy, works quite well. Nevertheless, you can always fine-tune them for your use case. Read Plumbing - How it all fits together for more info.
Now let’s train it:
```python
def train_dialogue(domain_file="restaurant_domain.yml",
                   model_path="models/dialogue",
                   training_data_file="data/babi_stories.md"):
    agent = Agent(domain_file,
                  policies=[MemoizationPolicy(max_history=3),
                            RestaurantPolicy()])

    training_data = agent.load_data(training_data_file)
    agent.train(
        training_data,
        epochs=400,
        batch_size=100,
        validation_split=0.2
    )

    agent.persist(model_path)
    return agent
```
This code creates the policies to be trained and uses the story training data to train and persist (store) a model. The goal of the trained policy is to predict the next action, given the current state of the bot.
To train the dialogue policy from the command line, run
python bot.py train-dialogue
Training the dialogue model takes roughly 12 minutes on a 2014 MacBook Pro.
4. Using the bot¶
Now we’re going to glue some pieces together to create an actual bot.
We instantiate an Agent, which owns our trained Policy, a Domain from models/dialogue, and our NLU Interpreter from models/nlu/default/current.
For this demonstration, we will send messages directly to the bot from a Python console. You can learn how to build a command line bot and a Facebook bot by checking out Connecting to messaging & voice platforms.
```python
from rasa_core.interpreter import RasaNLUInterpreter
from rasa_core.agent import Agent

agent = Agent.load("models/dialogue",
                   interpreter=RasaNLUInterpreter("models/nlu/default/current"))
```
We can then try sending it a message:
```
>>> agent.handle_message("/greet")
[u'hey there!']
```
And there we have it! A minimal bot containing all the important pieces of Rasa Core.
Note
Here, we’ve skipped the NLU interpretation of our message by directly providing the underlying intent greet (see Fixed intent & entity input).
If you want to handle input from the command line (or a different input channel), you need to handle that channel instead of handling messages directly, e.g.:
```python
from rasa_core.channels.console import ConsoleInputChannel

agent.handle_channel(ConsoleInputChannel())
```
In this case, messages will be retrieved from the command line because we specified the ConsoleInputChannel. Responses are printed to the command line as well.
You can find a complete example of how to load an agent and chat with it on the command line in the run method of the restaurant bot’s bot.py.
To run the bot from the command line, call
python bot.py run
If the bot appears to be stuck or answers incorrectly, do not worry. The provided dataset is not diverse enough to handle all possible inputs, as can be seen from the visualization of the training data below. You can use Interactive Learning to augment the training data with your own stories.
The Details¶
Training Data¶
The training conversations come from the bAbI dialog task. However, the messages in these dialogues are machine-generated, so we augment this dataset with real user messages from the DSTC dataset. Luckily for us, this dataset is also in the restaurant domain.
Note
The bAbI dataset is machine-generated, and there are a LOT of dialogues in there. There are 1000 stories in the training set, but you don’t need that many to build a useful bot. How much data you need depends on the number of actions you define, and the number of edge cases you want to support. But a few dozen stories is a good place to start.
We have converted the bAbI dialogue training set into the Rasa stories format, and you can download the stories training data from GitHub. That file is stored in data/babi_stories.md.
See Stories - The Training Data for more information about the Rasa Core data format. We can also visualize the training data to generate a graph similar to a flow chart:
The graph shows all of the actions executed in the training data, and the user messages (if any) that occurred between them. As you can see, flow charts get complicated quite quickly. Nevertheless, they can be a helpful tool for debugging a bot: for example, it is clear that there is not enough data for handling reservations. To learn how to build this chart, go to Visualization of Stories.
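For a quick start, something along these lines generates the graph. This is a sketch; see Visualization of Stories for the authoritative instructions, and treat the Agent.visualize signature as an assumption:

```python
from rasa_core.agent import Agent

agent = Agent("restaurant_domain.yml")
agent.visualize("data/babi_stories.md",
                output_file="graph.png", max_history=2)
```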