Deploy an Example App¶
Note
This tutorial will show you the pieces required to deploy an example bot to the Rasa Platform.
Goal¶
We will deploy one of the example bots from the Rasa Core repo to your running instance of the
platform.
The easiest way to follow along is to carry out these steps in an ssh session on
the server where you deployed the platform. In the final section we discuss how to build and
deploy to a remote server.
There are two main steps to deploying a bot to the Rasa Platform:
- Creating a docker container where all your actions will be executed
- Making models available to the platform’s Rasa Core and Rasa NLU containers.
Creating an Application and a Docker Container¶
If you haven’t done so already, ssh into your server.
Clone the demo app:
git clone https://github.com/RasaHQ/platform-demo.git
cd platform-demo/moodbot
Have a look at the file main.py.
This example app uses the RasaChatInput. You can find more information
on that here: Rasa Chat.
Running your bot on the platform makes use of the RemoteAgent class.
Note
RasaChatInput is part of Rasa Core as of version 0.10.2. If you
run an older version of Core, you can install RasaChatInput from the
rasa_extensions package. For this you need an account on https://pypi.rasa.ai.
More information can be found in Python package installation.
Building Your Docker Image¶
To build a Docker image which will execute the bot’s actions, run:
docker build -t $IMAGE_TAG .
This will use the Dockerfile in the moodbot directory.
For now, you can use something like IMAGE_TAG=demobot:v1.
If you are deploying to a remote server, you will want to push the image to a registry first.
An image tag like IMAGE_TAG=username/demobot:v1 will allow you to push this tag to a
Docker registry like GCR or Docker Hub.
You can read more about Docker tags here: https://docs.docker.com/engine/reference/commandline/tag/
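As a sketch, assuming your Docker Hub username is username and you already built the local image demobot:v1, a registry-qualified tag can be applied either at build time or by retagging:

```shell
# Build directly with a registry-qualified tag...
docker build -t username/demobot:v1 .

# ...or retag an image you built earlier under a local name
docker tag demobot:v1 username/demobot:v1
```

Both forms produce the same result: an image name that a registry such as Docker Hub will accept on push.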
Start the application¶
Once your Docker image is built, you can start the container using a command like the one below.
This will copy your trained dialogue model to the directory where Rasa Core can find it,
and start up your container so that it can talk to the other containers of the platform.
You will have to set the $RASA_CORE_TOKEN environment variable, which you can find
in the settings at http://rasa.example.com/settings.
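The exact command depends on your platform setup; the following is a rough sketch only. The model destination path, the container name demobot, and the Docker network name rasa-platform are illustrative assumptions, not fixed names from the platform:

```shell
# Copy the trained dialogue model to where Rasa Core reads models from
# (destination path is an assumption for illustration)
cp -r models/dialogue /path/to/platform/models/

# Export the token copied from the platform settings page
export RASA_CORE_TOKEN=<token-from-settings-page>

# Start the action container so it can reach the other platform containers
# (--network name is an assumption; use the network your platform containers share)
docker run -d --name demobot --network rasa-platform \
    -e RASA_CORE_TOKEN=$RASA_CORE_TOKEN \
    demobot:v1
```

Passing the token via -e makes it available to the container as an environment variable without baking it into the image.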
Add Some NLU Data¶
Unlike the Rasa Core model, the Rasa NLU model is trained directly on the server.
The training data in the platform should therefore be considered the master copy.
Navigate to the NLU tab in the platform at http://rasa.example.com/inbox, click on Trained Examples,
and click Upload and Replace. Upload the training data at data/training_data.json
and hit the train button.
Try out your bot!¶
Go to http://rasa.example.com/chat and try talking to your bot in the chat interface.
For more details read Rasa Chat.
Deploying to a remote server¶
For a CI/CD setup you probably don’t want to build your Docker image on the server that’s running your bot.
In the remote directory there are two scripts, deploy_model.sh and deploy_app.sh, that show how you
can deploy the application to a remote server. However, you will need to set up a Docker registry to hold your images.
The easiest option is to create an account at https://hub.docker.com and push your images there.
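Assuming a registry-qualified tag such as username/demobot:v1, the round trip through Docker Hub might look like this (username is a placeholder for your own account):

```shell
# On the build machine: authenticate with Docker Hub, then push the image
docker login
docker push username/demobot:v1

# On the remote server: pull the image before starting the container
docker pull username/demobot:v1
```

The deploy scripts in the remote directory automate variations of these steps.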