Deploying custom Rasa Core or Rasa NLU code
The Rasa Stack (Core and NLU) is open source, and many Rasa Platform users have developed custom NLU components and Rasa Core policies. This page explains how to let your custom Rasa Core and NLU servers communicate with Rasa Platform. We will show you how to run them as Docker containers, but this is optional; you can skip Section 2 if you don’t want to use Docker.
Steps:
- creating your custom Rasa NLU and Core servers
- running these servers as Docker containers
- modifying the docker-compose.yml
- deploying the changes
1. Building custom Rasa Stack servers
Both the Rasa NLU and the Rasa Core server make use of a number of environment variables, and it is convenient to collect them all in a single file. This file, called config.py, is available for download here.
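The downloaded file should be used as-is; the sketch below only illustrates the idea of collecting such settings from the environment. Every environment variable and attribute name in it (SELF_PORT, RASA_PLATFORM_TOKEN, RASA_NLU_PROJECT_DIR, RASA_CORE_MODEL_DIR, RASA_CORE_CREDENTIALS) is a hypothetical example, not necessarily what the Platform actually sets:
# config.py -- illustrative sketch only; use the downloadable file in practice.
# All environment variable names below are hypothetical examples.
import os

# Port this server listens on inside its container
self_port = int(os.environ.get("SELF_PORT", "5000"))

# Token used to authenticate requests between Rasa Platform services
platform_token = os.environ.get("RASA_PLATFORM_TOKEN", "")

# Rasa NLU: directory where trained projects/models are stored
project_dir = os.environ.get("RASA_NLU_PROJECT_DIR", "/app/projects")

# Rasa Core: trained dialogue model and channel credentials
core_model_dir = os.environ.get("RASA_CORE_MODEL_DIR", "/app/model")
core_credentials_file = os.environ.get("RASA_CORE_CREDENTIALS", "/app/credentials.yml")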
1.1 Custom Rasa NLU
Your custom version of Rasa NLU should launch an instance of the built-in server class RasaNLU. We recommend that you define it in a script called nlu_server.py. This script should be located in your custom Rasa NLU branch or directory, and it can make use of any custom code you need. Here’s an example of what your script might look like:
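A minimal sketch is shown below. It assumes a rasa_nlu release in which DataRouter takes the project directory as its first argument and RasaNLU exposes its HTTP API as a Klein app; the config attributes used (project_dir, platform_token, self_port) are the hypothetical names from the config.py sketch above, so adapt everything to your actual config.py and rasa_nlu version.
# nlu_server.py -- minimal sketch; constructor arguments vary between rasa_nlu versions
import logging

from rasa_nlu.data_router import DataRouter
from rasa_nlu.server import RasaNLU

import config  # the config.py described above (attribute names are assumptions)

logger = logging.getLogger(__name__)

if __name__ == '__main__':
    logging.basicConfig(level=logging.INFO)

    # The DataRouter manages the NLU projects and models below the given directory
    router = DataRouter(config.project_dir)

    # RasaNLU wraps the HTTP API around the router; because this runs from your
    # custom branch, any custom components you added are available here
    nlu = RasaNLU(router, token=config.platform_token)

    logger.info("Starting Rasa NLU server on port %s", config.self_port)
    nlu.app.run("0.0.0.0", config.self_port)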
Make sure that the config.py file you created earlier is in the same directory as nlu_server.py.
1.2 Custom Rasa Core
The steps for your custom Rasa Core instance are the same as we saw above for NLU. Create a script called core_server.py that runs the serve_application() method defined in rasa_core.run:
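Again, a minimal sketch is shown below. The exact keyword arguments of serve_application() differ between rasa_core releases, and the config attributes (core_model_dir, core_credentials_file, self_port) are the hypothetical names from the config.py sketch above, so treat all of them as assumptions to check against your versions.
# core_server.py -- minimal sketch; serve_application's signature varies between
# rasa_core releases, so adapt the arguments to the version you are building
import logging

from rasa_core.agent import Agent
from rasa_core.run import serve_application

import config  # the config.py described above (attribute names are assumptions)

logger = logging.getLogger(__name__)

if __name__ == '__main__':
    logging.basicConfig(level=logging.INFO)

    # Load the trained dialogue model mounted into the container; in a full
    # deployment you would typically also pass an interpreter or endpoints
    # pointing at your NLU server and action server (omitted here)
    agent = Agent.load(config.core_model_dir)

    # Start the HTTP server with the channel defined in the credentials file
    # and the REST API that Rasa Platform talks to
    serve_application(agent,
                      channel="rasa",
                      port=config.self_port,
                      credentials_file=config.core_credentials_file,
                      enable_api=True)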
Make sure your config.py is in the same directory.
2. Docker containers for custom Stack
2.1 Rasa NLU
The next step is to write a Dockerfile instructing Docker to install any necessary dependencies and run the script when the container is started:
FROM python:3.6-slim
SHELL ["/bin/bash", "-c"]
RUN apt-get update -qq && apt-get install -y --no-install-recommends \
build-essential wget openssh-client graphviz-dev pkg-config git-core \
openssl libssl-dev libffi6 libffi-dev libpng-dev curl && apt-get clean && \
rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/* && mkdir /app
WORKDIR /app
ARG SPACY_VERSION
RUN pip install spacy==${SPACY_VERSION}
ARG TENSORFLOW_VERSION
RUN pip install tensorflow==${TENSORFLOW_VERSION}
ARG SPACY_ENGLISH_MODEL_BASE
ARG SPACY_ENGLISH_MODEL_VERSION
RUN python -m spacy download "${SPACY_ENGLISH_MODEL_BASE}-${SPACY_ENGLISH_MODEL_VERSION}" --direct && \
python -m spacy link ${SPACY_ENGLISH_MODEL_BASE} en
COPY setup.py alt_requirements/requirements_full.txt ./
COPY ./rasa_nlu ./rasa_nlu/
RUN pip install -r requirements_full.txt
RUN pip install -e .
# copy the custom server script and its config so the CMD below can find them
COPY nlu_server.py config.py ./
VOLUME ["/app/projects", "/app/logs", "/app/data"]
EXPOSE 5000
CMD ["python", "nlu_server.py"]
You can now build your Docker image. The following command creates the image and tags it as <YOUR_RASA_NLU_IMAGE>:
$ sudo docker build -t <YOUR_RASA_NLU_IMAGE> \
    --build-arg SPACY_VERSION=2.0.9 \
    --build-arg TENSORFLOW_VERSION=1.8.0 \
    --build-arg SPACY_ENGLISH_MODEL_BASE=en_core_web_md \
    --build-arg SPACY_ENGLISH_MODEL_VERSION=2.0.0 \
    .
Note
This command requires your Dockerfile to be located in the root directory of the Rasa NLU version you wish to install.
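Concretely, the build context is then expected to look roughly like this (an assumed layout, listing only the files referenced in this guide):
<custom rasa_nlu root>/
├── Dockerfile
├── setup.py
├── nlu_server.py
├── config.py
├── alt_requirements/
│   └── requirements_full.txt
└── rasa_nlu/          # the Python package itself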
You may push your image to your private Docker registry with:
$ sudo docker login -u USER -p PASSWORD https://my-private-docker-registry.com
$ sudo docker push <YOUR_RASA_NLU_IMAGE>
2.2 Rasa Core
You will have to create a Dockerfile which looks like this:
FROM python:3.6-slim
SHELL ["/bin/bash", "-c"]
RUN apt-get update -qq && apt-get install -y --no-install-recommends \
build-essential wget openssh-client graphviz-dev pkg-config git-core \
openssl libssl-dev libffi6 libffi-dev libpng-dev curl && apt-get clean && \
rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/* && mkdir /app
WORKDIR /app
COPY requirements.txt setup.py ./
COPY ./rasa_core ./rasa_core/
RUN pip install -r requirements.txt
RUN pip install -e .
# copy the custom server script and its config so the CMD below can find them
COPY core_server.py config.py ./
VOLUME ["/app/model", "/app/credentials.yml"]
CMD ["python", "core_server.py"]
Note
As for NLU, the build command below requires your Dockerfile to be located in the root directory of the Rasa Core version you would like to install.
Finally, you can build your image with:
$ sudo docker build -t <YOUR_RASA_CORE_IMAGE> .
Pushing this image to your private Docker registry works as follows:
$ sudo docker login -u USER -p PASSWORD https://my-private-docker-registry.com
$ sudo docker push <YOUR_RASA_CORE_IMAGE>
3. Modifying docker-compose config
Case 1: Your custom servers are Docker images
Rasa NLU and Core can both be run as standalone Docker containers which can be used on Rasa Platform instead of an official build. You have to make sure docker-compose points to a Docker image of your custom version of Rasa NLU and Core. To do this, edit or create docker-compose.override.yml, and create entries for the nlu or core service (or both). Here’s an example in which both core and nlu point to custom images:
version: "3.4"
services:
nlu:
image: <YOUR_RASA_NLU_IMAGE>
core:
image: <YOUR_RASA_CORE_IMAGE>
Replace <YOUR_RASA_NLU_IMAGE> and <YOUR_RASA_CORE_IMAGE> with the image names of your custom NLU and Core versions. These could either be images that you’ve built locally on your server, or they could be URLs pointing to your private Docker registry.
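For instance, with images hosted in a private registry, the same override file might reference them by their full registry path (the registry host and tags below are placeholders, matching the example in Section 4):
version: "3.4"
services:
  nlu:
    image: my-private-docker-registry.com/my-custom-nlu:latest
  core:
    image: my-private-docker-registry.com/my-custom-core:latest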
Case 2: Your custom servers run natively in Python
If your custom code runs as plain Python code on another server, you will have to modify your docker-compose.override.yml to make sure the Platform does not run the default Core and NLU services, and instead runs dummy images that do nothing but print a “Hello” message. Your docker-compose.override.yml should contain the following:
version: "3.4"
services:
nlu:
image: hello-world
restart: "no"
volumes: []
depends_on: []
core:
image: hello-world
restart: "no"
volumes: []
depends_on: []
app:
depends_on: []
logger:
depends_on:
- api
- platform-ui
- event-service
- app
- nginx
- mongo
- duckling
- rabbit
4. Deploying the updated Platform
Start your Platform with your custom NLU or Core images by running:
$ cd /etc/rasaplatform
$ sudo docker login -u _json_key -p "$(cat gcr-auth.json)" https://gcr.io
$ sudo docker-compose pull
$ sudo docker-compose up -d
Note
In case your images point to a Docker registry, you need to pull them before starting up the Platform. For example, if your NLU image is located at my-private-docker-registry.com/my-custom-nlu:latest, and your Core image at my-private-docker-registry.com/my-custom-core:latest, the commands are:
$ sudo docker login -u USER -p PASSWORD https://my-private-docker-registry.com
$ sudo docker pull my-private-docker-registry.com/my-custom-nlu:latest
$ sudo docker pull my-private-docker-registry.com/my-custom-core:latest
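Once the containers are up, one way to check that your custom images were picked up is to inspect the service status and logs with standard docker-compose commands, for example:
$ sudo docker-compose ps
$ sudo docker-compose logs --tail=50 nlu core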