Running in Docker
Images

Rasa NLU Docker images are provided for different backends:

spacy
: If you use the pretrained_embeddings_spacy pipeline

tensorflow
: If you use the supervised_embeddings pipeline

mitie
: If you use the mitie pipeline

bare
: If you want to take a base image and enhance it with your custom dependencies

full (default)
: If you use components from different pre-defined pipelines and want to have everything included in the image.
Note

For the tensorflow and full images, an x86_64 CPU with AVX support is required.
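To fetch one of the backend-specific images, append the backend name to the image tag. A minimal sketch, assuming a latest-<backend> tag scheme (the exact tags available are listed on Docker Hub for rasa/rasa_nlu):

# pull the image with the spaCy backend preinstalled
docker pull rasa/rasa_nlu:latest-spacy
# pull the full image (the default)
docker pull rasa/rasa_nlu:latest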
Training NLU

To train an NLU model, you need to mount two directories into the Docker container:
- a directory containing your project which in turn includes your NLU configuration and your NLU training data
- a directory which will contain the trained NLU model
docker run \
    -v <project_directory>:/app/project \
    -v <model_output_directory>:/app/model \
    rasa/rasa_nlu:latest \
    run \
    python -m rasa_nlu.train \
    -c /app/project/<nlu configuration>.yml \
    -d /app/project/<nlu data> \
    -o /app/model \
    --project <nlu project name>
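For example, a filled-in invocation might look as follows. This is a sketch, not the only valid layout: config.yml, data/nlu.md, and my_project are placeholder names for your own configuration file, training data, and project name:

# Train with the current directory mounted as the project;
# the trained model is written to ./models on the host.
docker run \
    -v $(pwd):/app/project \
    -v $(pwd)/models:/app/model \
    rasa/rasa_nlu:latest \
    run \
    python -m rasa_nlu.train \
    -c /app/project/config.yml \
    -d /app/project/data/nlu.md \
    -o /app/model \
    --project my_project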
Running NLU with Rasa Core

See this guide, which describes how to set up all Rasa components as Docker containers and how to connect them.
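The key idea is a shared Docker network, so the containers can reach each other by container name. A minimal sketch of the NLU side (the network and container names here are placeholders; the full setup, including the Rasa Core side, is covered in the guide above):

# Create a shared network and start the NLU server on it.
docker network create rasa-net
docker run -d --name rasa_nlu --network rasa-net \
    -v <directory with nlu models>:/app/projects \
    rasa/rasa_nlu:latest \
    start --path /app/projects --port 5000
# A Rasa Core container attached to rasa-net can then reach this server
# at http://rasa_nlu:5000 (the container name doubles as its hostname).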
Running NLU as Standalone Server

To run NLU as a server, you have to:
- mount a directory with the trained NLU models
- expose a port
docker run \
    -p 5000:5000 \
    -v <directory with nlu models>:/app/projects \
    rasa/rasa_nlu:latest \
    start \
    --path /app/projects \
    --port 5000
You can then send requests to your NLU server as described in the HTTP API, e.g. if it is running on localhost:
curl --request GET \
--url 'http://localhost:5000/parse?q=Hello%20world!'
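If the server hosts multiple projects, the parse endpoint also accepts a project query parameter to select which one to use. A sketch, assuming a project named my_project was trained as shown above:

curl --request GET \
    --url 'http://localhost:5000/parse?q=Hello%20world!&project=my_project'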
Have questions or feedback?

We have a very active support community on the Rasa Community Forum that is happy to help you with your questions. If you have any feedback for us or a specific suggestion for improving the docs, feel free to share it by creating an issue on the Rasa NLU GitHub repository.