# Component Configuration

Note: As of Rasa NLU 0.15, we have renamed the pre-defined pipelines to reflect what they do rather than which libraries they use. The tensorflow_embedding pipeline is now called supervised_embeddings, and spacy_sklearn is now known as pretrained_embeddings_spacy. Please update your code if you are using these.

This is a reference of the configuration options for every built-in component in Rasa NLU. If you want to build a custom component, check out the section on custom components.
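Components are arranged into a pipeline in your model configuration file. As a quick orientation before the per-component reference, here is a minimal sketch of such a file (the file name config.yml is an assumption; both variants use the renamed pipeline names from the note above):

```yaml
# config.yml (file name is illustrative)
language: "en"

# Either reference a pre-defined pipeline by name ...
pipeline: "supervised_embeddings"

# ... or list the components explicitly, e.g.:
# pipeline:
# - name: "CountVectorsFeaturizer"
# - name: "EmbeddingIntentClassifier"
```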

## MitieNLP

**Short:** MITIE initializer
**Outputs:** nothing

**Description:** Initializes MITIE structures. Every MITIE component relies on this, hence it should be put at the beginning of every pipeline that uses any MITIE components.

**Configuration:** The MITIE library needs a language model file, which must be specified in the configuration:

```yaml
pipeline:
- name: "MitieNLP"
  # language model to load
  model: "data/total_word_feature_extractor.dat"
```

For more information on where to get that file, head over to Installation.

## SpacyNLP

**Short:** spacy language initializer
**Outputs:** nothing

**Description:** Initializes spacy structures. Every spacy component relies on this, hence it should be put at the beginning of every pipeline that uses any spacy components.

**Configuration:** Language model; by default, the configured language is used. If the spacy model to be used has a name that is different from the language tag ("en", "de", etc.), the model name can be specified using this configuration variable. The name will be passed to spacy.load(name).

```yaml
pipeline:
- name: "SpacyNLP"
  # language model to load
  model: "en_core_web_md"

  # when retrieving word vectors, this will decide if the casing
  # of the word is relevant. E.g. "hello" and "Hello" will
  # retrieve the same vector if set to false. For some
  # applications and models it makes sense to differentiate
  # between these two words, therefore set this to true.
  case_sensitive: false
```

## MitieFeaturizer

**Short:** MITIE intent featurizer
**Outputs:** nothing, used as an input to intent classifiers that need intent features (e.g. SklearnIntentClassifier)

**Description:** Creates features for intent classification using the MITIE featurizer.

Note: NOT used by the MitieIntentClassifier component. Currently, only SklearnIntentClassifier is able to use precomputed features.

**Configuration:**

```yaml
pipeline:
- name: "MitieFeaturizer"
```

## SpacyFeaturizer

**Short:** spacy intent featurizer
**Outputs:** nothing, used as an input to intent classifiers that need intent features (e.g. SklearnIntentClassifier)

**Description:** Creates features for intent classification using the spacy featurizer.
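As a minimal sketch of where this featurizer sits in a pipeline (the ordering follows the pretrained_embeddings_spacy pipeline; the classifier at the end is one possible choice):

```yaml
pipeline:
- name: "SpacyNLP"                 # initializes spacy, must come first
- name: "SpacyTokenizer"
- name: "SpacyFeaturizer"
- name: "SklearnIntentClassifier"  # consumes the spacy intent features
```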

## NGramFeaturizer

**Short:** Appends char-ngram features to feature vector
**Outputs:** nothing, appends its features to an existing feature vector generated by another intent featurizer

**Description:** This featurizer appends character ngram features to a feature vector. During training the component looks for the most common character sequences (e.g. app or ing). The added features represent a boolean flag indicating whether the character sequence is present in the word sequence or not.

Note: There needs to be another intent featurizer previous to this one in the pipeline!

**Configuration:**

```yaml
pipeline:
- name: "NGramFeaturizer"
  # Maximum number of ngrams to use when augmenting
  # feature vectors with character ngrams
  max_number_of_ngrams: 10
```

## CountVectorsFeaturizer

**Short:** Creates bag-of-words representation of intent features
**Outputs:** nothing, used as an input to intent classifiers that need bag-of-words representation of intent features (e.g. EmbeddingIntentClassifier)

**Description:** Creates a bag-of-words representation of intent features using sklearn's CountVectorizer. All tokens which consist only of digits (e.g. 123 and 99, but not a123d) will be assigned to the same feature.

Note: If the words in the model language cannot be split by whitespace, a language-specific tokenizer is required in the pipeline before this component (e.g. using JiebaTokenizer for Chinese).

**Configuration:** See sklearn's CountVectorizer docs for a detailed description of the configuration parameters.

This featurizer can be configured to use word or character n-grams, using the analyzer config parameter. By default analyzer is set to word, so word token counts are used as features. If you want to use character n-grams, set analyzer to char or char_wb.

Note: The option char_wb creates character n-grams only from text inside word boundaries; n-grams at the edges of words are padded with space. This option can be used to create Subword Semantic Hashing.

Note: For character n-grams, do not forget to increase the min_ngram and max_ngram parameters. Otherwise the vocabulary will contain only single letters.

Handling Out-Of-Vocabulary (OOV) words:

Note: Enabled only if analyzer is word.

Since the training is performed on limited vocabulary data, it cannot be guaranteed that during prediction an algorithm will not encounter an unknown word (a word that was not seen during training). In order to teach an algorithm how to treat unknown words, some words in the training data can be substituted by the generic word OOV_token. During prediction, all unknown words will then be treated as this generic word OOV_token. For example, one might create a separate intent outofscope in the training data containing messages with different numbers of OOV_token s and maybe some additional general words. An algorithm will then likely classify a message with unknown words as the intent outofscope.

Note: This featurizer creates a bag-of-words representation by counting words, so the number of OOV_token s in a sentence might be important.

- OOV_token sets a keyword for unseen words; if the training data contains OOV_token as a word in some messages, during prediction the words that were not seen during training will be substituted with the provided OOV_token; if OOV_token=None (default behaviour), words that were not seen during training will be ignored during prediction;
- OOV_words sets a list of words to be treated as OOV_token during training; if a list of words that should be treated as Out-Of-Vocabulary is known, it can be set to OOV_words instead of manually changing the training data or using a custom preprocessor.

Note: Providing OOV_words is optional; training data can contain OOV_token input manually or by a custom additional preprocessor. Unseen words will be substituted with OOV_token only if this token is present in the training data or the OOV_words list is provided.

```yaml
pipeline:
- name: "CountVectorsFeaturizer"
  # whether to use word or character n-grams
  # 'char_wb' creates character n-grams only inside word boundaries
  # n-grams at the edges of words are padded with space.
  "analyzer": 'word'  # use 'char' or 'char_wb' for character
  # the parameters below are taken from
  # sklearn's CountVectorizer
  # regular expression for tokens
  "token_pattern": r'(?u)\b\w\w+\b'
  # remove accents during the preprocessing step
  "strip_accents": None  # {'ascii', 'unicode', None}
  # list of stop words
  "stop_words": None  # string {'english'}, list, or None (default)
  # min document frequency of a word to add to vocabulary
  # float - the parameter represents a proportion of documents
  # integer - absolute counts
  "min_df": 1  # float in range [0.0, 1.0] or int
  # max document frequency of a word to add to vocabulary
  # float - the parameter represents a proportion of documents
  # integer - absolute counts
  "max_df": 1.0  # float in range [0.0, 1.0] or int
  # set ngram range
  "min_ngram": 1  # int
  "max_ngram": 1  # int
  # limit vocabulary size
  "max_features": None  # int or None
  # whether to convert all characters to lowercase
  "lowercase": true  # bool
  # handling Out-Of-Vocabulary (OOV) words
  # will be converted to lowercase if lowercase is true
  "OOV_token": None  # string or None
  "OOV_words": []  # list of strings
```

## KeywordIntentClassifier

**Short:** Simple keyword matching intent classifier.
**Outputs:** intent

**Output-Example:**

```json
{
    "intent": {"name": "greet", "confidence": 0.98343}
}
```

**Description:** This classifier is mostly used as a placeholder. It is able to recognize hello and goodbye intents by searching for these keywords in the passed messages.
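Following the convention of the other classifiers in this reference, adding it to a pipeline only requires its name (no configuration options are documented here):

```yaml
pipeline:
- name: "KeywordIntentClassifier"
```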

## MitieIntentClassifier

**Short:** MITIE intent classifier (using a text categorizer)
**Outputs:** intent

**Output-Example:**

```json
{
    "intent": {"name": "greet", "confidence": 0.98343}
}
```

**Description:** This classifier uses MITIE to perform intent classification. The underlying classifier uses a multi-class linear SVM with a sparse linear kernel (see the MITIE trainer code).

**Configuration:**

```yaml
pipeline:
- name: "MitieIntentClassifier"
```

## SklearnIntentClassifier

**Short:** sklearn intent classifier
**Outputs:** intent and intent_ranking

**Output-Example:**

```json
{
    "intent": {"name": "greet", "confidence": 0.78343},
    "intent_ranking": [
        {
            "confidence": 0.1485910906220309,
            "name": "goodbye"
        },
        {
            "confidence": 0.08161531595656784,
            "name": "restaurant_search"
        }
    ]
}
```

**Description:** The sklearn intent classifier trains a linear SVM which gets optimized using a grid search. In addition to the winning intent, it also provides rankings of the labels that did not "win". The sklearn intent classifier needs to be preceded by a featurizer in the pipeline. This featurizer creates the features used for the classification.

**Configuration:** During the training of the SVM, a hyperparameter search is run to find the best parameter set. In the config, you can specify the parameters that will be tried:

```yaml
pipeline:
- name: "SklearnIntentClassifier"
  # Specifies the list of regularization values to
  # cross-validate over for C-SVM.
  # This is used with the kernel hyperparameter in GridSearchCV.
  C: [1, 2, 5, 10, 20, 100]
  # Specifies the kernel to use with C-SVM.
  # This is used with the C hyperparameter in GridSearchCV.
  kernels: ["linear"]
```

## EmbeddingIntentClassifier

**Short:** Embedding intent classifier
**Outputs:** intent and intent_ranking

**Output-Example:**

```json
{
    "intent": {"name": "greet", "confidence": 0.8343},
    "intent_ranking": [
        {
            "confidence": 0.385910906220309,
            "name": "goodbye"
        },
        {
            "confidence": 0.28161531595656784,
            "name": "restaurant_search"
        }
    ]
}
```

**Description:** The embedding intent classifier embeds user inputs and intent labels into the same space. Supervised embeddings are trained by maximizing similarity between them. This algorithm is based on the starspace idea from https://arxiv.org/abs/1709.03856. However, in this implementation the mu parameter is treated differently, and additional hidden layers are added together with dropout. This algorithm also provides similarity rankings of the labels that did not "win".

The embedding intent classifier needs to be preceded by a featurizer in the pipeline. This featurizer creates the features used for the embeddings. It is recommended to use CountVectorsFeaturizer, which can optionally be preceded by SpacyNLP and SpacyTokenizer.

Note: If during prediction time a message contains only words unseen during training, and no Out-Of-Vocabulary preprocessor was used, the empty intent "" is predicted with confidence 0.0.

**Configuration:**

If you want to split intents into multiple labels, e.g. for predicting multiple intents or for modeling hierarchical intent structure, use these flags (see the sketch after this section's parameter reference):

- tokenization of intent labels:
  - intent_tokenization_flag: if true the algorithm will split the intent labels into tokens and use bag-of-words representations for them; default false;
  - intent_split_symbol: sets the delimiter string to split the intent labels; default _.

The algorithm also has hyperparameters to control:

- neural network's architecture:
  - hidden_layers_sizes_a sets a list of hidden layer sizes before the embedding layer for user inputs; the number of hidden layers is equal to the length of the list;
  - hidden_layers_sizes_b sets a list of hidden layer sizes before the embedding layer for intent labels; the number of hidden layers is equal to the length of the list;
- training:
  - batch_size sets the number of training examples in one forward/backward pass; the higher the batch size, the more memory you'll need;
  - epochs sets the number of times the algorithm will see the training data, where one epoch equals one forward pass and one backward pass of all the training examples;
- embedding:
  - embed_dim sets the dimension of the embedding space;
  - mu_pos controls how similar the algorithm should try to make embedding vectors for correct intent labels;
  - mu_neg controls the maximum negative similarity for incorrect intents;
  - similarity_type sets the type of similarity; it should be either cosine or inner;
  - num_neg sets the number of incorrect intent labels; the algorithm will minimize their similarity to the user input during training;
  - use_max_sim_neg: if true the algorithm only minimizes the maximum similarity over incorrect intent labels;
  - random_seed (None or int): an integer sets the random seed for numpy and tensorflow, so that the random initialisation is always the same and produces the same training result;
- regularization:
  - C2 sets the scale of L2 regularization;
  - C_emb sets the scale of how important it is to minimize the maximum similarity between embeddings of different intent labels;
  - droprate sets the dropout rate; it should be between 0 and 1, e.g. droprate=0.1 would drop out 10% of input units.

Note: For cosine similarity, mu_pos and mu_neg should be between -1 and 1.

Note: There is an option to use a linearly increasing batch size. The idea comes from https://arxiv.org/abs/1711.00489. To use it, pass a list to batch_size, e.g. "batch_size": [64, 256] (default behaviour). If a constant batch_size is required, pass an int, e.g. "batch_size": 64.

In the config, you can specify these parameters:

```yaml
pipeline:
- name: "EmbeddingIntentClassifier"
  # nn architecture
  "hidden_layers_sizes_a": [256, 128]
  "hidden_layers_sizes_b": []
  "batch_size": [64, 256]
  "epochs": 300
  # embedding parameters
  "embed_dim": 20
  "mu_pos": 0.8  # should be 0.0 < ... < 1.0 for 'cosine'
  "mu_neg": -0.4  # should be -1.0 < ... < 1.0 for 'cosine'
  "similarity_type": "cosine"  # string 'cosine' or 'inner'
  "num_neg": 20
  "use_max_sim_neg": true  # flag which loss function to use
  "random_seed": None  # set to any int to get a reproducible training result
  # regularization
  "C2": 0.002
  "C_emb": 0.8
  "droprate": 0.2
  # flag for tokenizing intents
  "intent_tokenization_flag": false
  "intent_split_symbol": "_"
  # visualization of accuracy
  "evaluate_every_num_epochs": 10  # small values may hurt performance
  "evaluate_on_num_examples": 1000  # large values may hurt performance
```

Note: The parameter mu_neg is set to a negative value to mimic the original starspace algorithm in the case mu_neg = mu_pos and use_max_sim_neg = False. See the starspace paper for details.
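As an example of the intent tokenization flags, the following sketch enables multi-intent prediction (the "+" delimiter and the combined intent label greet+ask_weather are illustrative choices, not defaults):

```yaml
pipeline:
- name: "CountVectorsFeaturizer"
- name: "EmbeddingIntentClassifier"
  # split intent labels such as "greet+ask_weather" into tokens
  intent_tokenization_flag: true
  intent_split_symbol: "+"   # illustrative; the default is "_"
```

With this configuration, the label greet+ask_weather is represented as a bag of the tokens greet and ask_weather, so a message can be embedded close to both parts.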

## RegexFeaturizer

**Short:** regex feature creation to support intent and entity classification
**Outputs:** text_features and tokens.pattern

**Description:** During training, the regex intent featurizer creates a list of regular expressions defined in the training data format. For each regex, a feature will be set marking whether this expression was found in the input. This feature will later be fed into the intent classifier / entity extractor to simplify classification (assuming the classifier has learned during the training phase that this set feature indicates a certain intent). Regex features for entity extraction are currently only supported by the CRFEntityExtractor component!

Note: There needs to be a tokenizer previous to this featurizer in the pipeline!
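For reference, regexes are declared alongside the training examples; in the JSON training data format this might look like the following (a sketch; the zipcode name and pattern are illustrative):

```json
{
  "rasa_nlu_data": {
    "regex_features": [
      {
        "name": "zipcode",
        "pattern": "[0-9]{5}"
      }
    ]
  }
}
```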

## WhitespaceTokenizer

**Short:** Tokenizer using whitespaces as a separator
**Outputs:** nothing

**Description:** Creates a token for every whitespace separated character sequence. Can be used to define tokens for the MITIE entity extractor.
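As with the other tokenizers, adding it to a pipeline only requires its name (no options are documented in this reference):

```yaml
pipeline:
- name: "WhitespaceTokenizer"
```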

## JiebaTokenizer

**Short:** Tokenizer using Jieba for the Chinese language
**Outputs:** nothing

**Description:** Creates tokens using the Jieba tokenizer, specifically for the Chinese language. For languages other than Chinese, Jieba will work as the WhitespaceTokenizer. Can be used to define tokens for the MITIE entity extractor. Make sure to install Jieba: pip install jieba.

**Configuration:** Custom dictionary files can be loaded automatically by specifying the directory containing them via dictionary_path:

```yaml
pipeline:
- name: "JiebaTokenizer"
  dictionary_path: "path/to/custom/dictionary/dir"
```

If the dictionary_path is None (the default), then no custom dictionary will be used.

## MitieTokenizer

**Short:** Tokenizer using MITIE
**Outputs:** nothing

**Description:** Creates tokens using the MITIE tokenizer. Can be used to define tokens for the MITIE entity extractor.

**Configuration:**

```yaml
pipeline:
- name: "MitieTokenizer"
```

## SpacyTokenizer

**Short:** Tokenizer using spacy
**Outputs:** nothing

**Description:** Creates tokens using the spacy tokenizer. Can be used to define tokens for the MITIE entity extractor.
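Since every spacy component relies on SpacyNLP, a pipeline using this tokenizer starts with the initializer (a minimal sketch):

```yaml
pipeline:
- name: "SpacyNLP"        # must come first: initializes spacy
- name: "SpacyTokenizer"
```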

## MitieEntityExtractor

**Short:** MITIE entity extraction (using a MITIE NER trainer)
**Outputs:** appends entities

**Output-Example:**

```json
{
    "entities": [{"value": "New York City",
                  "start": 20,
                  "end": 33,
                  "confidence": null,
                  "entity": "city",
                  "extractor": "MitieEntityExtractor"}]
}
```

**Description:** This uses the MITIE entity extraction to find entities in a message. The underlying classifier uses a multi-class linear SVM with a sparse linear kernel and custom features. The MITIE component does not provide entity confidence values.

**Configuration:**

```yaml
pipeline:
- name: "MitieEntityExtractor"
```

## SpacyEntityExtractor

**Short:** spacy entity extraction
**Outputs:** appends entities

**Output-Example:**

```json
{
    "entities": [{"value": "New York City",
                  "start": 20,
                  "end": 33,
                  "entity": "city",
                  "confidence": null,
                  "extractor": "SpacyEntityExtractor"}]
}
```

**Description:** Using spacy, this component predicts the entities of a message. spacy uses a statistical BILOU transition model. As of now, this component can only use the spacy built-in entity extraction models and cannot be retrained. This extractor does not provide any confidence scores.

**Configuration:** Configure which dimensions, i.e. entity types, the spacy component should extract. A full list of available dimensions can be found in the spacy documentation. Leaving the dimensions option unspecified will extract all available dimensions.

```yaml
pipeline:
- name: "SpacyEntityExtractor"
  # dimensions to extract
  dimensions: ["PERSON", "LOC", "ORG", "PRODUCT"]
```

## EntitySynonymMapper

**Short:** Maps synonymous entity values to the same value.
**Outputs:** modifies existing entities that previous entity extraction components found

**Description:** If the training data contains defined synonyms (by using the value attribute on the entity examples), this component will make sure that detected entity values are mapped to the same value. For example, if your training data contains the following examples:

```json
[{
  "text": "I moved to New York City",
  "intent": "inform_relocation",
  "entities": [{"value": "nyc",
                "start": 11,
                "end": 24,
                "entity": "city"}]
},
{
  "text": "I got a new flat in NYC.",
  "intent": "inform_relocation",
  "entities": [{"value": "nyc",
                "start": 20,
                "end": 23,
                "entity": "city"}]
}]
```

this component will allow you to map the entities New York City and NYC to nyc. The entity extraction will return nyc even though the message contains NYC. When this component changes an existing entity, it appends itself to the processor list of this entity.
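Synonyms can also be declared directly in the training data instead of on individual examples; in the JSON training data format this might look like the following (a sketch; the values are illustrative):

```json
{
  "rasa_nlu_data": {
    "entity_synonyms": [
      {
        "value": "nyc",
        "synonyms": ["NYC", "New York City"]
      }
    ]
  }
}
```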

## CRFEntityExtractor

**Short:** conditional random field entity extraction
**Outputs:** appends entities

**Output-Example:**

```json
{
    "entities": [{"value": "New York City",
                  "start": 20,
                  "end": 33,
                  "entity": "city",
                  "confidence": 0.874,
                  "extractor": "CRFEntityExtractor"}]
}
```

**Description:** This component implements conditional random fields to do named entity recognition. CRFs can be thought of as an undirected Markov chain where the time steps are words and the states are entity classes. Features of the words (capitalisation, POS tagging, etc.) give probabilities to certain entity classes, as do transitions between neighbouring entity tags: the most likely set of tags is then calculated and returned. If POS features are used (pos or pos2), spaCy has to be installed.

**Configuration:**

```yaml
pipeline:
- name: "CRFEntityExtractor"
  # The features are a [before, word, after] array with
  # before, word, after holding keys about which
  # features to use for each word, for example, "title"
  # in array before will have the feature
  # "is the preceding word in title case?".
  # Available features are:
  # low, title, suffix5, suffix3, suffix2,
  # suffix1, pos, pos2, prefix5, prefix2,
  # bias, upper and digit
  features: [["low", "title"], ["bias", "suffix3"], ["upper", "pos", "pos2"]]

  # The flag determines whether to use BILOU tagging or not. BILOU
  # tagging is more rigorous, however it
  # requires more examples per entity. Rule of thumb: use only
  # if more than 100 examples per entity.
  BILOU_flag: true

  # This is the value given to the sklearn_crfsuite.CRF tagger before training.
  max_iterations: 50

  # This is the value given to the sklearn_crfsuite.CRF tagger before training.
  # Specifies the L1 regularization coefficient.
  L1_c: 0.1

  # This is the value given to the sklearn_crfsuite.CRF tagger before training.
  # Specifies the L2 regularization coefficient.
  L2_c: 0.1
```

## DucklingHTTPExtractor

**Short:** Duckling lets you extract common entities like dates, amounts of money, distances, and others in a number of languages.
**Outputs:** appends entities

**Output-Example:**

```json
{
    "entities": [{"end": 53,
                  "entity": "time",
                  "start": 48,
                  "value": "2017-04-10T00:00:00.000+02:00",
                  "confidence": 1.0,
                  "extractor": "DucklingHTTPExtractor"}]
}
```

**Description:** To use this component you need to run a duckling server. The easiest option is to spin up a docker container using docker run -p 8000:8000 rasa/duckling. Alternatively, you can install duckling directly on your machine and start the server.

Duckling allows you to recognize dates, numbers, distances and other structured entities and normalizes them (for a reference of all available entities see the duckling documentation). Please be aware that duckling tries to extract as many entity types as possible without providing a ranking. For example, if you specify both number and time as dimensions for the duckling component, the component will extract two entities from the text "I will be there in 10 minutes": 10 as a number and in 10 minutes as a time. In such a situation, your application would have to decide which entity type is the correct one. The extractor will always return 1.0 as a confidence, as it is a rule based system.

**Configuration:** Configure which dimensions, i.e. entity types, the duckling component should extract. A full list of available dimensions can be found in the duckling documentation. Leaving the dimensions option unspecified will extract all available dimensions.

```yaml
pipeline:
- name: "DucklingHTTPExtractor"
  # url of the running duckling server
  url: "http://localhost:8000"
  # dimensions to extract
  dimensions: ["time", "number", "amount-of-money", "distance"]
  # allows you to configure the locale, by default the language is
  # used
  locale: "de_DE"
  # if not set the default timezone of Duckling is going to be used
  # needed to calculate dates from relative expressions like "tomorrow"
  timezone: "Europe/Berlin"
```
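To verify that the duckling server is reachable before adding the component, you can query its HTTP API directly (a sketch based on duckling's documented /parse endpoint; the locale and text are arbitrary examples):

```bash
# spin up the server as described above
docker run -p 8000:8000 rasa/duckling

# send a test request to the /parse endpoint
curl -XPOST http://localhost:8000/parse --data 'locale=en_GB&text=tomorrow at eight'
```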
