WARNING: Version 2.2 of Elasticsearch has passed its EOL date.
This documentation is no longer being maintained and may be removed. If you are running this version, we strongly advise you to upgrade. For the latest information, see the current release documentation.
Compound Word Token Filter
The hyphenation_decompounder and dictionary_decompounder token filters can
decompose compound words found in many Germanic languages into word parts.
Both token filters require a dictionary of word parts, which can be provided as:

- An array of words, specified inline in the token filter configuration, or
- The path (either absolute or relative to the config directory) to a UTF-8 encoded file containing one word per line.
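As a minimal sketch of the two options, assuming a dictionary file placed under the config directory (the filter names, the word list, and the analysis/words.txt path are placeholders):

    index :
        analysis :
            filter :
                decompounder_inline :
                    type : dictionary_decompounder
                    word_list : [donau, dampf, schiff]       # inline word parts
                decompounder_from_file :
                    type : dictionary_decompounder
                    word_list_path : analysis/words.txt      # one word per line, UTF-8 encoded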
Hyphenation decompounder
The hyphenation_decompounder
uses hyphenation grammars to find potential
subwords that are then checked against the word dictionary. The quality of the
output tokens is directly connected to the quality of the grammar file you
use. For languages like German, the available grammar files are quite good.
XML-based hyphenation grammar files can be found in the
Objects For Formatting Objects
(OFFO) Sourceforge project. Currently only FOP v1.2 compatible hyphenation files
are supported. You can download offo-hyphenation_v1.2.zip
directly and look in the offo-hyphenation/hyph/
directory.
Credits for the hyphenation code go to the Apache FOP project.
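For example, a hyphenation_decompounder wired up with a German pattern file might look like this (a sketch; the filter name, the dictionary file, and the de_DR.xml pattern file are assumptions about your setup, with the pattern file taken from the OFFO archive and copied under the config directory):

    index :
        analysis :
            filter :
                german_decompounder :
                    type : hyphenation_decompounder
                    word_list_path : analysis/german_words.txt      # placeholder word dictionary
                    hyphenation_patterns_path : analysis/de_DR.xml  # pattern file from the OFFO archive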
Dictionary decompounder
The dictionary_decompounder
uses a brute force approach in conjunction with
only the word dictionary to find subwords in a compound word. It is much
slower than the hyphenation decompounder, but can be used as a starting point to
check the quality of your dictionary.
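As a sketch of what the brute-force matching produces (the analyzer name and word list are placeholders; the decompounder keeps the original token and adds matching subwords at the same position):

    index :
        analysis :
            analyzer :
                dictionary_check :
                    type : custom
                    tokenizer : standard
                    filter : [lowercase, quality_check_decompounder]
            filter :
                quality_check_decompounder :
                    type : dictionary_decompounder
                    word_list : [donau, dampf, schiff]
    # Analyzing "Donaudampfschiff" with this analyzer emits the original token
    # "donaudampfschiff" plus the subword tokens "donau", "dampf" and "schiff".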
Compound token filter parameters
The following parameters can be used to configure a compound word token filter:
type
    Either dictionary_decompounder or hyphenation_decompounder.
word_list
    An array containing a list of words to use for the word dictionary.
word_list_path
    The path (either absolute or relative to the config directory) to the word dictionary.
hyphenation_patterns_path
    The path (either absolute or relative to the config directory) to a FOP XML hyphenation patterns file. (required for the hyphenation_decompounder)
min_word_size
    Minimum word size. Defaults to 5.
min_subword_size
    Minimum subword size. Defaults to 2.
max_subword_size
    Maximum subword size. Defaults to 15.
only_longest_match
    Whether to include only the longest matching subword or not. Defaults to false.
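As a sketch of how these parameters interact (the filter name, dictionary path, and values are illustrative, not recommendations):

    index :
        analysis :
            filter :
                strict_decompounder :
                    type : dictionary_decompounder
                    word_list_path : analysis/words.txt   # placeholder dictionary file
                    min_word_size : 7                     # leave tokens shorter than 7 characters untouched
                    min_subword_size : 4                  # ignore dictionary matches shorter than 4 characters
                    only_longest_match : true             # keep only the longest match, not every overlapping one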
Here is a complete example that combines both filters:
index :
    analysis :
        analyzer :
            myAnalyzer2 :
                type : custom
                tokenizer : standard
                filter : [myTokenFilter1, myTokenFilter2]
        filter :
            myTokenFilter1 :
                type : dictionary_decompounder
                word_list: [one, two, three]
            myTokenFilter2 :
                type : hyphenation_decompounder
                word_list_path: path/to/words.txt
                hyphenation_patterns_path: path/to/fop.xml
                max_subword_size : 22