Text field type

A field to index full-text values, such as the body of an email or the description of a product. These fields are analyzed, that is, they are passed through an analyzer to convert the string into a list of individual terms before being indexed. The analysis process allows Elasticsearch to search for individual words within each full text field. Text fields are not used for sorting and are seldom used for aggregations (although the significant text aggregation is a notable exception).

If you need to index structured content such as email addresses, hostnames, status codes, or tags, you should probably use a keyword field instead.

Below is an example of a mapping for a text field:

PUT my-index-000001
{
  "mappings": {
    "properties": {
      "full_name": {
        "type":  "text"
      }
    }
  }
}

Use a field as both text and keyword

Sometimes it is useful to have both a full text (text) and a keyword (keyword) version of the same field: one for full text search and the other for aggregations and sorting. This can be achieved with multi-fields.
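
For example, building on the mapping above, the full_name field could also be indexed as a keyword sub-field (naming the sub-field keyword is a common convention, not a requirement):

PUT my-index-000001
{
  "mappings": {
    "properties": {
      "full_name": {
        "type": "text",
        "fields": {
          "keyword": {
            "type": "keyword"
          }
        }
      }
    }
  }
}

Queries can then use full_name for full text search and full_name.keyword for sorting and aggregations.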

Parameters for text fields

The following parameters are accepted by text fields:

analyzer

The analyzer which should be used for the text field, both at index-time and at search-time (unless overridden by the search_analyzer). Defaults to the default index analyzer, or the standard analyzer.
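
For example, a text field could be configured to use the built-in english analyzer (the description field name is only illustrative):

PUT my-index-000001
{
  "mappings": {
    "properties": {
      "description": {
        "type": "text",
        "analyzer": "english"
      }
    }
  }
}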

boost

Mapping field-level query time boosting. Accepts a floating point number, defaults to 1.0.

eager_global_ordinals

Should global ordinals be loaded eagerly on refresh? Accepts true or false (default). Enabling this is a good idea on fields that are frequently used for (significant) terms aggregations.
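
As a sketch (the tags field name is hypothetical), a text field that is frequently used in terms aggregations might enable eager global ordinals; since aggregating on a text field requires fielddata, the sketch enables both settings:

PUT my-index-000001
{
  "mappings": {
    "properties": {
      "tags": {
        "type": "text",
        "fielddata": true,
        "eager_global_ordinals": true
      }
    }
  }
}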

fielddata

Can the field use in-memory fielddata for sorting, aggregations, or scripting? Accepts true or false (default).

fielddata_frequency_filter

Expert settings which allow you to decide which values to load in memory when fielddata is enabled. By default, all values are loaded.

fields

Multi-fields allow the same string value to be indexed in multiple ways for different purposes, such as one field for search and a multi-field for sorting and aggregations, or the same string value analyzed by different analyzers.

index

Should the field be searchable? Accepts true (default) or false.

index_options

What information should be stored in the index, for search and highlighting purposes. Defaults to positions.
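
For example, indexing offsets in addition to positions can make highlighting of large fields cheaper (the body field name is only illustrative):

PUT my-index-000001
{
  "mappings": {
    "properties": {
      "body": {
        "type": "text",
        "index_options": "offsets"
      }
    }
  }
}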

index_prefixes

If enabled, term prefixes of between 2 and 5 characters are indexed into a separate field. This allows prefix searches to run more efficiently, at the expense of a larger index.
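
For example (the default values are shown explicitly here; an empty index_prefixes object would have the same effect):

PUT my-index-000001
{
  "mappings": {
    "properties": {
      "full_name": {
        "type": "text",
        "index_prefixes": {
          "min_chars": 2,
          "max_chars": 5
        }
      }
    }
  }
}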

index_phrases

If enabled, two-term word combinations (shingles) are indexed into a separate field. This allows exact phrase queries (no slop) to run more efficiently, at the expense of a larger index. Note that this works best when stopwords are not removed, as phrases containing stopwords will not use the subsidiary field and will fall back to a standard phrase query. Accepts true or false (default).
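
For example, enabling shingle indexing on a (hypothetical) comment field:

PUT my-index-000001
{
  "mappings": {
    "properties": {
      "comment": {
        "type": "text",
        "index_phrases": true
      }
    }
  }
}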

norms

Whether field-length should be taken into account when scoring queries. Accepts true (default) or false.

position_increment_gap

The number of fake term positions which should be inserted between each element of an array of strings. Defaults to the position_increment_gap configured on the analyzer, which defaults to 100. 100 was chosen because it prevents phrase queries with reasonably large slops (less than 100) from matching terms across field values.
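
As an illustration, with the default gap of 100 a phrase query will not match across the elements of a (dynamically mapped, hypothetical) names array unless it uses a slop of 100 or more:

PUT my-index-000001/_doc/1
{
  "names": [ "John Abraham", "Lincoln Smith" ]
}

GET my-index-000001/_search
{
  "query": {
    "match_phrase": {
      "names": "Abraham Lincoln"
    }
  }
}

The phrase query above returns no hits, because the terms abraham and lincoln are separated by the 100-position gap between the two array elements.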

store

Whether the field value should be stored and retrievable separately from the _source field. Accepts true or false (default).

search_analyzer

The analyzer that should be used at search time on the text field. Defaults to the analyzer setting.

search_quote_analyzer

The analyzer that should be used at search time when a phrase is encountered. Defaults to the search_analyzer setting.
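
A common pattern, sketched here with hypothetical analyzer and field names, is to keep stop words in the index, strip them from ordinary full text queries with search_analyzer, and keep them for quoted phrases with search_quote_analyzer:

PUT my-index-000001
{
  "settings": {
    "analysis": {
      "filter": {
        "english_stop": {
          "type": "stop",
          "stopwords": "_english_"
        }
      },
      "analyzer": {
        "my_analyzer": {
          "type": "custom",
          "tokenizer": "standard",
          "filter": [ "lowercase" ]
        },
        "my_stop_analyzer": {
          "type": "custom",
          "tokenizer": "standard",
          "filter": [ "lowercase", "english_stop" ]
        }
      }
    }
  },
  "mappings": {
    "properties": {
      "title": {
        "type": "text",
        "analyzer": "my_analyzer",
        "search_analyzer": "my_stop_analyzer",
        "search_quote_analyzer": "my_analyzer"
      }
    }
  }
}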

similarity

Which scoring algorithm or similarity should be used. Defaults to BM25.

term_vector

Whether term vectors should be stored for the field. Defaults to no.
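
For example, storing term vectors with positions and offsets makes the fast vector highlighter usable on the field (the content field name is only illustrative):

PUT my-index-000001
{
  "mappings": {
    "properties": {
      "content": {
        "type": "text",
        "term_vector": "with_positions_offsets"
      }
    }
  }
}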

meta

Metadata about the field.

fielddata mapping parameter

text fields are searchable by default, but by default are not available for aggregations, sorting, or scripting. If you try to sort, aggregate, or access values from a script on a text field, you will see this exception:

Fielddata is disabled on text fields by default. Set fielddata=true on your_field_name in order to load fielddata in memory by uninverting the inverted index. Note that this can however use significant memory.

Field data is the only way to access the analyzed tokens from a full text field in aggregations, sorting, or scripting. For example, a full text field like New York would get analyzed as new and york. To aggregate on these tokens requires field data.
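
The analyze API shows the tokens that a value would produce; for example, with the standard analyzer:

POST _analyze
{
  "analyzer": "standard",
  "text": "New York"
}

This returns the two tokens new and york.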

Before enabling fielddata

It usually doesn’t make sense to enable fielddata on text fields. Field data is stored in the heap with the field data cache because it is expensive to calculate. Calculating the field data can cause latency spikes, and increasing heap usage is a cause of cluster performance issues.

Most users who want to do more with text fields use multi-field mappings by having both a text field for full text searches, and an unanalyzed keyword field for aggregations, as follows:

PUT my-index-000001
{
  "mappings": {
    "properties": {
      "my_field": { 
        "type": "text",
        "fields": {
          "keyword": { 
            "type": "keyword"
          }
        }
      }
    }
  }
}

Use the my_field field for searches.

Use the my_field.keyword field for aggregations, sorting, or in scripts.
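
For example, a single search request can run a full text query on my_field and a terms aggregation on my_field.keyword (the query text and aggregation name below are only illustrative):

GET my-index-000001/_search
{
  "query": {
    "match": {
      "my_field": "quick brown fox"
    }
  },
  "aggs": {
    "top_values": {
      "terms": {
        "field": "my_field.keyword"
      }
    }
  }
}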

Enabling fielddata on text fields

You can enable fielddata on an existing text field using the PUT mapping API as follows:

PUT my-index-000001/_mapping
{
  "properties": {
    "my_field": { 
      "type":     "text",
      "fielddata": true
    }
  }
}

The mapping that you specify for my_field should consist of the existing mapping for that field, plus the fielddata parameter.

fielddata_frequency_filter mapping parameter

Fielddata filtering can be used to reduce the number of terms loaded into memory, and thus reduce memory usage. Terms can be filtered by frequency:

The frequency filter allows you to only load terms whose document frequency falls between a min and max value, which can be expressed as an absolute number (when the number is bigger than 1.0) or as a percentage (e.g. 0.01 is 1% and 1.0 is 100%). Frequency is calculated per segment. Percentages are based on the number of docs which have a value for the field, as opposed to all docs in the segment.

Small segments can be excluded completely by specifying the minimum number of docs that the segment should contain with min_segment_size:

PUT my-index-000001
{
  "mappings": {
    "properties": {
      "tag": {
        "type": "text",
        "fielddata": true,
        "fielddata_frequency_filter": {
          "min": 0.001,
          "max": 0.1,
          "min_segment_size": 500
        }
      }
    }
  }
}