Text type family

The text family includes the following field types:

  • text, the traditional field type for full-text content such as the body of an email or the description of a product.
  • match_only_text, a space-optimized variant of text that disables scoring and performs more slowly on queries that need positions. It is best suited for indexing log messages.

Text field type

A field to index full-text values, such as the body of an email or the description of a product. These fields are analyzed, that is, they are passed through an analyzer to convert the string into a list of individual terms before being indexed. The analysis process allows Elasticsearch to search for individual words within each full text field. Text fields are not used for sorting and are seldom used for aggregations (although the significant text aggregation is a notable exception).

text fields are best suited for unstructured but human-readable content. If you need to index unstructured machine-generated content, see Mapping unstructured content.

If you need to index structured content such as email addresses, hostnames, status codes, or tags, it is likely that you should rather use a keyword field.

Below is an example of a mapping for a text field:

response = client.indices.create(
  index: 'my-index-000001',
  body: {
    mappings: {
      properties: {
        full_name: {
          type: 'text'
        }
      }
    }
  }
)
puts response

PUT my-index-000001
{
  "mappings": {
    "properties": {
      "full_name": {
        "type":  "text"
      }
    }
  }
}

Use a field as both text and keyword

Sometimes it is useful to have both a full text (text) and a keyword (keyword) version of the same field: one for full text search and the other for aggregations and sorting. This can be achieved with multi-fields.
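
A minimal sketch of such a mapping, assuming an illustrative index name (my-index-000002) and field name (city) with a keyword sub-field named raw:

PUT my-index-000002
{
  "mappings": {
    "properties": {
      "city": {
        "type": "text",
        "fields": {
          "raw": {
            "type": "keyword"
          }
        }
      }
    }
  }
}

With a mapping like this, full-text queries target city, while aggregations and sorting target city.raw.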

Parameters for text fields

The following parameters are accepted by text fields (a combined example follows the list):

analyzer

The analyzer which should be used for the text field, both at index-time and at search-time (unless overridden by the search_analyzer). Defaults to the default index analyzer, or the standard analyzer.

eager_global_ordinals

Should global ordinals be loaded eagerly on refresh? Accepts true or false (default). Enabling this is a good idea on fields that are frequently used for (significant) terms aggregations.

fielddata

Can the field use in-memory fielddata for sorting, aggregations, or scripting? Accepts true or false (default).

fielddata_frequency_filter

Expert settings that let you decide which values to load in memory when fielddata is enabled. By default, all values are loaded.

fields

Multi-fields allow the same string value to be indexed in multiple ways for different purposes, such as one field for search and a multi-field for sorting and aggregations, or the same string value analyzed by different analyzers.

index

Should the field be searchable? Accepts true (default) or false.

index_options

What information should be stored in the index, for search and highlighting purposes. Defaults to positions.

index_prefixes

If enabled, term prefixes of between 2 and 5 characters are indexed into a separate field. This allows prefix searches to run more efficiently, at the expense of a larger index.

index_phrases

If enabled, two-term word combinations (shingles) are indexed into a separate field. This allows exact phrase queries (no slop) to run more efficiently, at the expense of a larger index. Note that this works best when stopwords are not removed, as phrases containing stopwords will not use the subsidiary field and will fall back to a standard phrase query. Accepts true or false (default).

norms

Whether field-length should be taken into account when scoring queries. Accepts true (default) or false.

position_increment_gap

The number of fake term positions that should be inserted between each element of an array of strings. Defaults to the position_increment_gap configured on the analyzer, which itself defaults to 100. 100 was chosen because it prevents phrase queries with reasonably large slops (less than 100) from matching terms across field values.

store

Whether the field value should be stored and retrievable separately from the _source field. Accepts true or false (default).

search_analyzer

The analyzer that should be used at search time on the text field. Defaults to the analyzer setting.

search_quote_analyzer

The analyzer that should be used at search time when a phrase is encountered. Defaults to the search_analyzer setting.

similarity

Which scoring algorithm or similarity should be used. Defaults to BM25.

term_vector

Whether term vectors should be stored for the field. Defaults to no.

meta

Metadata about the field.
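
As a combined illustration of several of these parameters, the sketch below configures one text field; the field name (title) and the specific values are illustrative choices, not recommendations:

PUT my-index-000001
{
  "mappings": {
    "properties": {
      "title": {
        "type": "text",
        "analyzer": "standard",
        "search_analyzer": "standard",
        "index_prefixes": {
          "min_chars": 2,
          "max_chars": 5
        },
        "index_phrases": true,
        "term_vector": "with_positions_offsets",
        "meta": {
          "purpose": "demo"
        }
      }
    }
  }
}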

Synthetic _source

Synthetic _source is Generally Available only for TSDB indices (indices that have index.mode set to time_series). For other indices synthetic _source is in technical preview. Features in technical preview may be changed or removed in a future release. Elastic will work to fix any issues, but features in technical preview are not subject to the support SLA of official GA features.

text fields support synthetic _source if they have a keyword sub-field that supports synthetic _source or if the text field sets store to true. In either case, the text field may not have copy_to.

If using a keyword sub-field, the values are sorted in the same way a keyword field’s values are sorted. By default, that means sorted with duplicates removed. So:

response = client.indices.create(
  index: 'idx',
  body: {
    mappings: {
      _source: {
        mode: 'synthetic'
      },
      properties: {
        text: {
          type: 'text',
          fields: {
            raw: {
              type: 'keyword'
            }
          }
        }
      }
    }
  }
)
puts response

response = client.index(
  index: 'idx',
  id: 1,
  body: {
    text: [
      'the quick brown fox',
      'the quick brown fox',
      'jumped over the lazy dog'
    ]
  }
)
puts response

PUT idx
{
  "mappings": {
    "_source": { "mode": "synthetic" },
    "properties": {
      "text": {
        "type": "text",
        "fields": {
          "raw": {
            "type": "keyword"
          }
        }
      }
    }
  }
}
PUT idx/_doc/1
{
  "text": [
    "the quick brown fox",
    "the quick brown fox",
    "jumped over the lazy dog"
  ]
}

Will become:

{
  "text": [
    "jumped over the lazy dog",
    "the quick brown fox"
  ]
}

Reordering text field values can have an effect on phrase and span queries. See the discussion about position_increment_gap for more detail. You can avoid this by making sure the slop parameter on the phrase queries is lower than the position_increment_gap. This is the default.
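
For example, a phrase query whose slop stays well below the 100-position gap will not match across the reordered values. The index and query text below come from the example above, and slop: 0 is simply the default made explicit:

GET idx/_search
{
  "query": {
    "match_phrase": {
      "text": {
        "query": "quick brown fox",
        "slop": 0
      }
    }
  }
}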

If the text field sets store to true then order and duplicates are preserved.

response = client.indices.create(
  index: 'idx',
  body: {
    mappings: {
      _source: {
        mode: 'synthetic'
      },
      properties: {
        text: {
          type: 'text',
          store: true
        }
      }
    }
  }
)
puts response

response = client.index(
  index: 'idx',
  id: 1,
  body: {
    text: [
      'the quick brown fox',
      'the quick brown fox',
      'jumped over the lazy dog'
    ]
  }
)
puts response

PUT idx
{
  "mappings": {
    "_source": { "mode": "synthetic" },
    "properties": {
      "text": { "type": "text", "store": true }
    }
  }
}
PUT idx/_doc/1
{
  "text": [
    "the quick brown fox",
    "the quick brown fox",
    "jumped over the lazy dog"
  ]
}

Will become:

{
  "text": [
    "the quick brown fox",
    "the quick brown fox",
    "jumped over the lazy dog"
  ]
}

fielddata mapping parameter

text fields are searchable by default, but by default are not available for aggregations, sorting, or scripting. If you try to sort, aggregate, or access values from a text field using a script, you’ll see an exception indicating that field data is disabled by default on text fields. To load field data in memory, set fielddata=true on your field.

Loading field data into memory can consume a significant amount of heap space.

Field data is the only way to access the analyzed tokens from a full text field in aggregations, sorting, or scripting. For example, a full text field like New York would get analyzed as new and york. To aggregate on these tokens requires field data.
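
As a sketch of what that looks like once fielddata is enabled (as described below), a terms aggregation runs directly on the analyzed tokens; the field name my_field and the aggregation name top_tokens are illustrative:

GET my-index-000001/_search
{
  "size": 0,
  "aggs": {
    "top_tokens": {
      "terms": {
        "field": "my_field"
      }
    }
  }
}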

Before enabling fielddata

It usually doesn’t make sense to enable fielddata on text fields. Field data is stored in the heap with the field data cache because it is expensive to calculate. Calculating the field data can cause latency spikes, and increasing heap usage is a cause of cluster performance issues.

Most users who want to do more with text fields use multi-field mappings by having both a text field for full text searches, and an unanalyzed keyword field for aggregations, as follows:

response = client.indices.create(
  index: 'my-index-000001',
  body: {
    mappings: {
      properties: {
        my_field: {
          type: 'text',
          fields: {
            keyword: {
              type: 'keyword'
            }
          }
        }
      }
    }
  }
)
puts response

res, err := es.Indices.Create(
	"my-index-000001",
	es.Indices.Create.WithBody(strings.NewReader(`{
	  "mappings": {
	    "properties": {
	      "my_field": {
	        "type": "text",
	        "fields": {
	          "keyword": {
	            "type": "keyword"
	          }
	        }
	      }
	    }
	  }
	}`)),
)
fmt.Println(res, err)

PUT my-index-000001
{
  "mappings": {
    "properties": {
      "my_field": { 
        "type": "text",
        "fields": {
          "keyword": { 
            "type": "keyword"
          }
        }
      }
    }
  }
}

Use the my_field field for searches.

Use the my_field.keyword field for aggregations, sorting, or in scripts.
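
The sketch below uses both at once, combining a full-text query on my_field with a terms aggregation and a sort on my_field.keyword; the query text and the aggregation name field_values are illustrative:

GET my-index-000001/_search
{
  "query": {
    "match": {
      "my_field": "elasticsearch"
    }
  },
  "aggs": {
    "field_values": {
      "terms": {
        "field": "my_field.keyword"
      }
    }
  },
  "sort": [
    {
      "my_field.keyword": "asc"
    }
  ]
}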

Enabling fielddata on text fields

You can enable fielddata on an existing text field using the update mapping API as follows:

response = client.indices.put_mapping(
  index: 'my-index-000001',
  body: {
    properties: {
      my_field: {
        type: 'text',
        fielddata: true
      }
    }
  }
)
puts response

res, err := es.Indices.PutMapping(
	[]string{"my-index-000001"},
	strings.NewReader(`{
	  "properties": {
	    "my_field": {
	      "type": "text",
	      "fielddata": true
	    }
	  }
	}`),
)
fmt.Println(res, err)

PUT my-index-000001/_mapping
{
  "properties": {
    "my_field": { 
      "type":     "text",
      "fielddata": true
    }
  }
}

The mapping that you specify for my_field should consist of the existing mapping for that field, plus the fielddata parameter.

fielddata_frequency_filter mapping parameter

Fielddata filtering can be used to reduce the number of terms loaded into memory, and thus reduce memory usage. Terms can be filtered by frequency:

The frequency filter allows you to load only terms whose document frequency falls between a min and max value, which can be expressed as an absolute number (when the number is bigger than 1.0) or as a percentage (e.g. 0.01 is 1% and 1.0 is 100%). Frequency is calculated per segment. Percentages are based on the number of docs which have a value for the field, as opposed to all docs in the segment.

Small segments can be excluded completely by specifying the minimum number of docs that the segment should contain with min_segment_size:

response = client.indices.create(
  index: 'my-index-000001',
  body: {
    mappings: {
      properties: {
        tag: {
          type: 'text',
          fielddata: true,
          fielddata_frequency_filter: {
            min: 0.001,
            max: 0.1,
            min_segment_size: 500
          }
        }
      }
    }
  }
)
puts response

res, err := es.Indices.Create(
	"my-index-000001",
	es.Indices.Create.WithBody(strings.NewReader(`{
	  "mappings": {
	    "properties": {
	      "tag": {
	        "type": "text",
	        "fielddata": true,
	        "fielddata_frequency_filter": {
	          "min": 0.001,
	          "max": 0.1,
	          "min_segment_size": 500
	        }
	      }
	    }
	  }
	}`)),
)
fmt.Println(res, err)

PUT my-index-000001
{
  "mappings": {
    "properties": {
      "tag": {
        "type": "text",
        "fielddata": true,
        "fielddata_frequency_filter": {
          "min": 0.001,
          "max": 0.1,
          "min_segment_size": 500
        }
      }
    }
  }
}

Match-only text field type

A variant of text that trades scoring and efficiency of positional queries for space efficiency. This field effectively stores data the same way as a text field that only indexes documents (index_options: docs) and disables norms (norms: false). Term queries perform as fast as, if not faster than, on text fields; however, queries that need positions, such as the match_phrase query, perform more slowly because they need to look at the _source document to verify whether a phrase matches. All queries return constant scores that are equal to 1.0.

Analysis is not configurable: text is always analyzed with the default analyzer (standard by default).

span queries are not supported with this field. Use interval queries instead, or the text field type if you absolutely need span queries.

Other than that, match_only_text supports the same queries as text. And like text, it does not support sorting and has only limited support for aggregations.

response = client.indices.create(
  index: 'logs',
  body: {
    mappings: {
      properties: {
        "@timestamp": {
          type: 'date'
        },
        message: {
          type: 'match_only_text'
        }
      }
    }
  }
)
puts response

PUT logs
{
  "mappings": {
    "properties": {
      "@timestamp": {
        "type": "date"
      },
      "message": {
        "type": "match_only_text"
      }
    }
  }
}
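
Queries then run against the message field as usual, though positional queries verify positions against the _source document and every hit scores a constant 1.0. The query text below is illustrative:

GET logs/_search
{
  "query": {
    "match_phrase": {
      "message": "Quick brown fox"
    }
  }
}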

Parameters for match-only text fields

The following mapping parameters are accepted:

fields

Multi-fields allow the same string value to be indexed in multiple ways for different purposes, such as one field for search and a multi-field for sorting and aggregations, or the same string value analyzed by different analyzers.

meta

Metadata about the field.