Tutorial: semantic search with semantic_text
This functionality is in beta and is subject to change. The design and code are less mature than official GA features and are being provided as-is with no warranties. Beta features are not subject to the support SLA of official GA features.
This tutorial shows you how to use the semantic text feature to perform semantic search on your data.
Semantic text simplifies the inference workflow by performing inference at ingestion time and providing sensible defaults automatically. You don't need to define model-related settings and parameters, or create inference ingest pipelines.
The recommended way to use semantic search in the Elastic Stack is to follow the semantic_text workflow.
When you need more control over indexing and query settings, you can still use the complete inference workflow (refer to this tutorial to review the process).
This tutorial uses the elser service for demonstration, but you can use any service and its supported models offered by the Inference API.
Requirements
To use the semantic_text field type, you must have an inference endpoint deployed in your cluster using the Create inference API.
Create the inference endpoint
Create an inference endpoint by using the Create inference API:
resp = client.inference.put(
    task_type="sparse_embedding",
    inference_id="my-elser-endpoint",
    inference_config={
        "service": "elser",
        "service_settings": {
            "adaptive_allocations": {
                "enabled": True,
                "min_number_of_allocations": 3,
                "max_number_of_allocations": 10
            },
            "num_threads": 1
        }
    },
)
print(resp)

const response = await client.inference.put({
  task_type: "sparse_embedding",
  inference_id: "my-elser-endpoint",
  inference_config: {
    service: "elser",
    service_settings: {
      adaptive_allocations: {
        enabled: true,
        min_number_of_allocations: 3,
        max_number_of_allocations: 10,
      },
      num_threads: 1,
    },
  },
});
console.log(response);

PUT _inference/sparse_embedding/my-elser-endpoint
{
  "service": "elser",
  "service_settings": {
    "adaptive_allocations": {
      "enabled": true,
      "min_number_of_allocations": 3,
      "max_number_of_allocations": 10
    },
    "num_threads": 1
  }
}
- The task type is sparse_embedding in the route, as the elser service creates sparse vector embeddings. The inference_id is my-elser-endpoint.
- The elser service is used in this example.
- This setting enables and configures adaptive allocations. Adaptive allocations make it possible for ELSER to automatically scale up or down resources based on the current load on the process.
You might see a 502 bad gateway error in the response when using the Kibana Console. This error usually just reflects a timeout while the model downloads in the background. You can check the download progress in the Machine Learning UI. If using the Python client, you can set the timeout parameter to a higher value.
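For example, the timeout can be raised when the Python client is constructed. This is a minimal sketch; the URL and credentials are placeholders for your own cluster details, not values from this tutorial.

```python
from elasticsearch import Elasticsearch

# Hypothetical connection details; replace with your own cluster URL and
# credentials. A higher request_timeout avoids spurious timeout errors
# while the ELSER model downloads in the background.
client = Elasticsearch(
    "http://localhost:9200",
    request_timeout=600,  # seconds
)
```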
Create the index mapping
You must create the mapping of the destination index - the index that will contain the embeddings that the inference endpoint generates from your input text. The destination index must have a field with the semantic_text field type to index the output of the inference endpoint.
resp = client.indices.create(
    index="semantic-embeddings",
    mappings={
        "properties": {
            "content": {
                "type": "semantic_text",
                "inference_id": "my-elser-endpoint"
            }
        }
    },
)
print(resp)

const response = await client.indices.create({
  index: "semantic-embeddings",
  mappings: {
    properties: {
      content: {
        type: "semantic_text",
        inference_id: "my-elser-endpoint",
      },
    },
  },
});
console.log(response);

PUT semantic-embeddings
{
  "mappings": {
    "properties": {
      "content": {
        "type": "semantic_text",
        "inference_id": "my-elser-endpoint"
      }
    }
  }
}
- The name of the field to contain the generated embeddings.
- The field that contains the embeddings is a semantic_text field.
- The inference_id is the ID of the inference endpoint created in the previous step. It will be used to generate the embeddings every time data is ingested into the content field.
If you’re using web crawlers or connectors to generate indices, you have to
update the index mappings for these indices to
include the semantic_text
field. Once the mapping is updated, you’ll need to run
a full web crawl or a full connector sync. This ensures that all existing
documents are reprocessed and updated with the new semantic embeddings,
enabling semantic search on the updated data.
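As a sketch, such a mapping update has the following shape. The index name search-my-connector below is hypothetical; the inference endpoint is the one created earlier in this tutorial.

```python
# Request body for a PUT mapping call that adds a semantic_text field to an
# existing crawler or connector index. "search-my-connector" is a
# hypothetical index name used only for illustration.
mapping_update = {
    "properties": {
        "content": {
            "type": "semantic_text",
            "inference_id": "my-elser-endpoint",
        }
    }
}

# With the Python client, this could be applied as:
# client.indices.put_mapping(
#     index="search-my-connector",
#     properties=mapping_update["properties"],
# )
```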
Load data
In this step, you load the data that you later use to create embeddings from.
Use the msmarco-passagetest2019-top1000
data set, which is a subset of the MS
MARCO Passage Ranking data set. It consists of 200 queries, each accompanied by
a list of relevant text passages. All unique passages, along with their IDs,
have been extracted from that data set and compiled into a
tsv file.
Download the file and upload it to your cluster using the Data Visualizer in the Machine Learning UI.

1. After your data is analyzed, click Override settings.
2. Under Edit field names, assign id to the first column and content to the second.
3. Click Apply, then Import.
4. Name the index test-data, and click Import.

After the upload is complete, you will see an index named test-data with 182,469 documents.
Reindex the data
Create the embeddings from the text by reindexing the data from the test-data index to the semantic-embeddings index. The data in the content field will be reindexed into the content semantic text field of the destination index. The reindexed data will be processed by the inference endpoint associated with the content semantic text field.
resp = client.reindex(
    wait_for_completion=False,
    source={
        "index": "test-data",
        "size": 10
    },
    dest={
        "index": "semantic-embeddings"
    },
)
print(resp)

const response = await client.reindex({
  wait_for_completion: "false",
  source: {
    index: "test-data",
    size: 10,
  },
  dest: {
    index: "semantic-embeddings",
  },
});
console.log(response);

POST _reindex?wait_for_completion=false
{
  "source": {
    "index": "test-data",
    "size": 10
  },
  "dest": {
    "index": "semantic-embeddings"
  }
}
The default batch size for reindexing is 1000. Setting size to a smaller value makes the reindexing progress update more frequently, which enables you to follow it closely and detect errors early.
The call returns a task ID to monitor the progress:
resp = client.tasks.get(
    task_id="<task_id>",
)
print(resp)

const response = await client.tasks.get({
  task_id: "<task_id>",
});
console.log(response);
GET _tasks/<task_id>
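A minimal sketch of checking that response in Python: the helper below only inspects the completed flag of the task API body, and the sample dictionaries are illustrative, trimmed stand-ins for real API output.

```python
import time

def reindex_finished(task_info: dict) -> bool:
    """Return True once the reindex task reported by GET _tasks/<task_id> is done."""
    return bool(task_info.get("completed", False))

# Illustrative, trimmed versions of a task API response:
still_running = {"completed": False, "task": {"status": {"created": 5000, "total": 182469}}}
finished = {"completed": True, "task": {"status": {"created": 182469, "total": 182469}}}

# In practice you would poll the task API, for example:
# while not reindex_finished(client.tasks.get(task_id=task_id)):
#     time.sleep(30)
```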
Reindexing large datasets can take a long time. You can test this workflow using only a subset of the dataset. Do this by cancelling the reindexing process, and only generating embeddings for the subset that was reindexed. The following API request will cancel the reindexing task:
resp = client.tasks.cancel(
    task_id="<task_id>",
)
print(resp)

const response = await client.tasks.cancel({
  task_id: "<task_id>",
});
console.log(response);
POST _tasks/<task_id>/_cancel
Semantic search
After the data set has been enriched with the embeddings, you can query the data using semantic search.
Provide the semantic_text field name and the query text in a semantic query type. The inference endpoint used to generate the embeddings for the semantic_text field will be used to process the query text.
resp = client.search(
    index="semantic-embeddings",
    query={
        "semantic": {
            "field": "content",
            "query": "How to avoid muscle soreness while running?"
        }
    },
)
print(resp)

const response = await client.search({
  index: "semantic-embeddings",
  query: {
    semantic: {
      field: "content",
      query: "How to avoid muscle soreness while running?",
    },
  },
});
console.log(response);

GET semantic-embeddings/_search
{
  "query": {
    "semantic": {
      "field": "content",
      "query": "How to avoid muscle soreness while running?"
    }
  }
}
As a result, you receive the top 10 documents that are closest in meaning to the query from the semantic-embeddings index:
"hits": [ { "_index": "semantic-embeddings", "_id": "Jy5065EBBFPLbFsdh_f9", "_score": 21.487484, "_source": { "id": 8836652, "content": { "text": "There are a few foods and food groups that will help to fight inflammation and delayed onset muscle soreness (both things that are inevitable after a long, hard workout) when you incorporate them into your postworkout eats, whether immediately after your run or at a meal later in the day. Advertisement. Advertisement.", "inference": { "inference_id": "my-elser-endpoint", "model_settings": { "task_type": "sparse_embedding" }, "chunks": [ { "text": "There are a few foods and food groups that will help to fight inflammation and delayed onset muscle soreness (both things that are inevitable after a long, hard workout) when you incorporate them into your postworkout eats, whether immediately after your run or at a meal later in the day. Advertisement. Advertisement.", "embeddings": { (...) } } ] } } } }, { "_index": "semantic-embeddings", "_id": "Ji5065EBBFPLbFsdh_f9", "_score": 18.211695, "_source": { "id": 8836651, "content": { "text": "During Your Workout. There are a few things you can do during your workout to help prevent muscle injury and soreness. According to personal trainer and writer for Iron Magazine, Marc David, doing warm-ups and cool-downs between sets can help keep muscle soreness to a minimum.", "inference": { "inference_id": "my-elser-endpoint", "model_settings": { "task_type": "sparse_embedding" }, "chunks": [ { "text": "During Your Workout. There are a few things you can do during your workout to help prevent muscle injury and soreness. According to personal trainer and writer for Iron Magazine, Marc David, doing warm-ups and cool-downs between sets can help keep muscle soreness to a minimum.", "embeddings": { (...) 
} } ] } } } }, { "_index": "semantic-embeddings", "_id": "Wi5065EBBFPLbFsdh_b9", "_score": 13.089405, "_source": { "id": 8800197, "content": { "text": "This is especially important if the soreness is due to a weightlifting routine. For this time period, do not exert more than around 50% of the level of effort (weight, distance and speed) that caused the muscle groups to be sore.", "inference": { "inference_id": "my-elser-endpoint", "model_settings": { "task_type": "sparse_embedding" }, "chunks": [ { "text": "This is especially important if the soreness is due to a weightlifting routine. For this time period, do not exert more than around 50% of the level of effort (weight, distance and speed) that caused the muscle groups to be sore.", "embeddings": { (...) } } ] } } } } ]
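If you consume the response in Python, the relevant fields can be pulled out of the hits with a small helper like the one below. The sample dictionary only mimics the shape of the response above, with texts trimmed and embeddings omitted.

```python
def top_passages(search_response: dict, k: int = 3) -> list:
    """Return (score, passage text) pairs for the top-k hits of a semantic query."""
    return [
        (hit["_score"], hit["_source"]["content"]["text"])
        for hit in search_response["hits"]["hits"][:k]
    ]

# Trimmed sample shaped like the search response shown above:
sample_response = {
    "hits": {
        "hits": [
            {"_score": 21.487484, "_source": {"content": {"text": "There are a few foods..."}}},
            {"_score": 18.211695, "_source": {"content": {"text": "During Your Workout..."}}},
        ]
    }
}
```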
Further examples and reading
- If you want to use semantic_text in hybrid search, refer to this notebook for a step-by-step guide.
- For more information on how to optimize your ELSER endpoints, refer to the ELSER recommendations section in the model documentation.
- To learn more about model autoscaling, refer to the trained model autoscaling page.