embeddingService
Embedding services convert text into embedding vectors. External embedding services can produce embeddings using various models, including large language models (LLMs). They can be used in addition to, or as a replacement for, the built-in Lingo4G label and document embeddings.
The following embeddingService:* stage types are available for use in analysis request JSONs:
embeddingService:ollama
  Computes embeddings using the Ollama project.
embeddingService:reference
  References an embeddingService:* component defined in the request or in the project's default components.
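For example, a request could reuse an embedding service defined among the project's default components. This is only a sketch: the component name defaultEmbedding and the use property are illustrative assumptions, not confirmed Lingo4G syntax.

```json
{
  "type": "embeddingService:reference",
  "use": "defaultEmbedding"
}
```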
embeddingService:ollama
Computes embeddings using the Ollama service running locally or remotely. Ollama must be initialized, and a language model capable of producing embeddings must be running. Inspect the output of Ollama's list command to see which models are installed.
{
"type": "embeddingService:ollama",
"url": "http://localhost:11434/api/embed"
}
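As a sketch of the wire format, the JSON body that a client POSTs to Ollama's /api/embed endpoint can be assembled as follows. The model name is an example, and the prompt prefixing mirrors the prompt property described below; this is an illustration, not the Lingo4G implementation.

```python
import json

def build_embed_request(model: str, texts: list[str], prompt: str = "") -> str:
    """Return the JSON body for a POST to Ollama's /api/embed endpoint.

    The "input" field carries the texts to embed; the optional prompt
    prefix is prepended to each text, mimicking the "prompt" property.
    """
    payload = {
        "model": model,
        "input": [prompt + text for text in texts],
    }
    return json.dumps(payload)

# Example payload for two input texts with no prompt prefix.
body = build_embed_request("nomic-embed-text", ["first text", "second text"])
```

The response from Ollama contains one embedding vector per input text.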
model
  The name of the model to use for computing embeddings. The model must be capable of producing embedding vectors.
prompt
  An optional prompt prefix, prepended to any text passed to Ollama. Embedding models rarely use prompts, so this is typically left empty.
url
  Ollama's service URL, if different from the default (http://localhost:11434/api/embeddings).
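Putting the properties together, a complete embeddingService:ollama definition might look as follows. The model name nomic-embed-text is only an example of an Ollama model that produces embeddings; substitute a model listed by your Ollama installation.

```json
{
  "type": "embeddingService:ollama",
  "url": "http://localhost:11434/api/embed",
  "model": "nomic-embed-text",
  "prompt": ""
}
```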
Consumers of embeddingService:*
The following stages and components take embeddingService:* as input:
| Stage or component | Property |
|---|---|
| vector:fromEmbeddingService | embeddingService |
| vectors:fromEmbeddingService | embeddingService |
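For illustration, a vectors:fromEmbeddingService stage could embed its input through an Ollama-backed service. The stage name documentVectors and the surrounding request structure are assumptions made for this sketch; only the embeddingService property and the component types are documented above.

```json
{
  "stages": {
    "documentVectors": {
      "type": "vectors:fromEmbeddingService",
      "embeddingService": {
        "type": "embeddingService:ollama",
        "model": "nomic-embed-text"
      }
    }
  }
}
```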