embeddingService

Embedding services convert text into an embedding vector. External embedding services can produce embeddings from various models, including large language models (LLMs). They can be used in addition to, or as a replacement for, the built-in Lingo4G label and document embeddings.

The following embedding​Service:​* stage types are available for use in analysis request JSONs:

embedding​Service:​ollama

Computes embeddings using the Ollama project.


embedding​Service:​reference

References an embeddingService:* component defined in the request or in the project's default components.
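As an illustrative sketch only: the use property naming a previously defined component is an assumption based on the general convention for *:reference components, not something stated in this section.

```json
{
  "type": "embeddingService:reference",
  "use": "myEmbeddingService"
}
```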


embedding​Service:​ollama

Computes embeddings using the Ollama project running locally or remotely. Ollama must be initialized, and a language model capable of producing embeddings must be running. Inspect the output of Ollama's list command to see which models are available.

{
  "type": "embeddingService:ollama",
  "url": "http://localhost:11434/api/embed"
}

model

Type
string
Default
undefined
Required
yes

The model name to use to compute embeddings. The model must be capable of producing embedding vectors.

prompt

Type
string
Default
undefined
Required
no

An optional prompt prefix prepended to any text passed to Ollama. Embedding models rarely use prompts, so this property is typically left empty.

url

Type
string
Default
"http://localhost:11434/api/embed"
Required
no

Ollama's embedding service URL, if different from the default (http://localhost:11434/api/embed).
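A complete configuration combining the properties above might look as follows. The nomic-embed-text model name is only an example; use any embedding-capable model listed by Ollama's list command:

```json
{
  "type": "embeddingService:ollama",
  "model": "nomic-embed-text",
  "url": "http://localhost:11434/api/embed"
}
```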

Consumers of embedding​Service:​*

The following stages and components take embedding​Service:​* as input:

Stage or component Property
vector:fromEmbeddingService
  • embeddingService
vectors:fromEmbeddingService
  • embeddingService
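Putting the pieces together, a consumer stage can embed the service definition inline in its embeddingService property. This is a hedged sketch: the stage name documentVectors and the model name nomic-embed-text are illustrative assumptions, not values taken from this section:

```json
{
  "documentVectors": {
    "type": "vectors:fromEmbeddingService",
    "embeddingService": {
      "type": "embeddingService:ollama",
      "model": "nomic-embed-text"
    }
  }
}
```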