THub Technical Documentation
🧬Embeddings

Embeddings create a numerical representation of textual data. This representation is useful because it can be used to find similar documents.

An embedding is a vector (list) of floating-point numbers. The distance between two vectors measures their relatedness. Small distances suggest high relatedness and large distances suggest low relatedness.
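
A common way to score relatedness is cosine similarity, which compares the angle between two embedding vectors; values near 1 indicate closely related texts:

$$\text{cosine\_similarity}(a, b) = \frac{a \cdot b}{\lVert a \rVert \, \lVert b \rVert}$$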

They are commonly used for:

• Search (where results are ranked by relevance to a query string)

• Clustering (where text strings are grouped by similarity)

• Recommendations (where items with related text strings are recommended)

• Anomaly detection (where outliers with little relatedness are identified)

• Diversity measurement (where similarity distributions are analyzed)

• Classification (where text strings are classified by their most similar label)

1) AWS Bedrock Embeddings

Use AWS Bedrock embedding models to generate embeddings for a given text.
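
Outside THub, you can sanity-check a Bedrock embedding model with the AWS CLI; a minimal sketch (the Titan model id is an example, and your region and credentials must already be configured):

```bash
# Invoke an embedding model via the Bedrock runtime (model id is an example)
aws bedrock-runtime invoke-model \
  --model-id amazon.titan-embed-text-v1 \
  --body '{"inputText": "Test"}' \
  --cli-binary-format raw-in-base64-out \
  output.json

# The embedding vector is in the "embedding" field of output.json
cat output.json
```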

2) Azure OpenAI Embeddings

Prerequisite

1. Log in or sign up to Azure

2. Create your Azure OpenAI resource and wait for approval (approximately 10 business days)

3. Your API key will be available at Azure OpenAI > click name_azure_openai > click Click here to manage keys

Setup

Azure OpenAI Embeddings

1. Click Go to Azure OpenAI Studio

2. Click Deployments

3. Click Create new deployment

4. Select the text-embedding-ada-002 model and click Create

5. Once the deployment is successfully created, note the following:

• Deployment name: text-embedding-ada-002

• Instance name: shown at the top right corner

THub

1. Embeddings > drag Azure OpenAI Embeddings node onto the canvas

2. Connect Credential > click Create New

3. Copy and paste each detail (API Key, Instance Name, Deployment Name, API Version) into the Azure OpenAI Embeddings credential

4. Voila 🎉, you have created an Azure OpenAI Embeddings node in THub
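
To verify the deployment outside THub, you can call the endpoint directly; a minimal sketch, where the instance name, deployment name, and api-version are placeholders that must match your own setup:

```bash
curl "https://YOUR_INSTANCE.openai.azure.com/openai/deployments/text-embedding-ada-002/embeddings?api-version=2023-05-15" \
  -H "Content-Type: application/json" \
  -H "api-key: $AZURE_OPENAI_API_KEY" \
  -d '{"input": "Test"}'
```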

3) Cohere Embeddings

Use the Cohere API to generate embeddings for a given text.
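
A minimal sketch of a direct call to Cohere's embed endpoint (the model name and input_type are example values):

```bash
curl https://api.cohere.ai/v1/embed \
  -H "Authorization: Bearer $COHERE_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "texts": ["Test"],
    "model": "embed-english-v3.0",
    "input_type": "search_document"
  }'
```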

4) Google GenerativeAI Embeddings

Use the Google Generative AI API to generate embeddings for a given text.
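
A minimal sketch of a direct call to the Generative Language API (the model name and API version are assumptions; check Google's current docs):

```bash
curl "https://generativelanguage.googleapis.com/v1beta/models/embedding-001:embedContent?key=$GOOGLE_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "models/embedding-001",
    "content": {"parts": [{"text": "Test"}]}
  }'
```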

5) Google VertexAI Embeddings

Use the Google Vertex AI API to generate embeddings for a given text.
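
A minimal sketch using a gcloud access token (the project, region, and model id are example values):

```bash
curl "https://us-central1-aiplatform.googleapis.com/v1/projects/YOUR_PROJECT/locations/us-central1/publishers/google/models/textembedding-gecko:predict" \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  -d '{"instances": [{"content": "Test"}]}'
```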

6) HuggingFace Inference Embeddings

Use the HuggingFace Inference API to generate embeddings for a given text.
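
A minimal sketch against the hosted feature-extraction pipeline (the model id is an example):

```bash
curl https://api-inference.huggingface.co/pipeline/feature-extraction/sentence-transformers/all-MiniLM-L6-v2 \
  -H "Authorization: Bearer $HF_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"inputs": "Test"}'
```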

7) LocalAI Embeddings

LocalAI Setup

LocalAI is a drop-in replacement REST API that's compatible with OpenAI API specifications for local inferencing. It allows you to run LLMs (and more) locally or on-prem with consumer-grade hardware, supporting multiple model families that are compatible with the ggml format.

To use LocalAI Embeddings within THub, follow the steps below:

1. git clone https://github.com/go-skynet/LocalAI

2. cd LocalAI

3. LocalAI provides an API endpoint to download/install the model. In this example, we are going to use the BERT Embeddings model, as shown in the sketch below:
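
A sketch of that download call, assuming the bert-embeddings entry from LocalAI's model gallery:

```bash
curl http://localhost:8080/models/apply \
  -H "Content-Type: application/json" \
  -d '{"id": "model-gallery@bert-embeddings"}'
```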

4. In the /models folder, you should be able to see the downloaded model:

5. You can now test the embeddings:

```bash
curl http://localhost:8080/v1/embeddings \
  -H "Content-Type: application/json" \
  -d '{
    "input": "Test",
    "model": "text-embedding-ada-002"
  }'
```

6. The response should look like this:
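
A sketch of the OpenAI-compatible response shape (the vector is truncated here; actual values will differ):

```json
{
  "object": "list",
  "data": [
    {
      "object": "embedding",
      "index": 0,
      "embedding": [0.0123, -0.0456]
    }
  ],
  "model": "text-embedding-ada-002"
}
```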

Setup

Drag and drop a new LocalAIEmbeddings component onto the canvas.

Fill in the fields:

• Base Path: The base URL from LocalAI, such as http://localhost:8080/v1

• Model Name: The model you want to use. Note that it must be inside the /models folder of the LocalAI directory. For instance: text-embedding-ada-002

That's it! For more information, refer to the LocalAI docs.

8) MistralAI Embeddings

Use the MistralAI API to generate embeddings for a given text.
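
A minimal sketch of a direct call to Mistral's embeddings endpoint (mistral-embed is Mistral's embedding model):

```bash
curl https://api.mistral.ai/v1/embeddings \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $MISTRAL_API_KEY" \
  -d '{
    "model": "mistral-embed",
    "input": ["Test"]
  }'
```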

9) Ollama Embeddings

Generate embeddings for a given text using an open-source model running on Ollama.
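
A minimal sketch against a local Ollama server (the model name is an example; pull it first):

```bash
# Pull an embedding model (example model name)
ollama pull nomic-embed-text

# Request an embedding from the local Ollama API
curl http://localhost:11434/api/embeddings \
  -d '{"model": "nomic-embed-text", "prompt": "Test"}'
```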

10) OpenAI Embeddings

Use the OpenAI API to generate embeddings for a given text.
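
A minimal sketch of a direct call to OpenAI's embeddings endpoint:

```bash
curl https://api.openai.com/v1/embeddings \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -d '{
    "input": "Test",
    "model": "text-embedding-ada-002"
  }'
```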

11) OpenAI Embeddings Custom

Use the OpenAI API to generate embeddings for a given text, with additional configuration options (for example, a custom base path or model name).

12) TogetherAI Embedding

Use TogetherAI embedding models to generate embeddings for a given text.
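
A minimal sketch against TogetherAI's OpenAI-compatible embeddings endpoint (the model id is an example):

```bash
curl https://api.together.xyz/v1/embeddings \
  -H "Authorization: Bearer $TOGETHER_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "togethercomputer/m2-bert-80M-8k-retrieval",
    "input": "Test"
  }'
```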

13) VoyageAI Embeddings

Use the Voyage AI API to generate embeddings for a given text.
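
A minimal sketch of a direct call to Voyage AI's embeddings endpoint (the model name is an example):

```bash
curl https://api.voyageai.com/v1/embeddings \
  -H "Authorization: Bearer $VOYAGE_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "input": ["Test"],
    "model": "voyage-2"
  }'
```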

14) IBM WatsonX Embedding Node

The IBM WatsonX Embedding node integrates IBM's WatsonX.ai platform into THub, enabling users to generate vector representations (embeddings) of text data. These embeddings are essential for tasks like semantic search, document comparison, and retrieval-augmented generation (RAG).

Key Features:

  • Supported Models: Includes models such as granite-embedding-107m-multilingual, granite-embedding-278m-multilingual, slate-30m-english-rtrvr, slate-125m-english-rtrvr, all-minilm-l6-v2, all-minilm-l12-v2, and multilingual-e5-large.

  • Use Cases: Ideal for vectorizing text for semantic search, reranking passages, integrating with AutoAI for RAG workflows, and grounding prompts in Prompt Lab.

  • Configuration Requirements: To set up this node in THub, users need to provide the following (see the sample request after this list):

    • WatsonX.ai URL

    • Project ID

    • API Key

    • Model ID

15) Jina Embedding Node

The Jina Embedding node connects THub to Jina AI's embedding models, facilitating the transformation of text and code into high-dimensional vectors suitable for various AI tasks.

Key Features:

  • Model Variants: Jina offers models like jina-embeddings-v2-base-code, which is multilingual and trained on a vast corpus of code and documentation.

  • Performance: The jina-embeddings-v3 model, with 570 million parameters, supports up to 8192 tokens and includes task-specific LoRA adapters for enhanced performance in retrieval, clustering, classification, and text matching tasks.

  • Integration: Recent updates have introduced features like late chunking support, enhancing the flexibility and efficiency of embedding operations within THub.

Use Cases: Suitable for developers and organizations aiming to implement advanced search functionalities, code understanding, or multilingual support in their AI applications.
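
A minimal sketch of a direct call to Jina's embeddings endpoint (the model name is an example):

```bash
curl https://api.jina.ai/v1/embeddings \
  -H "Authorization: Bearer $JINA_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "jina-embeddings-v3",
    "input": ["Test"]
  }'
```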