# 🗃️Vector Stores

#### 1) AstraDB

**Setup**

1\.     Register an account on [AstraDB](https://astra.datastax.com/)

2\.     Log in to the portal and create a database

<figure><img src="https://1720595571-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxWXmt1Z68dgle5JORrEw%2Fuploads%2FwEDRGWYCAdZNOMfsiyIS%2Fimage.png?alt=media&#x26;token=7c3191d7-16a2-4a63-bb43-92e72dac6fb2" alt=""><figcaption></figcaption></figure>

3. Choose Serverless (Vector), fill in the Database name, Provider, and Region

<figure><img src="https://1720595571-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxWXmt1Z68dgle5JORrEw%2Fuploads%2F8mV4NVDASajoiyiL05hI%2Fimage.png?alt=media&#x26;token=9f66c376-d330-4b8d-9a5f-380abde1ab34" alt=""><figcaption></figcaption></figure>

4. After the database has been set up, grab the API Endpoint and generate an Application Token

<figure><img src="https://1720595571-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxWXmt1Z68dgle5JORrEw%2Fuploads%2FpWYbOJo8U4RFfIsvbv9P%2Fimage.png?alt=media&#x26;token=1e3f9b27-eda5-421d-89ce-450a71ed3765" alt=""><figcaption></figcaption></figure>

5. Create a new collection, selecting the desired dimension and similarity metric:

<figure><img src="https://1720595571-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxWXmt1Z68dgle5JORrEw%2Fuploads%2F6uAAZGbhVPTmfV3NuPSa%2Fimage.png?alt=media&#x26;token=b9ab1fac-a436-418c-946b-defeec8b2137" alt=""><figcaption></figcaption></figure>

6\.     Back on the THub canvas, drag and drop an Astra node. Click **Create New** from the Credentials dropdown:

<figure><img src="https://1720595571-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxWXmt1Z68dgle5JORrEw%2Fuploads%2FiLmdBEjGN8P1ZZr4oECP%2FScreenshot%202024-07-09%20112552.png?alt=media&#x26;token=8e8d7148-4023-4d36-97b7-51060f581ddf" alt=""><figcaption></figcaption></figure>

7. Specify the API Endpoint and Application Token:

<figure><img src="https://1720595571-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxWXmt1Z68dgle5JORrEw%2Fuploads%2FTbnYGZY9mayIWup5qbNx%2Fimage.png?alt=media&#x26;token=d9c01104-a722-4ae3-ae80-e1e6a0fad40b" alt=""><figcaption></figcaption></figure>

8. You can now upsert data to AstraDB

<figure><img src="https://1720595571-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxWXmt1Z68dgle5JORrEw%2Fuploads%2FJqodldtk3Zb4cUHnQ6Ne%2Fimage.png?alt=media&#x26;token=294d6170-90cd-4b45-98f1-656b9a9be069" alt=""><figcaption></figcaption></figure>

9. Navigate back to the Astra portal and open your collection; you will see all the data that has been upserted:

<figure><img src="https://1720595571-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxWXmt1Z68dgle5JORrEw%2Fuploads%2Fdfc7e70BNmNB8V78fTrU%2Fimage.png?alt=media&#x26;token=59eabbd6-acd7-4539-92a0-ab07ac04351f" alt=""><figcaption></figcaption></figure>

10. Start querying!
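Outside of THub, you can also query the collection directly over Astra's JSON Data API. Below is a minimal sketch that only builds the request; the `default_keyspace` name and the collection name are assumptions, so substitute your own values:

```python
import json
from urllib.request import Request

def astra_find_request(api_endpoint, collection, query_vector, token, limit=5):
    # Vector-similarity "find" command against Astra's JSON Data API.
    # "default_keyspace" is an assumption; use your database's keyspace.
    url = f"{api_endpoint.rstrip('/')}/api/json/v1/default_keyspace/{collection}"
    body = {"find": {"sort": {"$vector": query_vector},
                     "options": {"limit": limit}}}
    return Request(url, data=json.dumps(body).encode(),
                   headers={"Token": token, "Content-Type": "application/json"},
                   method="POST")
```

Send the returned request with `urllib.request.urlopen` and parse the JSON response to see the nearest documents.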

#### 2) Chroma

**Prerequisite**

1\.     Download & install [Docker](https://www.docker.com/) and [Git](https://git-scm.com/)

2\.     Clone [Chroma's repository](https://github.com/chroma-core/chroma) with your terminal

<figure><img src="https://1720595571-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxWXmt1Z68dgle5JORrEw%2Fuploads%2FJHTlHN05v9tk1dUAdvGR%2Fimage.png?alt=media&#x26;token=c89d94b7-aa14-4da3-92d1-854d9aece720" alt=""><figcaption></figcaption></figure>

3\.     Change directory into the cloned Chroma folder

<figure><img src="https://1720595571-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxWXmt1Z68dgle5JORrEw%2Fuploads%2FFX7U4Im4W48QeCILx8W4%2Fimage.png?alt=media&#x26;token=faee7cd4-606c-4ef4-b078-d86c138e1f56" alt=""><figcaption></figcaption></figure>

4\.     Run Docker Compose to build the Chroma image and container

<figure><img src="https://1720595571-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxWXmt1Z68dgle5JORrEw%2Fuploads%2FeHeCa9VCIfNCCzwC8g6K%2Fimage.png?alt=media&#x26;token=e4563301-81c3-452c-bffb-92e643df7465" alt=""><figcaption></figcaption></figure>

If successful, you will see the Docker containers spun up:

<figure><img src="https://1720595571-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxWXmt1Z68dgle5JORrEw%2Fuploads%2FBYqDeTxBtcntTOimW9Qi%2Fimage.png?alt=media&#x26;token=32192b41-da2b-4905-b479-57afc61ccc47" alt=""><figcaption></figcaption></figure>
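To confirm the server is reachable before wiring it into THub, you can hit Chroma's heartbeat endpoint. This sketch assumes the default port 8000; the route shown is the v1 path, and newer Chroma versions may expose `/api/v2/heartbeat` instead:

```python
from urllib.request import urlopen
from urllib.error import URLError

def heartbeat_url(base):
    # Liveness route; adjust to /api/v2/heartbeat on newer Chroma versions
    return f"{base.rstrip('/')}/api/v1/heartbeat"

def chroma_is_up(base="http://localhost:8000"):
    try:
        with urlopen(heartbeat_url(base), timeout=5) as resp:
            return resp.status == 200
    except (URLError, OSError):
        return False
```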

**Setup**

<figure><img src="https://1720595571-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxWXmt1Z68dgle5JORrEw%2Fuploads%2FtGrGozfjuZsvWuhJ3T3P%2Fimage.png?alt=media&#x26;token=edb8af9a-9e23-42c3-8929-b1cf1e99b92b" alt=""><figcaption></figcaption></figure>

<figure><img src="https://1720595571-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxWXmt1Z68dgle5JORrEw%2Fuploads%2FLfkmNDyenp4wfMo8irMz%2FScreenshot%202024-07-09%20112611.png?alt=media&#x26;token=14857cfb-87d0-4bda-80ce-bf2b756cb888" alt=""><figcaption></figcaption></figure>

**Additional Setup**

1\. If you are running both THub and Chroma on Docker, there are additional steps involved.

2\. Open `docker-compose.yml` in THub:

```bash
cd THub/docker
```

3\. Modify the file so that the THub container can reach the Chroma container.

4\. Spin up the THub Docker image

<figure><img src="https://1720595571-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxWXmt1Z68dgle5JORrEw%2Fuploads%2Fgqe8cffUubDQ8dSK5U8f%2Fimage.png?alt=media&#x26;token=db431e85-c633-4003-8fbd-ae95845038ac" alt=""><figcaption></figcaption></figure>

5\. For the Chroma URL: on Windows and macOS, specify [http://host.docker.internal:8000](http://host.docker.internal:8000/). On Linux-based systems, use the default Docker gateway [http://172.17.0.1:8000](http://172.17.0.1:8000/), since `host.docker.internal` is not available.
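The OS-dependent choice above can be captured in a small helper. This is just a sketch of the rule, assuming THub runs on the same Docker host as Chroma:

```python
import platform

def chroma_url(port=8000):
    # host.docker.internal resolves on Docker Desktop (Windows/macOS);
    # plain Linux falls back to the default bridge gateway address
    if platform.system() in ("Windows", "Darwin"):
        return f"http://host.docker.internal:{port}"
    return f"http://172.17.0.1:{port}"
```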

#### 3) Elasticsearch

**Prerequisite**

1\.     You can use the [official Docker image](https://www.elastic.co/guide/en/elasticsearch/reference/current/docker.html) to get started, or you can use [Elastic Cloud](https://www.elastic.co/cloud/), Elastic's official cloud service. This guide uses the cloud version.

2\.     [Register](https://cloud.elastic.co/registration) an account or [login](https://cloud.elastic.co/login) with existing account on Elastic cloud.

<figure><img src="https://1720595571-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxWXmt1Z68dgle5JORrEw%2Fuploads%2FPfk4jJM60WMAWAqxWlPX%2Fimage.png?alt=media&#x26;token=b744db43-dbbf-4a5d-ae73-5b5169a75649" alt=""><figcaption></figcaption></figure>

3\. Click **Create deployment**. Then, name your deployment, and choose the provider.

<figure><img src="https://1720595571-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxWXmt1Z68dgle5JORrEw%2Fuploads%2F1MR8bF9DLd3SrFrgkMXu%2Fimage.png?alt=media&#x26;token=f1b69aa8-3643-4639-a01c-7f447dff09e0" alt=""><figcaption></figcaption></figure>

4\. After the deployment is finished, you should see the setup guides shown below. Click the **Set up vector search** option.

<figure><img src="https://1720595571-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxWXmt1Z68dgle5JORrEw%2Fuploads%2FtMuFCSC5ih47gx9aZRq7%2Fimage.png?alt=media&#x26;token=fb5e4db7-343a-449e-8b25-bfc9a670af23" alt=""><figcaption></figcaption></figure>

5\. You should now see the **Getting started** page for **Vector Search**.

<figure><img src="https://1720595571-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxWXmt1Z68dgle5JORrEw%2Fuploads%2F5e8bxWnJ9zBXVhP0mElK%2Fimage.png?alt=media&#x26;token=edd68b2d-add7-43ea-8c65-7a119980661e" alt=""><figcaption></figcaption></figure>

6\. On the left-hand sidebar, click **Indices**, then **Create a new index**.

<figure><img src="https://1720595571-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxWXmt1Z68dgle5JORrEw%2Fuploads%2FUwxFMEy08wXCVygUlXeT%2Fimage.png?alt=media&#x26;token=14a08a23-0433-418e-bcd0-1d939fbeca9b" alt=""><figcaption></figcaption></figure>

7\. Select the **API** ingestion method

<figure><img src="https://1720595571-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxWXmt1Z68dgle5JORrEw%2Fuploads%2Fmr5tqEcKNllf03P5KMQl%2Fimage.png?alt=media&#x26;token=f0b5640a-f823-4d1c-8ce9-c41729fd2c5d" alt=""><figcaption></figcaption></figure>

8\. Name your search index, then click **Create Index**

9\. After the index has been created, generate a new API key, and take note of both the generated API key and the URL

**Setup**

1\. Add a new **Elasticsearch** node on canvas and fill in the **Index Name**

<figure><img src="https://1720595571-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxWXmt1Z68dgle5JORrEw%2Fuploads%2FL9kTWArr8DxhM6vyuZO0%2FScreenshot%202024-07-09%20112620.png?alt=media&#x26;token=0f4698bf-6af8-44a8-b091-ccad76b119b1" alt=""><figcaption></figcaption></figure>

2\. Add new credential via **Elasticsearch API**

3\. Fill in the fields with the URL and API key from Elasticsearch

4\. After the credential has been created successfully, you can start upserting data

<figure><img src="https://1720595571-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxWXmt1Z68dgle5JORrEw%2Fuploads%2FAL5VcAjvDJA8iOmaMIBM%2Fimage.png?alt=media&#x26;token=fc5a2373-e2b7-4242-9002-c485c4151e3f" alt=""><figcaption></figcaption></figure>

<figure><img src="https://1720595571-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxWXmt1Z68dgle5JORrEw%2Fuploads%2FNYUlnjqLlt59MWHL98MY%2Fimage.png?alt=media&#x26;token=195383a0-f3c1-4136-bc3a-ddf011815efa" alt=""><figcaption></figcaption></figure>

5\. After the data has been upserted successfully, you can verify it from the Elastic dashboard:

<figure><img src="https://1720595571-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxWXmt1Z68dgle5JORrEw%2Fuploads%2F6WfQ6FiapkyV9EW3QBHq%2Fimage.png?alt=media&#x26;token=54625ae3-a033-474c-a9a8-be2eeac001e6" alt=""><figcaption></figcaption></figure>
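You can also verify the document count over Elasticsearch's REST `_count` API using the API key generated earlier. The sketch below only builds the authenticated request; the index name is a placeholder:

```python
from urllib.request import Request

def count_request(es_url, index, api_key):
    # Elasticsearch accepts "Authorization: ApiKey <key>" for API-key auth
    return Request(f"{es_url.rstrip('/')}/{index}/_count",
                   headers={"Authorization": f"ApiKey {api_key}"})
```

Send it with `urllib.request.urlopen` and read the `"count"` field of the JSON response.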

6\. Voila! You can now start asking questions in the chat

<figure><img src="https://1720595571-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxWXmt1Z68dgle5JORrEw%2Fuploads%2FIsbdjoAXEzwkjulmKm7R%2Fimage.png?alt=media&#x26;token=69359716-ad39-4fd3-bf07-140c826a5eb4" alt=""><figcaption></figcaption></figure>

#### 4) Faiss

Upsert embedded data and perform similarity search upon query using the Faiss library from Meta.

<figure><img src="https://1720595571-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxWXmt1Z68dgle5JORrEw%2Fuploads%2Fv9FdhsChqLGIhVXPpYOH%2FScreenshot%202024-07-09%20112629.png?alt=media&#x26;token=69686cc2-d4a7-4a50-b188-a2e548800ca3" alt=""><figcaption></figcaption></figure>

#### 5) In-Memory Vector Store

In-memory vector store that stores embeddings and performs an exact, linear search for the most similar embeddings.
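The "exact, linear search" behavior can be sketched in a few lines: score every stored vector against the query with cosine similarity and take the top k. The document names and vectors here are toy data for illustration:

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def nearest(query, store, k=2):
    # Exact, linear scan: score every stored vector, no index structure
    ranked = sorted(store, key=lambda item: cosine(query, item[1]), reverse=True)
    return [doc for doc, _ in ranked[:k]]

store = [("banana", [0.9, 0.1]),
         ("apple", [0.8, 0.2]),
         ("car", [0.1, 0.9])]
print(nearest([1.0, 0.0], store))  # -> ['banana', 'apple']
```

This is simple and accurate, but cost grows linearly with the number of stored embeddings, which is why the larger stores on this page use index structures instead.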


<figure><img src="https://1720595571-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxWXmt1Z68dgle5JORrEw%2Fuploads%2FPjHO0lMO3wPnfjzV4sF5%2FScreenshot%202024-07-09%20112639.png?alt=media&#x26;token=def1a8e0-3f42-4dc3-89b3-56394619fc17" alt=""><figcaption></figcaption></figure>

#### 6) Milvus

Upsert embedded data and perform similarity search upon query using Milvus, the world's most advanced open-source vector database.

<figure><img src="https://1720595571-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxWXmt1Z68dgle5JORrEw%2Fuploads%2Fj8qSYpppXceXZIlJT2By%2FScreenshot%202024-07-09%20112649.png?alt=media&#x26;token=c2d4b875-7e1d-4c70-b5d9-3ecc5b775c9c" alt=""><figcaption></figcaption></figure>

#### 7) MongoDB Atlas

Upsert embedded data and perform similarity or MMR search upon query using MongoDB Atlas, a managed cloud MongoDB database.

<figure><img src="https://1720595571-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxWXmt1Z68dgle5JORrEw%2Fuploads%2Fufm40rKTqxLaTQbjKdsO%2FScreenshot%202024-07-09%20112703.png?alt=media&#x26;token=eff9a05f-f9d0-4c19-8a0b-d49f23e4aa03" alt=""><figcaption></figcaption></figure>

#### 8) OpenSearch

Upsert embedded data and perform similarity search upon query using OpenSearch, an open-source, all-in-one vector database.

<figure><img src="https://1720595571-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxWXmt1Z68dgle5JORrEw%2Fuploads%2FeoehGBPxeUJd6hY9DoTZ%2FScreenshot%202024-07-09%20112711.png?alt=media&#x26;token=ffa389ce-8e94-444d-858a-81a6d948b23c" alt=""><figcaption></figcaption></figure>

#### 9) Pinecone

**Prerequisite**

1\.     Register an account for [Pinecone](https://app.pinecone.io/)

2\.     Click **Create index**

<figure><img src="https://1720595571-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxWXmt1Z68dgle5JORrEw%2Fuploads%2FgFm0qVeKNIvJKwYZsieb%2Fimage.png?alt=media&#x26;token=4dbb44ce-a6ed-47d3-b975-1896c022a5bc" alt=""><figcaption></figcaption></figure>

3\.     Fill in the required fields:

* **Index Name**: name of the index to be created (e.g. "THub-demo")
* **Dimensions**: size of the vectors to be inserted into the index (e.g. 1536)

4\.     Click **Create Index**

**Setup**

1\. Get/Create your API Key

2\. Add a new **Pinecone** node to canvas and fill in the parameters:

* Pinecone Index
* Pinecone Namespace (optional)

<figure><img src="https://1720595571-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxWXmt1Z68dgle5JORrEw%2Fuploads%2FxN8CcqEuupoxxRSN7Nun%2FScreenshot%202024-07-09%20112721.png?alt=media&#x26;token=265a2220-7173-4dd4-b8b2-1e96e044ac60" alt=""><figcaption></figcaption></figure>

3\.     Create a new Pinecone credential and fill in the **API Key**

4\.     Add additional nodes to the canvas and start the upsert process

* **Document** can be connected with any node under the [**Document Loader**](https://docs.flowiseai.com/integrations/langchain/document-loaders) category
* **Embeddings** can be connected with any node under the [**Embeddings**](https://docs.flowiseai.com/integrations/langchain/embeddings) category


<figure><img src="https://1720595571-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxWXmt1Z68dgle5JORrEw%2Fuploads%2FmciIOfyDlV2kCWuOwYcX%2Fimage.png?alt=media&#x26;token=8241a369-b03c-4c60-89f0-976f47acb85b" alt=""><figcaption></figcaption></figure>

5\. Verify from the [Pinecone dashboard](https://app.pinecone.io/) that the data has been successfully upserted:

<figure><img src="https://1720595571-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxWXmt1Z68dgle5JORrEw%2Fuploads%2F0NxhOxh5CWUE5OLy4UTO%2Fimage.png?alt=media&#x26;token=1d52553d-4d17-4f8c-b361-770996cdf23f" alt=""><figcaption></figcaption></figure>
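Besides the dashboard, vector counts can be checked through Pinecone's data-plane REST API. The sketch below only builds the request; the index host (copied from the Pinecone console) is a placeholder, and sending the request requires your real API key:

```python
from urllib.request import Request

def index_stats_request(index_host, api_key):
    # Pinecone's data-plane route for index statistics (vector counts
    # per namespace). index_host is the host shown in the console.
    return Request(f"https://{index_host}/describe_index_stats",
                   data=b"{}",
                   headers={"Api-Key": api_key,
                            "Content-Type": "application/json"},
                   method="POST")
```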

#### 10) Postgres

Upsert embedded data and perform similarity search upon query using pgvector on Postgres.

<figure><img src="https://1720595571-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxWXmt1Z68dgle5JORrEw%2Fuploads%2FdvlDOV7wiPOXOzCHGR4w%2FScreenshot%202024-07-09%20112733.png?alt=media&#x26;token=2e44f692-e262-4789-ab7d-e10177f1eb76" alt=""><figcaption></figcaption></figure>

#### 11) Qdrant

**Prerequisites**

A [locally running instance of Qdrant](https://qdrant.tech/documentation/quick-start/) or a Qdrant cloud instance.

To get a Qdrant cloud instance:

1\.     Head to the Clusters section of the [Cloud Dashboard](https://cloud.qdrant.io/overview).

2\.     Select **Clusters** and then click **+ Create**.

<figure><img src="https://1720595571-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxWXmt1Z68dgle5JORrEw%2Fuploads%2FxjbBunzLCXleHOQJ24jI%2Fimage.png?alt=media&#x26;token=42687628-fb29-48a2-8dc6-8c60ffdd6233" alt=""><figcaption></figcaption></figure>

3\.     Choose your cluster configurations and region.

4\.     Hit **Create** to provision your cluster.

**Setup**

1\.     Get/Create your **API Key** from the **Data Access Control** section of the [Cloud Dashboard](https://cloud.qdrant.io/overview).

2\.     Add a new **Qdrant** node on canvas.

3\.     Create new Qdrant credential using the API Key

<figure><img src="https://1720595571-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxWXmt1Z68dgle5JORrEw%2Fuploads%2Fa2ZALk4bDMNLoFk2a29E%2Fimage.png?alt=media&#x26;token=d1733359-a9fd-4239-ba8a-c550e9af5f48" alt=""><figcaption></figcaption></figure>

4\.     Enter the required info into the **Qdrant** node:

* Qdrant server URL
* Collection name

<figure><img src="https://1720595571-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxWXmt1Z68dgle5JORrEw%2Fuploads%2F4UBR3jOw9EfoIbmTClYC%2FScreenshot%202024-07-09%20112746.png?alt=media&#x26;token=9f5eef13-e2b1-4d39-ae08-baa32dd86581" alt=""><figcaption></figcaption></figure>

5\.     **Document** input can be connected with any node under the [**Document Loader**](https://docs.flowiseai.com/integrations/langchain/document-loaders) category.

6\.     **Embeddings** input can be connected with any node under the [**Embeddings**](https://docs.flowiseai.com/integrations/langchain/embeddings) category.

**Filtering**

Let's say you have different documents upserted, each specified with a unique value under the metadata key `{source}`

<figure><img src="https://1720595571-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxWXmt1Z68dgle5JORrEw%2Fuploads%2F2XMacJs1Faif6wd26KWG%2Fimage.png?alt=media&#x26;token=22002455-1146-4f8d-a882-de6fcccfb7dc" alt=""><figcaption></figcaption></figure>


<figure><img src="https://1720595571-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxWXmt1Z68dgle5JORrEw%2Fuploads%2FXsIEgUpqNhAmBHVdk4B0%2Fimage.png?alt=media&#x26;token=0b2c9485-dd08-406e-9930-ac06a4e0b872" alt=""><figcaption></figcaption></figure>

Then, you want to filter by it. Qdrant supports the following [syntax](https://qdrant.tech/documentation/concepts/filtering/#nested-key) for filtering:

**UI**

<figure><img src="https://1720595571-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxWXmt1Z68dgle5JORrEw%2Fuploads%2FyNXWIXOznfY490ADSCg5%2Fimage.png?alt=media&#x26;token=fd27110c-9e31-4b60-a768-0bb39ac02cec" alt=""><figcaption></figcaption></figure>

**API**

<figure><img src="https://1720595571-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxWXmt1Z68dgle5JORrEw%2Fuploads%2FY8Csmlo2ILjMy4KLQStn%2Fimage.png?alt=media&#x26;token=3475a293-7331-402c-ad34-99a0fb25ff2d" alt=""><figcaption></figcaption></figure>
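As a concrete sketch, the API-style filter can be built as a plain dictionary before being sent to the Qdrant node or REST API. Note that documents upserted through the flow typically nest custom keys under `metadata`, so the `metadata.source` key name here is an assumption to adapt to your payload:

```python
import json

def source_filter(value):
    # Qdrant "must/match" condition on a payload key; the nested
    # "metadata.source" key name assumes upserts done through the flow
    return {"filter": {"must": [{"key": "metadata.source",
                                 "match": {"value": value}}]}}

payload = source_filter("apple")
print(json.dumps(payload, indent=2))
```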

#### 12) Redis

**Prerequisite**

Spin up a Redis Stack server using Docker:

```bash
docker run -d --name redis-stack-server -p 6379:6379 redis/redis-stack-server:latest
```
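Once the container is running, a quick way to confirm Redis is reachable on the mapped port is a raw `PING` over the wire protocol. This sketch assumes the default `localhost:6379` mapping from the command above:

```python
import socket

def redis_ping(host="localhost", port=6379, timeout=3.0):
    # Send a RESP inline PING command and check for the "+PONG" reply
    try:
        with socket.create_connection((host, port), timeout=timeout) as s:
            s.sendall(b"PING\r\n")
            return s.recv(16).startswith(b"+PONG")
    except OSError:
        return False
```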

**Setup**

1\.     Add a new **Redis** node on canvas.

2\.     Create new Redis credential.

<figure><img src="https://1720595571-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxWXmt1Z68dgle5JORrEw%2Fuploads%2FN4EfDMhxxBQ8bihhj0C8%2FScreenshot%202024-07-09%20112754.png?alt=media&#x26;token=3f2b9f90-d88c-4254-a1fd-5a34aae696c8" alt=""><figcaption></figcaption></figure>

3. Select the type of Redis credential. Choose **Redis API** if you have a username and password; otherwise, choose **Redis URL**:

<figure><img src="https://1720595571-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxWXmt1Z68dgle5JORrEw%2Fuploads%2F98urqZvfkZTokBbTAqnj%2Fimage.png?alt=media&#x26;token=f89b0540-2044-417a-9877-0f14f321e84c" alt=""><figcaption></figcaption></figure>

4. Fill in the URL:

<figure><img src="https://1720595571-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxWXmt1Z68dgle5JORrEw%2Fuploads%2FKCSykjK04fT6bx8JEXIT%2Fimage.png?alt=media&#x26;token=466b0be0-ee1e-4fdf-a4de-2a4a791c5a53" alt=""><figcaption></figcaption></figure>

5. Now you can start upserting data with Redis:

<figure><img src="https://1720595571-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxWXmt1Z68dgle5JORrEw%2Fuploads%2FoSG0Jx3Tzfp2l7c4xEDS%2Fimage.png?alt=media&#x26;token=ebf2197f-838e-4cf6-b260-38de205d9606" alt=""><figcaption></figcaption></figure>

6. Navigate to the Redis Insight portal and open your database; you will see all the data that has been upserted:

<figure><img src="https://1720595571-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxWXmt1Z68dgle5JORrEw%2Fuploads%2F6x8oP47nuJQgSxvInfeK%2Fimage.png?alt=media&#x26;token=868aa341-3d11-40f4-853f-034d166ba19f" alt=""><figcaption></figcaption></figure>

#### 13) SingleStore

**Setup**

1\.     Register an account on [SingleStore](https://www.singlestore.com/)

2\.     Log in to the portal. On the left side panel, click **CLOUD** -> **Create new workspace group**, then click the **Create Workspace** button.

3\.     Select cloud provider and data region, then click **Next**:

<figure><img src="https://1720595571-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxWXmt1Z68dgle5JORrEw%2Fuploads%2FzPxq5fgYPIO4Mg80aJdF%2Fimage.png?alt=media&#x26;token=55bbcbd5-92d1-4531-8b9f-f2c4f74f6ba9" alt=""><figcaption></figcaption></figure>

4\.     Review and click **Create Workspace**:

<figure><img src="https://1720595571-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxWXmt1Z68dgle5JORrEw%2Fuploads%2FVsnE79fodDuqBhLSUYnf%2Fimage.png?alt=media&#x26;token=3c689338-83a9-4cfb-af0c-24e01ca632ea" alt=""><figcaption></figcaption></figure>

5\.     You should now see your workspace created:

6\.      Proceed to create a database

<figure><img src="https://1720595571-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxWXmt1Z68dgle5JORrEw%2Fuploads%2FQAIFZ9nLN5d2EVX39184%2Fimage.png?alt=media&#x26;token=9bd9b4a3-a97a-4ba2-934e-8e5282f18a17" alt=""><figcaption></figcaption></figure>

7. You should be able to see your database created and attached to the workspace:

<figure><img src="https://1720595571-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxWXmt1Z68dgle5JORrEw%2Fuploads%2FDHPOOP4WxVn9vQPrFja3%2Fimage.png?alt=media&#x26;token=a146a291-871d-4f1e-a165-2ab32bf3adce" alt=""><figcaption></figcaption></figure>

8. Click Connect from the workspace dropdown -> Connect Directly:

<figure><img src="https://1720595571-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxWXmt1Z68dgle5JORrEw%2Fuploads%2Fij3aZsJQlrK09eWt2P4y%2Fimage.png?alt=media&#x26;token=e53788c5-d8db-4985-a72d-dde9c1f744ed" alt=""><figcaption></figcaption></figure>

9. You can specify a new password or use the default generated one. Then click Continue:

<figure><img src="https://1720595571-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxWXmt1Z68dgle5JORrEw%2Fuploads%2FsiknLt2idIdJmeK1jwlc%2Fimage.png?alt=media&#x26;token=8f0c394f-a2e9-4a54-a0da-9f870eefad50" alt=""><figcaption></figcaption></figure>

10\.     On the tabs, switch to **Your App** and select **Node.js** from the dropdown. Take note of the `Username`, `Host`, and `Password`, as you will need these in THub later.

<figure><img src="https://1720595571-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxWXmt1Z68dgle5JORrEw%2Fuploads%2Fe7RiPOMSbhsV8aQrz8u9%2Fimage.png?alt=media&#x26;token=b8691c41-1d55-4cef-841a-a5d7735eb085" alt=""><figcaption></figcaption></figure>

11\.     Back on the THub canvas, drag and drop the SingleStore nodes. Click **Create New** from the Credentials dropdown:

<figure><img src="https://1720595571-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxWXmt1Z68dgle5JORrEw%2Fuploads%2FFv66z7a2DNUOw5rjRCXp%2FScreenshot%202024-07-09%20112802.png?alt=media&#x26;token=d898358a-853a-43e4-9de9-c3fd15298f2d" alt=""><figcaption></figcaption></figure>

12\.     Put in the Username and Password


13\.     Then specify the Host and Database Name:


14\.     Now you can start upserting data with SingleStore:

<figure><img src="https://1720595571-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxWXmt1Z68dgle5JORrEw%2Fuploads%2FNheQ3zRFQBnNGMoiXSOG%2Fimage.png?alt=media&#x26;token=9bbda819-7b3d-48cd-aa1f-bffaa0e12ff5" alt=""><figcaption></figcaption></figure>

15. Navigate back to the SingleStore portal and open your database; you will see all the data that has been upserted:

#### 14) Supabase

**Prerequisite**

1\.     Register an account for Supabase

2\.     Click **New project**

3\.     Input the required fields:

| Field Name        | Description                                    |
| ----------------- | ---------------------------------------------- |
| Name              | name of the project to be created (e.g. THub)  |
| Database Password | password to your Postgres database             |

4\.     Click **Create new project** and wait for the project to finish setting up

5\.     Click **SQL Editor**

6\.     Click **New query**

7\.     Copy and paste the table/match-function SQL query, then run it with Ctrl + Enter or by clicking **RUN**. Take note of the table name and function name:

* Table name: `documents`
* Query name: `match_documents`


**Setup**

* Click **Project Settings**
* Get your Project URL & API Key
* Copy and paste each detail (API Key, URL, Table Name, Query Name) into the **Supabase** node
* **Document** can be connected with any node under the **Document Loader** category
* **Embeddings** can be connected with any node under the **Embeddings** category

<figure><img src="https://1720595571-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxWXmt1Z68dgle5JORrEw%2Fuploads%2F6t8Pto4GQcphqukzhdEc%2FScreenshot%202024-07-09%20112811.png?alt=media&#x26;token=553199c2-4436-473c-a91f-a9f45f869e7a" alt=""><figcaption></figcaption></figure>

#### 15) Upstash Vector

Upsert data as embedding or string and perform similarity search with Upstash, the leading serverless data platform.

<figure><img src="https://1720595571-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxWXmt1Z68dgle5JORrEw%2Fuploads%2F83phmGp5ZU5Ecd46vRQ1%2FScreenshot%202024-07-09%20112920.png?alt=media&#x26;token=d309e086-a2fe-4862-9172-a788c5eca321" alt=""><figcaption></figcaption></figure>

* **Document** can be connected with any node under the **Document Loader** category
* **Embeddings** can be connected with any node under the **Embeddings** category
* **Record Manager** can be connected with the node under the **Record Manager** category

#### 16) Vectara

**Prerequisite**

* Register an account for Vectara
* Click **Create Corpus**
* Name the corpus to be created, click **Create Corpus**, and wait for the corpus to finish setting up


**Setup**

* Click on the **Access Control** tab in the corpus view
* Click on the **Create API Key** button, choose a name for the API key, and pick the **QueryService & IndexService** option
* Click **Create** to create the API key
* Get your Corpus ID, API Key, and Customer ID by clicking the down arrow under "Copy" for your new API key
* Back on the THub canvas, create your chatflow. Click **Create New** from the Credentials dropdown and enter your Vectara credentials

<figure><img src="https://1720595571-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxWXmt1Z68dgle5JORrEw%2Fuploads%2FmW07KXuU3XC5fnR5XsPt%2FScreenshot%202024-07-09%20112933.png?alt=media&#x26;token=2fcdcbc4-83d5-4d5f-bb67-4ca99c9c6d5b" alt=""><figcaption></figcaption></figure>

* **Document** can be connected with any node under the **Document Loader** category


**Vectara Query Parameters**

For finer control over the Vectara query parameters, click **Additional Parameters**. You can then change the following parameters from their defaults:

* **Metadata Filter**: Vectara supports metadata filtering. To use filtering, ensure that the metadata fields you want to filter by are defined in your Vectara corpus.
* **Sentences Before / Sentences After**: control how many sentences before/after the matching text are returned as results from the Vectara retrieval engine.
* **Lambda**: defines the behavior of hybrid search in Vectara.
* **Top-K**: how many results to return from Vectara for the query.
* **MMR-K**: number of results to use for MMR (maximal marginal relevance).

#### 17) Weaviate

Upsert embedded data and perform similarity or MMR search using Weaviate, a scalable open-source vector database.

<figure><img src="https://1720595571-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxWXmt1Z68dgle5JORrEw%2Fuploads%2FghIQS6ELc529LfLzYh2l%2FScreenshot%202024-07-09%20112943.png?alt=media&#x26;token=8496e48a-c8bd-4fd4-a76f-289d2f80d2ed" alt=""><figcaption></figcaption></figure>

* **Document** can be connected with any node under the **Document Loader** category
* **Embeddings** can be connected with any node under the **Embeddings** category
* **Record Manager** can be connected with the node under the **Record Manager** category

#### 18) Zep Collection - Open Source

Upsert embedded data and perform similarity or MMR search upon query using Zep, a fast and scalable building block for LLM apps.

<figure><img src="https://1720595571-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxWXmt1Z68dgle5JORrEw%2Fuploads%2FW3iXjee8Yqpbps74OXWw%2FScreenshot%202024-07-09%20112952.png?alt=media&#x26;token=bc68e9e5-9106-427e-860f-334ec80deafb" alt=""><figcaption></figcaption></figure>

* **Document** can be connected with any node under the **Document Loader** category
* **Embeddings** can be connected with any node under the **Embeddings** category


<figure><img src="https://1720595571-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxWXmt1Z68dgle5JORrEw%2Fuploads%2F3YQQfE6b3jkWN4NWBzkT%2FScreenshot%202024-07-09%20112959.png?alt=media&#x26;token=758637e9-6333-49ff-b849-986760d03437" alt=""><figcaption></figcaption></figure>

* **Document** can be connected with any node under the **Document Loader** category

#### 19) Couchbase Vector Store

Couchbase integrates seamlessly with THub as a high-performance vector store, enabling efficient storage and retrieval of vector embeddings.

**Key Features:**

* **Data Upsertion**: Allows upserting of embedded data into Couchbase buckets, scopes, and collections.
* **Vector Search**: Supports vector similarity searches using approximate nearest neighbor (ANN) algorithms.
* **Integration with THub**: Facilitates the creation of Retrieval-Augmented Generation (RAG) pipelines by combining document loaders, embedding models, and retrievers.

**Use Case Example:**

* **Workflow Setup**: A typical THub setup includes nodes for uploading documents (e.g., PDFs), splitting text into chunks, generating embeddings (e.g., using OpenAI models), and storing them in Couchbase. Retrieval nodes can then fetch relevant documents based on user queries.

<figure><img src="https://1720595571-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxWXmt1Z68dgle5JORrEw%2Fuploads%2FgIV630JjfCASunZaeACs%2Fimage.png?alt=media&#x26;token=a38af214-21fc-4934-a202-48123c679060" alt="" width="313"><figcaption></figcaption></figure>

#### 20) Document Store (Vector)

The Document Store (Vector) node in THub offers a centralized approach to managing and retrieving vectorized documents.

**Key Features:**

* **Data Management**: Enables uploading, splitting, and preparing datasets for upsertion in a single location.
* **Versatility**: Supports various data formats, simplifying data handling within THub.
* **API Operations**: Provides endpoints for creating, retrieving, updating, and deleting document stores and their contents.

**Use Case Example:**

* **Insurance Policy Retrieval**: Setting up a system to retrieve information about specific insurance policies by uploading relevant documents, processing them into vector embeddings, and enabling semantic search capabilities.

<figure><img src="https://1720595571-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxWXmt1Z68dgle5JORrEw%2Fuploads%2FCwPvuaXGie1TJZFlbK1B%2Fimage.png?alt=media&#x26;token=fa1d43d0-20c9-4f06-baf3-662447698854" alt="" width="272"><figcaption></figcaption></figure>

#### 21) Meilisearch Vector Store

Meilisearch, known for its lightweight and fast search capabilities, has introduced vector search functionalities, making it suitable for semantic and hybrid search applications.

**Key Features:**

* **AI-Powered Search**: Utilizes large language models (LLMs) to retrieve search results based on the meaning and context of queries.
* **Embedding Integration**: Supports configuring embedders (e.g., OpenAI) to translate documents into embeddings for semantic search.
* **Hybrid Search**: Combines traditional keyword-based search with vector search for enhanced relevance.

**Use Case Example:**

* **E-commerce Search**: Implementing a search system that understands user intent and context, providing more accurate product recommendations and search results.

<figure><img src="https://1720595571-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxWXmt1Z68dgle5JORrEw%2Fuploads%2Fk4upEqEQ460DI6AG0MKV%2Fimage.png?alt=media&#x26;token=16ec1302-bc9c-4415-a26b-3a2236f30fd1" alt="" width="264"><figcaption></figcaption></figure>
