🗨️Chat Models
Chat models take a list of messages as input and return a model-generated message as output.
1) AWS ChatBedrock
Wrapper around AWS Bedrock large language models that use the Chat endpoint.
2) Azure ChatOpenAI
Prerequisite
1. Log in or sign up to Azure
2. Create your Azure OpenAI resource and wait for approval (approximately 10 business days)
3. Your API key will be available at Azure OpenAI > click name_azure_openai > click Click here to manage keys
Setup
1. Click Go to Azure OpenAI Studio
2. Click Deployments
3. Click Create new deployment
4. Select the deployment settings (see below) and click Create
5. Azure ChatOpenAI is now successfully created
· Deployment name: gpt-35-turbo
· Instance name: shown at the top right corner
THub Setup
1. Chat Models in THub > drag Azure ChatOpenAI node
2. Copy and paste each detail (API Key, Instance & Deployment name, API Version) into the Azure ChatOpenAI credential
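If you want to sanity-check those details before saving the credential, you can call the deployment directly. The sketch below is only an illustration: <instance-name> is a placeholder for your own instance name, 2023-05-15 is one commonly used API version, and the deployment name gpt-35-turbo comes from the steps above.

# Hypothetical values: replace <instance-name>, the API version and $AZURE_OPENAI_API_KEY with your own
curl "https://<instance-name>.openai.azure.com/openai/deployments/gpt-35-turbo/chat/completions?api-version=2023-05-15" \
  -H "Content-Type: application/json" \
  -H "api-key: $AZURE_OPENAI_API_KEY" \
  -d '{"messages":[{"role":"user","content":"Hello"}]}'

A successful JSON response confirms that the API Key, Instance name, Deployment name and API Version are the ones to paste into the credential.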
3) NIBittensorChat
Wrapper around Bittensor subnet 1 large language models.
4) ChatAnthropic
Wrapper around ChatAnthropic large language models that use the Chat endpoint.
5) ChatCohere
Wrapper around Cohere Chat Endpoints.
6) ChatGoogleGenerativeAI
Prerequisite
1. Register a Google account
2. Create an API key
Setup
Chat Models > drag ChatGoogleGenerativeAI node
1) Connect Credential > click Create New
2) Fill in the Google AI credential
3) Voila 🎉, you can now use the ChatGoogleGenerativeAI node in THub
Safety Attributes Configuration
· Click Additional Parameters
· When configuring Safety Attributes, the number of selections in Harm Category and Harm Block Threshold must be the same. If they differ, it will throw the error Harm Category & Harm Block Threshold are not the same length
· For example, the combination of Safety Attributes below results in Dangerous being set to Low and Above, and Harassment being set to Medium and Above (see the sketch below)
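For reference only (this is the underlying Gemini REST API representation, not something you type into THub), that pairing is expressed as one threshold per category, which is why the two lists must be the same length:

{
  "safetySettings": [
    { "category": "HARM_CATEGORY_DANGEROUS_CONTENT", "threshold": "BLOCK_LOW_AND_ABOVE" },
    { "category": "HARM_CATEGORY_HARASSMENT", "threshold": "BLOCK_MEDIUM_AND_ABOVE" }
  ]
}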
7) ChatGooglePaLM
Wrapper around Google MakerSuite PaLM large language models using the Chat endpoint.
8) Google VertexAI
Prerequisites
1. Set up your GCP (Google Cloud Platform) account
2. Install the Google Cloud CLI
Setup
Enable Vertex AI API
· Go to Vertex AI on GCP and click "ENABLE ALL RECOMMENDED API"
Create credential file (Optional)
There are 2 ways to create a credential file:
No. 1: Use GCP CLI
1) Open terminal and run the following command
gcloud auth application-default login
2) Login to your GCP account
3) Check your credential file. You can find your credential file at the location printed by the previous command
No. 2: Use GCP console
1. Go to GCP console and click "CREATE CREDENTIALS"
2. Select "Service account"
3. Fill in the Service account details and click "CREATE AND CONTINUE"
4. Select a proper role (for example, Vertex AI User) and click "DONE"
5. Click the service account that you created and click "ADD KEY" -> "Create new key"
6. Select JSON and click "CREATE", then you can download your credential file
Without credential file
If you are using a GCP service like Cloud Run, or if you have installed default credentials on your local machine, you do not need to set this credential.
With credential file
1. Go to Credential page on THub and click "Add credential"
2. Click Google Vertex Auth
3. Register your credential file. There are 2 ways to register it:
· Option 1: Enter the path of your credential file
If you have the credential file on your machine, you can enter its path into Google Application Credential File Path
· Option 2: Paste the text of your credential file
Alternatively, you can copy all the text in the credential file and paste it into Google Credential JSON Object (see the example below)
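For reference, the downloaded service account key is a small JSON document roughly like the abridged sketch below (all values here are placeholders, and your file will contain a few additional fields). It is this whole document that goes into Google Credential JSON Object:

{
  "type": "service_account",
  "project_id": "<your-project-id>",
  "private_key_id": "<key-id>",
  "private_key": "-----BEGIN PRIVATE KEY-----\n...\n-----END PRIVATE KEY-----\n",
  "client_email": "<service-account-name>@<your-project-id>.iam.gserviceaccount.com",
  "client_id": "<client-id>",
  "token_uri": "https://oauth2.googleapis.com/token"
}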
4. Finally, click the "Add" button
5. You can now use ChatGoogleVertexAI with the credential in THub!
9) ChatHuggingFace
Wrapper around HuggingFace large language models.
10) ChatLocalAI
LocalAI is a drop-in replacement REST API that’s compatible with OpenAI API specifications for local inferencing. It allows you to run LLMs (and not only) locally or on-prem with consumer grade hardware, supporting multiple model families that are compatible with the ggml format.
• To use ChatLocalAI within THub, follow the steps below:
For example:
· Download one of the models from gpt4all.io
# Download gpt4all-j to models/
wget https://gpt4all.io/models/ggml-gpt4all-j.bin -O models/ggml-gpt4all-j
· In the /models folder, you should be able to see the downloaded model:
· Refer here for the list of supported models.
THub Setup
· Drag and drop a new ChatLocalAI component to canvas
Fill in the fields:
· Base Path: The base URL from LocalAI, such as http://localhost:8080/v1
· Model Name: The model you want to use. Note that it must be inside the /models folder of the LocalAI directory. For instance: ggml-gpt4all-j.bin (see the check below).
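Before wiring ChatLocalAI into a flow, it can help to confirm that LocalAI is actually serving the model. A minimal sketch, assuming LocalAI listens on http://localhost:8080 and the model file is named ggml-gpt4all-j.bin as above:

# List the models LocalAI has picked up from the /models folder
curl http://localhost:8080/v1/models

# Send a test request through the OpenAI-compatible chat endpoint
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "ggml-gpt4all-j.bin", "messages": [{"role": "user", "content": "Hello"}]}'

If both calls succeed, the same Base Path and Model Name should work in the ChatLocalAI node.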
11) ChatMistralAI
Prerequisite
• Register a Mistral AI account
• Create an API key
Setup
• Chat Models > drag ChatMistralAI node
1) Connect Credential > click Create New
2) Fill in the MistralAI credential
3) Provide the details of Model name and temperature
4) Voila 🎉, you can now use the ChatMistralAI node in THub
12) ChatOllama
Prerequisite
• Download Ollama or run it on Docker.
• For example, you can use the following commands to spin up a Docker instance and pull llama3:
docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
docker exec -it ollama ollama run llama3
Setup
• Chat Models > drag ChatOllama node
• Fill in the model that is running on Ollama. For example: llama3. You can also use additional parameters.
• You can now use the ChatOllama node in THub
Additional
• If you are running both THub and Ollama on Docker, you'll have to change the Base URL for ChatOllama.
• For Windows and macOS operating systems, specify http://host.docker.internal:11434. For Linux-based systems, the default Docker gateway should be used since host.docker.internal is not available: http://172.17.0.1:11434
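A quick way to check that the Base URL is reachable from wherever THub runs is to query Ollama's tags endpoint, which lists the models you have pulled. A minimal sketch, assuming the default port 11434; replace localhost with host.docker.internal or the Docker gateway IP as described above when THub runs in its own container:

# Should return a JSON list that includes the llama3 model pulled earlier
curl http://localhost:11434/api/tags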
13) ChatOllama Function
ChatOllama Function, also known as Function Based Chat Ollama (FBCO), is a wrapper for the LangChain OllamaFunctions class that allows for easy function creation and integration using the BaseFunction template. FBCO is designed to run locally on your machine using Ollama. It is not yet recommended to use FBCO in a production environment.
Fill in the fields:
• Base Path: The base URL of your Ollama instance, such as http://localhost:11434
• Model Name: The model you want to use.
• Temperature: The sampling temperature to use.
14) ChatOpenAI
Prerequisite
• An OpenAI account
• Create an API key
Setup
• Chat Models > drag ChatOpenAI node
• Connect Credential > click Create New
• Fill in the ChatOpenAI credential
• You can now use the ChatOpenAI node in THub
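If the node reports an authentication error, you can verify the API key itself outside THub before re-entering it in the credential. A minimal sketch, assuming the key is exported as OPENAI_API_KEY:

# A valid key returns a JSON list of models; an invalid key returns a 401 error
curl https://api.openai.com/v1/models \
  -H "Authorization: Bearer $OPENAI_API_KEY"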
Custom base URL and headers
THub supports using a custom base URL and headers for ChatOpenAI. Users can easily use integrations like OpenRouter, TogetherAI, and others that support OpenAI API compatibility.
TogetherAI
• Refer to official docs from TogetherAI
• Create a new credential with TogetherAI API key
• Click Additional Parameters on ChatOpenAI node.
• Change the Base Path (see the example below).
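As a rough guide, TogetherAI exposes an OpenAI-compatible endpoint, so the Base Path typically looks like the line below; check the TogetherAI docs for the current value:

Base Path: https://api.together.xyz/v1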
OpenRouter
• Refer to official docs from OpenRouter
• Create a new credential with OpenRouter API key
• Click Additional Parameters on ChatOpenAI node
• Change the Base Path and Base Options (see the example below).
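As a sketch (the URL and header values below are placeholders to adapt; check the OpenRouter docs for current requirements), the Base Path points at OpenRouter's OpenAI-compatible endpoint and the Base Options can carry OpenRouter's optional attribution headers:

Base Path: https://openrouter.ai/api/v1

Base Options:
{
  "defaultHeaders": {
    "HTTP-Referer": "https://example.com",
    "X-Title": "THub"
  }
}

The "defaultHeaders" key follows the OpenAI client configuration that the node passes through; if your THub version expects a different shape, adjust accordingly.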
Custom Model
For models that are not supported on the ChatOpenAI node, you can use ChatOpenAI Custom. This allows users to fill in a model name such as mistralai/Mixtral-8x7B-Instruct-v0.1
Image Upload
• You can also allow images to be uploaded and analyzed by the LLM. Under the hood, THub will use the OpenAI Vision model to process the image.
• From the chat interface, you will now see a new image upload button
15) ChatOpenAI Custom
Custom/fine-tuned model using an OpenAI Chat-compatible API.
Setup
• Chat Models > drag ChatOpenAI Custom node
• Fill in the ChatOpenAI credential.
• Select the model and provide temperature details
• You can now use the ChatOpenAI Custom node in THub.
16) ChatTogetherAI
Wrapper around TogetherAI large language models.
Prerequisite
• A TogetherAI account
• Create an API key
Setup
• Chat Models > drag ChatTogetherAI node
• Connect Credential > click Create New
• Fill in the TogetherAI credential, Model name and temperature details.
• You can now use the ChatTogetherAI node in THub
17) GroqChat
Wrapper around Groq API with LPU Inference Engine.
Prerequisite
• A Groq account
• Create an API key
Setup
• Chat Models > drag GroqChat node
• Connect Credential > click Create New
• Fill in the GroqChat credential, model name and temperature details.
• You can now use the GroqChat node in THub.