🧠 Large Language Models (LLMs)
LLMs are advanced AI systems designed to understand and generate human language. They are trained on vast amounts of data and can perform a variety of language-related tasks with impressive accuracy.
Features
• Natural Language Processing (NLP): LLMs can understand, interpret, and generate human language.
• Contextual Understanding: They can grasp the context of conversations or text, providing relevant responses.
• Multilingual Capabilities: Many LLMs support multiple languages, broadening their applicability.
• Scalability: These models can handle a wide range of applications, from chatbots to complex data analysis.
Applications
• Chatbots and Virtual Assistants: Enhance customer service by providing accurate and timely responses.
• Content Creation: Assist in generating articles, reports, and other written content.
• Translation Services: Improve the accuracy and efficiency of translating text between languages.
• Data Analysis: Aid in interpreting and summarizing large datasets.
1) AWS Bedrock
The AWS Bedrock LLM node integrates Amazon's fully managed foundation model service into THub, allowing users to leverage a wide range of AI models — including Amazon Titan, Anthropic Claude, and Meta Llama — within their workflows.

Setup Requirements:
To configure this node, you'll need the following:
AWS Credential: An IAM user or role with the bedrock:InvokeModel permission.
Region: The AWS region where Bedrock is enabled on your account.
Model Name: The foundation model you want to invoke (e.g. amazon.titan-tg1-large).
Custom Model Name (optional): The ARN of a fine-tuned or custom model if you've provisioned one in Bedrock.
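Under the hood, the node calls Bedrock's InvokeModel API with a JSON body in the schema of the chosen model family. A minimal sketch of what that request body looks like for an Amazon Titan text model (the function name and defaults here are illustrative, not THub internals):

```python
import json

def build_titan_request(prompt: str, max_tokens: int = 512, temperature: float = 0.7) -> str:
    """Build the JSON body Bedrock's InvokeModel API expects for
    Amazon Titan text models (per the Titan text request schema)."""
    body = {
        "inputText": prompt,
        "textGenerationConfig": {
            "maxTokenCount": max_tokens,
            "temperature": temperature,
        },
    }
    return json.dumps(body)

# With boto3 installed and IAM credentials configured, the call would be roughly:
#   client = boto3.client("bedrock-runtime", region_name="us-east-1")
#   client.invoke_model(modelId="amazon.titan-tg1-large",
#                       body=build_titan_request("Summarise this report: ..."))
```

Note that each provider on Bedrock (Anthropic, Meta, AI21) uses its own request schema; the node maps the Model Name to the correct one for you.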
Key Features:
Access multiple foundation model providers (Amazon, Anthropic, Meta, AI21) from a single node.
Native AWS IAM authentication — no separate API key management required.
Supports custom and fine-tuned models via model ARN.
Cache toggle to reduce repeated API calls for identical prompts.
Use Cases:
Summarising documents and reports
Answering questions over enterprise data
Calling child flows with AI-generated output
SQL Q&A and data analysis
Web scraping Q&A
2) Azure OpenAI
The Azure OpenAI LLM node integrates Microsoft Azure's hosted OpenAI service into THub, allowing users to leverage GPT-4, GPT-3.5, and other OpenAI models within their workflows — with enterprise-grade compliance and data residency.

Setup Requirements:
To configure this node, you'll need the following:
Connect Credential: An Azure OpenAI credential containing your Azure endpoint URL and API key.
Model Name: Your Azure deployment name, i.e. the name you gave the model when deploying it in the Azure portal (e.g. text-davinci-003).
Temperature (optional): Controls response randomness. Range 0.0 to 2.0. Default is 0.9.
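The key difference from the plain OpenAI API is that Azure routes requests by deployment name, not by model name. A sketch of how the request URL is assembled from the credential's endpoint and the Model Name field (the resource name and api-version below are placeholder assumptions):

```python
def azure_completion_url(endpoint: str, deployment: str, api_version: str = "2023-05-15") -> str:
    """Build the Azure OpenAI completions URL. Azure identifies the model
    by the deployment name you chose in the portal, not the base model id."""
    return (
        f"{endpoint.rstrip('/')}/openai/deployments/{deployment}"
        f"/completions?api-version={api_version}"
    )
```

The API key from the credential is then sent in an api-key header rather than the Bearer token the public OpenAI API uses.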
Key Features:
Uses OpenAI's GPT models hosted entirely within your Azure subscription.
Data stays within your chosen Azure region — meets enterprise compliance and data residency requirements.
Billed through your existing Azure account.
Cache toggle to reuse responses for repeated prompts.
Use Cases:
Interacting with APIs
Multiple document Q&A
Calling child flows
Data summarisation and transformation
SQL Q&A
3) Cohere
The Cohere LLM node integrates Cohere's enterprise language models into THub, allowing users to leverage Cohere's suite of instruction-following and NLP models within their workflows.

Setup Requirements:
To configure this node, you'll need the following:
Connect Credential: A Cohere API key from your Cohere dashboard.
Model Name: The Cohere model to use (e.g. command, command-r, command-r-plus).
Temperature (optional): Controls response randomness. Range 0.0 to 1.0. Default is 0.7.
Max Tokens (optional): Maximum length of the generated response. Leave blank to use the model's default.
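A sketch of how these settings map onto a Cohere generation request body, assuming the v1 generate-style JSON schema (the helper name is illustrative). Leaving Max Tokens blank simply omits the field, so the model's default applies:

```python
import json

def build_cohere_request(prompt, model="command", temperature=0.7, max_tokens=None):
    """Assemble the JSON body for a Cohere text-generation request.
    max_tokens is only included when explicitly set."""
    body = {"model": model, "prompt": prompt, "temperature": temperature}
    if max_tokens is not None:
        body["max_tokens"] = max_tokens
    return json.dumps(body)
```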
Key Features:
Purpose-built models for enterprise NLP tasks — summarisation, classification, and generation.
command-r-plus supports complex multi-step reasoning and long documents.
Lightweight command-light model for fast, cost-efficient tasks.
Cache toggle to reuse responses for repeated prompts.
Use Cases:
Document summarisation
Text classification and tagging
Multiple document Q&A
Data upsertion
Web scraping Q&A
4) Google Vertex AI
The Google Vertex AI LLM node integrates Google Cloud's managed AI platform into THub, allowing users to leverage Google's PaLM 2 and Gemini foundation models within their workflows.

Setup Requirements:
To configure this node, you'll need the following:
Connect Credential: A Google Cloud service account JSON key with the Vertex AI User role (roles/aiplatform.user).
Model Name: The Vertex AI model to invoke (e.g. text-bison, gemini-pro).
Temperature (optional): Controls response randomness. Range 0.0 to 1.0. Default is 0.7.
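Vertex AI addresses Google's publisher models by a full resource path inside your GCP project; the Model Name above becomes the final path segment. A small sketch of that mapping (project and location values are placeholders):

```python
def vertex_model_path(project: str, location: str, model: str) -> str:
    """Build the Vertex AI resource path for a Google publisher model.
    Requests are billed against the given GCP project."""
    return f"projects/{project}/locations/{location}/publishers/google/models/{model}"
```

The service account key from the credential is used to mint an OAuth access token for these calls, which is why no separate API key is needed.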
Key Features:
Access Google's PaLM 2 and Gemini model families from a single node.
Runs within your GCP project — billed through Google Cloud.
gemini-pro supports extended context and stronger reasoning tasks.
Cache toggle to reuse responses for repeated prompts.
Use Cases:
Calling child flows
Interacting with APIs
SQL Q&A
Multiple document Q&A
Data upsertion
5) Ollama
The Ollama LLM node integrates a locally hosted Ollama instance into THub, allowing users to run open-source foundation models entirely within their own infrastructure — with no data sent to external APIs.

Setup Requirements:
To configure this node, you'll need the following:
Base URL: The address where your Ollama server is running (default: http://localhost:11434).
Model Name: The name of the model you have pulled locally (e.g. llama3, mistral, phi3). Run ollama list in your terminal to see available models.
Temperature (optional): Controls response randomness. Range 0.0 to 1.0. Default is 0.9.
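The node talks to the local server over Ollama's REST API. A sketch of the JSON body POSTed to <Base URL>/api/generate, assuming the standard generate endpoint (the helper name is illustrative):

```python
import json

def build_ollama_request(prompt, model="llama3", temperature=0.9):
    """JSON body for POST <Base URL>/api/generate on a local Ollama server.
    stream=False requests a single JSON response instead of chunked output."""
    return json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,
        "options": {"temperature": temperature},
    })
```

Because the request never leaves your machine, this is the node to reach for when prompts contain data that must not touch an external API.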
Key Features:
Fully self-hosted — no data leaves your infrastructure.
No per-token API costs; runs on your own hardware.
Supports a wide range of open-source models: Llama 3, Mistral, Phi-3, Gemma, Code Llama, and more.
Cache toggle to reduce local inference overhead for repeated prompts.
Use Cases:
Privacy-sensitive document Q&A
Offline and air-gapped workflow automation
Web scraping Q&A
SQL Q&A
Data upsertion
6) OpenAI
The OpenAI LLM node integrates OpenAI's API directly into THub, allowing users to leverage GPT-4, GPT-4o, GPT-3.5, and other flagship models within their workflows.

Setup Requirements:
To configure this node, you'll need the following:
Connect Credential: An OpenAI API key from your OpenAI account.
Model Name: The OpenAI model to use (e.g. gpt-4o, gpt-3.5-turbo, gpt-3.5-turbo-instruct).
Temperature (optional): Controls response randomness. Range 0.0 to 2.0. Default is 0.7.
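Chat models and legacy completion models take different request shapes: chat models expect a messages list, while the -instruct completion models take a plain prompt string. A simplified sketch of that branching (the suffix check is a hedged heuristic, not THub's actual routing logic):

```python
def build_openai_request(model: str, prompt: str, temperature: float = 0.7) -> dict:
    """Build an OpenAI request body, choosing the chat or legacy
    completion shape based on the model name."""
    if model.endswith("-instruct"):
        # Legacy completion models (e.g. gpt-3.5-turbo-instruct) take a raw prompt.
        return {"model": model, "prompt": prompt, "temperature": temperature}
    # Chat models (e.g. gpt-4o, gpt-3.5-turbo) take a messages list.
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
    }
```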
Key Features:
Direct access to OpenAI's full model lineup — GPT-4o, GPT-4 Turbo, GPT-3.5, and legacy completion models.
Simplest setup among all LLM nodes — just an API key and model name.
Supports both chat models (gpt-4o, gpt-3.5-turbo) and legacy completion models (gpt-3.5-turbo-instruct).
Cache toggle to reuse responses for repeated prompts.
Use Cases:
Calling child flows
Interacting with APIs
Multiple document Q&A
SQL Q&A
Data upsertion
Web scraping Q&A