THub Technical Documentation

🧠 Large Language Models (LLMs)

LLMs are advanced AI systems designed to understand and generate human language. They are trained on vast amounts of data and can perform a wide variety of language-related tasks with impressive accuracy.

Features

• Natural Language Processing (NLP): LLMs can understand, interpret, and generate human language.

• Contextual Understanding: They can grasp the context of conversations or text, providing relevant responses.

• Multilingual Capabilities: Many LLMs support multiple languages, broadening their applicability.

• Scalability: These models can handle a wide range of applications, from chatbots to complex data analysis.
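The provider nodes listed below all follow the same pattern: a shared LLM interface with provider-specific implementations behind it, so a flow can swap one model for another without changing anything else. A minimal, hypothetical sketch of that wrapper pattern (the class and method names here are illustrative, not THub's actual internals):

```python
from abc import ABC, abstractmethod

class BaseLLM(ABC):
    """Common interface every provider wrapper implements."""

    @abstractmethod
    def generate(self, prompt: str) -> str:
        """Return the model's completion for a prompt."""

class EchoLLM(BaseLLM):
    """Stand-in provider used here so the sketch runs without API keys."""

    def generate(self, prompt: str) -> str:
        return f"echo: {prompt}"

def run_flow(llm: BaseLLM, prompt: str) -> str:
    # The flow depends only on the BaseLLM interface, so any provider
    # wrapper (Bedrock, OpenAI, Cohere, ...) can be dropped in here.
    return llm.generate(prompt)

print(run_flow(EchoLLM(), "hello"))  # echo: hello
```

In a real deployment, `EchoLLM` would be replaced by a wrapper that calls the provider's API; the surrounding flow stays unchanged.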

Applications

• Chatbots and Virtual Assistants: Enhance customer service by providing accurate and timely responses.

• Content Creation: Assist in generating articles, reports, and other written content.

• Translation Services: Improve the accuracy and efficiency of translating text between languages.

• Data Analysis: Aid in interpreting and summarizing large datasets.

1) AWS Bedrock

Wrapper around AWS Bedrock large language models.

2) Azure OpenAI

Wrapper around Azure OpenAI large language models.

3) Fireworks

The Fireworks LLM node in THub serves as a wrapper around Fireworks AI's chat endpoints, enabling integration of their LLM capabilities into your workflows.

Key Features:

  • Integration with Fireworks AI: Allows seamless connection to Fireworks AI's LLM services.

  • Use Cases: Suitable for various applications, including:

    • Calling child flows

    • Interacting with APIs

    • Multiple document Q&A

    • SQL Q&A

    • Data upsertion

    • Web scraping Q&A

4) Cohere

Wrapper around Cohere large language models.

5) IBM WatsonX

The IBM WatsonX LLM node integrates IBM's WatsonX.ai platform into THub, allowing users to leverage IBM's suite of foundation models within their workflows.

Setup Requirements:

To configure this node, you'll need the following:

  • WatsonX.ai URL: The endpoint for the WatsonX.ai service.

  • Project ID: Identifier for your WatsonX project.

  • API Key: Authentication key for accessing WatsonX services.

  • Model ID: Identifier for the specific model you wish to use.

  • Model Version: The version of the model to be utilized.

Use Cases:

  • Calling child flows

  • Interacting with APIs

  • Multiple document Q&A

  • SQL Q&A

  • Data upsertion

  • Web scraping Q&A
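Since all five settings above are required, it helps to validate them together before wiring up the node. A minimal sketch of that check (the field names mirror the list above and are illustrative; this is not IBM's SDK):

```python
from dataclasses import dataclass, fields

@dataclass
class WatsonXConfig:
    url: str            # WatsonX.ai endpoint
    project_id: str     # identifier for your WatsonX project
    api_key: str        # authentication key
    model_id: str       # identifier for the chosen foundation model
    model_version: str  # version of the model to use

    def validate(self) -> None:
        # Every field is required; fail fast with a clear message.
        missing = [f.name for f in fields(self) if not getattr(self, f.name)]
        if missing:
            raise ValueError(f"missing WatsonX settings: {', '.join(missing)}")

# Example values are placeholders, not real credentials.
cfg = WatsonXConfig("https://example.watsonx.endpoint", "proj-123",
                    "secret-key", "example-model-id", "2024-01-01")
cfg.validate()  # passes; raises ValueError if any field is empty
```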

6) Google Vertex AI

Wrapper around Google Vertex AI large language models.

7) HuggingFace Inference

Wrapper around HuggingFace large language models.

8) Ollama

Wrapper around open-source large language models served via Ollama.

9) OpenAI

Wrapper around OpenAI large language models.

10) Replicate

Use Replicate to run open-source models in the cloud.

11) Together AI

Together AI offers a collection of open-source LLMs optimized for tasks such as chat, code generation, and reasoning. By integrating the Together AI node into THub, users can leverage these models within their workflows to build sophisticated AI solutions.


Key Features

  • Access to Diverse Models: Utilize models like LLaMA, Mistral, DeepSeek, and others for various tasks.

  • OpenAI-Compatible API: Together AI's API is compatible with OpenAI's, allowing for straightforward integration.

  • Custom Base URL Configuration: In THub, users can set a custom base URL to point to Together AI's API endpoints.

  • Use Cases:

    • Calling child flows

    • Interacting with APIs

    • Multiple document Q&A

    • SQL Q&A

    • Data upsertion

    • Web scraping Q&A
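Because the API is OpenAI-compatible, pointing an OpenAI-style client at a different base URL is typically all the integration requires: the request paths stay the same and only the host changes. A stdlib-only sketch of that idea (the Together AI base URL shown is an assumption, not taken from this document):

```python
from urllib.parse import urljoin

OPENAI_BASE = "https://api.openai.com/v1/"
TOGETHER_BASE = "https://api.together.xyz/v1/"  # assumed Together AI endpoint

def chat_completions_url(base_url: str) -> str:
    # Same OpenAI-convention path on both services; only the base differs.
    return urljoin(base_url, "chat/completions")

print(chat_completions_url(OPENAI_BASE))
# https://api.openai.com/v1/chat/completions
print(chat_completions_url(TOGETHER_BASE))
# https://api.together.xyz/v1/chat/completions
```

This is why the custom base URL setting mentioned above is enough to redirect an existing OpenAI-style node to Together AI's models.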