⛓️ Chains
In the context of chatbots and large language models, "chains" typically refer to sequences of text or conversation turns. These chains are used to store and manage the conversation history.
Here's how chains work:
Conversation History: When a user interacts with a chatbot or language model, the conversation is often represented as a series of text messages or conversation turns. Each message from the user and the model is stored in chronological order to maintain the context of the conversation.
Input and Output: Each chain consists of both user input and model output. The user's input is usually referred to as the "input chain," while the model's responses are stored in the "output chain." This allows the model to refer back to previous messages in the conversation.
Contextual Understanding: By preserving the entire conversation history in these chains, the model can understand the context and refer to earlier messages to provide coherent and contextually relevant responses. This is crucial for maintaining a natural and meaningful conversation with users.
Maximum Length: Chains have a maximum length to manage memory usage and computational resources. When a chain becomes too long, older messages may be removed or truncated to make room for new messages. This can potentially lead to loss of context if important conversation details are removed.
Continuation of Conversation: In a real-time chatbot or language model interaction, the input chain is continually updated with the user's new messages, and the output chain is updated with the model's responses. This allows the model to keep track of the ongoing conversation and respond appropriately.
Chains are a fundamental concept in building and maintaining chatbot and language model conversations. They ensure that the model has access to the context it needs to generate meaningful and context-aware responses, making the interaction more engaging and useful for users.
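To make these ideas concrete, here is a minimal, framework-agnostic Python sketch of a conversation chain: user and model turns are stored in order, and the oldest turns are dropped once a maximum length is reached. The class and turn format are illustrative only, not any particular library's API.

```python
from collections import deque

class ConversationHistory:
    """Stores user and model turns in order and truncates the oldest
    turns once the history exceeds a maximum number of turns."""

    def __init__(self, max_turns=20):
        # deque drops the oldest entry automatically when it is full
        self.turns = deque(maxlen=max_turns)

    def add_user_message(self, text):
        self.turns.append(("user", text))

    def add_model_message(self, text):
        self.turns.append(("model", text))

    def as_prompt(self):
        # Flatten the stored turns into a single prompt string for the model
        return "\n".join(f"{role}: {text}" for role, text in self.turns)

chat = ConversationHistory(max_turns=4)
chat.add_user_message("Hi, who won the 2018 World Cup?")
chat.add_model_message("France won the 2018 FIFA World Cup.")
chat.add_user_message("Who was their captain?")
print(chat.as_prompt())  # the model sees the earlier turns, so "their" is resolvable
```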
1) GET API Chain
Chain to run queries against a GET API.
• Any Chat model can be connected under Chat Model category
A GET API Chain is a sequence of operations that allows running queries against a GET API endpoint. This chain can be used to fetch data from the API, process the responses, and integrate the results into your application. The chain enhances the functionality by allowing multiple API requests to be managed, processed, and used efficiently.
Features
· Sequential Execution: Handles a series of GET requests in a defined order.
· Data Processing: Processes and transforms the API responses as needed.
· Error Handling: Manages errors and exceptions during API calls.
· Integration: Facilitates integration of API responses into the application workflow.
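A minimal Python sketch of this behaviour using the `requests` library. The endpoint and parameters are hypothetical, and a real GET API Chain would typically let the language model decide which query to send:

```python
import requests

def run_get_chain(base_url, queries, timeout=10):
    """Run a sequence of GET requests in order and collect the parsed results."""
    results = []
    for params in queries:
        try:
            response = requests.get(base_url, params=params, timeout=timeout)
            response.raise_for_status()          # surface HTTP errors
            results.append(response.json())      # process/transform as needed
        except requests.RequestException as err:
            results.append({"error": str(err)})  # simple error handling
    return results

# Hypothetical endpoint and query parameters, for illustration only
data = run_get_chain(
    "https://api.example.com/v1/weather",
    [{"city": "Berlin"}, {"city": "Tokyo"}],
)
```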
2) OpenAPI Chain
Chain that automatically selects and calls APIs based only on an OpenAPI spec.
• Any Chat model can be connected under Chat Model category
• Input Moderation can be connected with any node under Moderation category
The OpenAPI Chain is a mechanism that allows automatic selection and invocation of APIs based on an OpenAPI specification. By leveraging the OpenAPI spec, this chain can dynamically discover API endpoints, understand their required parameters, and make appropriate API calls without requiring hardcoded logic.
Features
· Automatic API Discovery: Utilizes the OpenAPI spec to discover available API endpoints and their details.
· Dynamic Parameter Handling: Automatically identifies and handles required parameters for each API call.
· Flexible Integration: Can be integrated into various applications to streamline API interactions.
· Error Handling: Manages errors and exceptions during API calls, ensuring robust operations.
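The sketch below shows the core mechanic in plain Python: reading the available operations and their required parameters out of an OpenAPI document and dispatching a request accordingly. The spec fragment and base URL are invented for illustration; a real chain would additionally ask the model to choose the operation and fill in parameter values:

```python
import requests

def discover_operations(openapi_spec):
    """List (method, path, required query parameters) found in an OpenAPI spec."""
    operations = []
    for path, methods in openapi_spec.get("paths", {}).items():
        for method, details in methods.items():
            required = [
                p["name"] for p in details.get("parameters", [])
                if p.get("in") == "query" and p.get("required")
            ]
            operations.append((method.upper(), path, required))
    return operations

def call_operation(base_url, method, path, params):
    # Dispatch the discovered operation with the parameters chosen by the model
    response = requests.request(method, base_url + path, params=params, timeout=10)
    response.raise_for_status()
    return response.json()

# Minimal hand-written spec fragment, for illustration only
spec = {"paths": {"/pets": {"get": {"parameters": [
    {"name": "status", "in": "query", "required": True}]}}}}
print(discover_operations(spec))  # [('GET', '/pets', ['status'])]
```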
3) POST API Chain
Chain to run queries against a POST API.
A POST API Chain is a sequence of operations designed to execute queries against a POST API endpoint. This chain facilitates the automation of sending data to the API, processing the responses, and integrating the results into your application workflow.
Features
· Query Execution: Executes queries against a POST API endpoint using the POST method.
· Data Processing: Processes and transforms the API responses as needed.
· Error Handling: Manages errors and exceptions during API calls.
· Integration: Facilitates integration of API responses into the application workflow.
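The pattern mirrors the GET API Chain, except that the data travels in the request body rather than the URL. A minimal sketch, again with a hypothetical endpoint and payload:

```python
import requests

def run_post_chain(url, payloads, timeout=10):
    """Send a sequence of JSON payloads to a POST endpoint and collect the results."""
    results = []
    for body in payloads:
        try:
            response = requests.post(url, json=body, timeout=timeout)  # body, not query string
            response.raise_for_status()
            results.append(response.json())
        except requests.RequestException as err:
            results.append({"error": str(err)})
    return results

# Hypothetical endpoint; the payload is sent as the JSON request body
results = run_post_chain(
    "https://api.example.com/v1/orders",
    [{"item": "book", "quantity": 2}],
)
```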
4) Conversation Chain
Chat-model-specific conversational chain with memory.
• Chat Prompt Template can be connected with any node under Prompt category
• Any Chat model can be connected under Chat Model category
• Memory can be connected with any node under Memory category
• Input Moderation can be connected with any node under Moderation category
A Conversation Chain with Memory is a specialized sequence of operations designed to facilitate conversational interactions using chat models. Unlike traditional chatbot architectures, this chain incorporates a memory component to retain context and history, enabling more engaging and coherent conversations over time.
Features
· Conversational Flow: Manages the flow of conversation between the user and the chat model.
· Memory Management: Maintains a memory store to retain context, history, and user preferences.
· Response Generation: Utilizes chat models to generate responses based on user inputs and context.
· Contextual Understanding: Enhances conversation quality by considering the context of previous interactions.
· Error Handling: Manages errors and exceptions during conversation processing.
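For illustration, the classic LangChain equivalent of this setup looks roughly like the sketch below. It assumes the `langchain` Python package and an OpenAI API key; newer releases import `ChatOpenAI` from `langchain_openai` instead:

```python
from langchain.chains import ConversationChain
from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationBufferMemory

llm = ChatOpenAI(temperature=0)        # reads OPENAI_API_KEY from the environment
memory = ConversationBufferMemory()    # keeps the running chat history

chain = ConversationChain(llm=llm, memory=memory)

chain.predict(input="Hi, my name is Ada.")
print(chain.predict(input="What is my name?"))  # memory supplies the earlier turn
```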
5) Conversational Retrieval QA Chain
A chain for performing question-answering tasks with a retrieval component.
• Vector Store Retriever can be connected with any node under Retriever category
• Any Chat model can be connected under Chat Model category
• Memory can be connected with any node under Memory category
• Input Moderation can be connected with any node under Moderation category
The Conversational Retrieval QA Chain is a specialized sequence of operations designed for performing question-answering tasks with a retrieval component. This chain combines the capabilities of a question-answering model with a retrieval mechanism to provide accurate and relevant responses to user queries in conversational settings.
Features
· Question Answering Model: Utilizes a question-answering model to generate responses to user queries.
· Retrieval Component: Incorporates a retrieval mechanism to retrieve relevant information from a knowledge base or corpus.
· Contextual Understanding: Enhances response generation by considering the context of the conversation and previous interactions.
· Error Handling: Manages errors and exceptions during question-answering and retrieval processes.
· Feedback Loop: Optionally includes a feedback loop to improve response accuracy over time based on user feedback.
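Conceptually, each turn runs through three steps: condense the follow-up question into a standalone question using the chat history, retrieve documents for that question, and answer from those documents. A minimal Python sketch with stub components (no external services required) to show the shape of the pipeline:

```python
def conversational_retrieval_qa(question, chat_history, condense, retrieve, answer):
    """One turn of a conversational retrieval QA chain, split into its typical steps."""
    # 1. Rewrite the follow-up into a standalone question using the history
    standalone = condense(question, chat_history) if chat_history else question
    # 2. Fetch documents relevant to the standalone question
    docs = retrieve(standalone)
    # 3. Ask the chat model to answer from those documents
    reply = answer(standalone, docs)
    chat_history.append((question, reply))
    return reply

# Stub components standing in for the LLM, the retriever, and the answer step
history = []
condense = lambda q, h: f"{h[-1][0]} -> follow-up: {q}"
retrieve = lambda q: ["The X100 camera warranty lasts two years."]
answer = lambda q, docs: f"Based on {len(docs)} document(s): {docs[0]}"

print(conversational_retrieval_qa("How long is the X100 warranty?", history, condense, retrieve, answer))
print(conversational_retrieval_qa("Does that include the battery?", history, condense, retrieve, answer))
```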
6) LLM Chain
Chain to run queries against LLMs.
• Tool can be connected with any node under Tools category
• Any Prompt node can be connected under Prompt category
• Any Output Parser can be connected with any node under Output Parser category
• Input Moderation can be connected with any node under Moderation category
The LLM Chain is a specialized sequence of operations designed to run queries against Large Language Models (LLMs). This chain enables users to interact with LLMs to generate responses, perform tasks, or retrieve information by providing queries in natural language.
Features
· Query Execution: Executes queries against LLMs to generate responses or perform tasks.
· Natural Language Understanding: Interprets user queries in natural language format.
· Response Generation: Utilizes LLMs to generate contextually relevant responses based on user queries.
· Contextual Understanding: Considers the context of previous interactions to provide more relevant and coherent responses.
· Error Handling: Manages errors and exceptions during query execution to ensure smooth interactions.
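A minimal sketch of this pattern using the LangChain Python package (assumed here purely for illustration; it requires an OpenAI API key): a prompt template is filled with the user's input and the rendered prompt is passed to the model.

```python
from langchain.chains import LLMChain
from langchain.chat_models import ChatOpenAI
from langchain.prompts import PromptTemplate

prompt = PromptTemplate(
    input_variables=["product"],
    template="Suggest one short, catchy name for a company that makes {product}.",
)

chain = LLMChain(llm=ChatOpenAI(temperature=0.7), prompt=prompt)
print(chain.run(product="solar-powered kettles"))  # the template is filled, then sent to the LLM
```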
7) Multi Prompt Chain
Chain that automatically picks an appropriate prompt from multiple prompt templates.
• Vector Store Retriever can be connected with any node under Retriever category
• Any Chat model can be connected under Chat Model category
• Input Moderation can be connected with any node under Moderation category
The Multi-Prompt Chain is a specialized sequence of operations designed to automatically select an appropriate prompt from multiple prompt templates. This chain facilitates the generation of diverse outputs by leveraging different prompt variations tailored to specific tasks or scenarios.
Features
· Prompt Selection: Automatically selects an appropriate prompt from a collection of prompt templates based on the task or scenario.
· Diverse Output: Generates diverse outputs by using different prompt variations to elicit varied responses from language models.
· Task-specific Prompts: Tailors prompt templates to specific tasks or scenarios to improve response relevance and quality.
· Contextual Understanding: Considers the context of the conversation or task to select the most relevant prompt.
· Error Handling: Manages errors and exceptions during prompt selection to ensure smooth operation.
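A simplified Python sketch of the selection step: a router (in practice usually the chat model itself) picks one of several named templates, which is then filled in and sent to the main model. The templates and the keyword-based stub router are invented for illustration:

```python
PROMPTS = {
    "physics": "You are a physics tutor. Explain step by step:\n{question}",
    "history": "You are a history teacher. Include relevant dates:\n{question}",
    "default": "Answer the following question concisely:\n{question}",
}

def multi_prompt(question, router, templates):
    """Ask the router which template fits, then fill it in for the main model."""
    name = router(question, list(templates))
    template = templates.get(name, templates["default"])
    return template.format(question=question)

# Stub router standing in for an LLM that classifies the question
router = lambda question, names: "physics" if "gravity" in question.lower() else "default"

print(multi_prompt("Why does gravity bend light?", router, PROMPTS))
```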
8) Multi Retrieval QA Chain
QA chain that automatically picks an appropriate vector store from multiple retrievers.
• Vector Store Retriever can be connected with any node under Retriever category
• Any Chat model can be connected under Chat Model category
• Input Moderation can be connected with any node under Moderation category
The Multi Retrieval QA Chain is a specialized sequence of operations designed to perform question-answering tasks by automatically selecting an appropriate vector store from multiple retrievers. This chain combines the capabilities of retrieval-based question-answering systems with the flexibility of choosing from different vector stores to retrieve relevant information for answering user queries.
Features
· Vector Store Selection: Automatically selects an appropriate vector store from a collection of retrievers based on the query and context.
· Question Answering: Utilizes the selected vector store to retrieve relevant information for generating responses to user queries.
· Contextual Understanding: Considers the context of the conversation or query to select the most relevant vector store.
· Error Handling: Manages errors and exceptions during retrieval to ensure accurate and reliable question-answering.
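The routing idea is the same as in the Multi Prompt Chain, but the choice is between vector stores rather than prompts. A toy Python sketch, with word-overlap scoring standing in for the LLM- or embedding-based routing a real chain would use:

```python
def pick_retriever(question, retrievers):
    """Choose the vector store whose description best matches the question."""
    words = set(question.lower().split())
    return max(retrievers,
               key=lambda r: len(words & set(r["description"].lower().split())))

# Two toy "vector stores" with descriptions the router can match against
retrievers = [
    {"description": "product manuals and technical specifications",
     "search": lambda q: ["The X100 weighs 450 g."]},
    {"description": "HR policies covering vacation and payroll",
     "search": lambda q: ["Employees receive 25 vacation days per year."]},
]

question = "How many vacation days do employees receive?"
chosen = pick_retriever(question, retrievers)
docs = chosen["search"](question)
print(docs)  # the QA step would now answer the question from these documents
```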
9) Retrieval QA Chain
QA chain to answer a question based on the retrieved documents.
• Vector Store Retriever can be connected with any node under Retriever category
• Any Chat model can be connected under Chat Model category
• Input Moderation can be connected with any node under Moderation category
The Retrieval QA Chain is a specialized sequence of operations designed to answer a question based on the retrieved documents from a knowledge base or corpus. This chain combines retrieval-based techniques with question-answering models to accurately answer user queries by first retrieving relevant documents and then extracting answers from them.
Features
· Document Retrieval: Retrieves relevant documents from a knowledge base or corpus based on the user query.
· Question Answering: Utilizes question-answering models to extract answers from the retrieved documents.
· Contextual Understanding: Considers the context of the user query and retrieved documents to generate accurate answers.
· Error Handling: Manages errors and exceptions during retrieval and question-answering processes to ensure reliable performance.
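For illustration, a roughly equivalent setup with the LangChain Python package looks like the sketch below (it assumes `langchain`, `faiss-cpu`, and an OpenAI API key; the two example texts stand in for a real document store):

```python
from langchain.chains import RetrievalQA
from langchain.chat_models import ChatOpenAI
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import FAISS

# Tiny in-memory corpus standing in for a real knowledge base
texts = ["The Eiffel Tower is 330 metres tall.",
         "The Louvre is the world's largest art museum."]
retriever = FAISS.from_texts(texts, OpenAIEmbeddings()).as_retriever()

qa = RetrievalQA.from_chain_type(
    llm=ChatOpenAI(temperature=0),
    chain_type="stuff",   # put the retrieved documents directly into the prompt
    retriever=retriever,
)
print(qa.run("How tall is the Eiffel Tower?"))
```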
10) SQL Database Chain
Answer questions over a SQL database.
• Any Chat model can be connected under Chat Model category
• Input Moderation can be connected with any node under Moderation category
The SQL Database Chain is a specialized sequence of operations designed to answer questions over a SQL database. This chain enables users to query a SQL database and retrieve relevant information in response to their questions, facilitating efficient data retrieval and analysis.
Features
· SQL Query Execution: Executes SQL queries against the database to retrieve relevant data.
· Question Interpretation: Interprets user questions and translates them into SQL queries.
· Data Retrieval: Retrieves data from the database based on the executed SQL queries.
· Error Handling: Manages errors and exceptions during query execution to ensure reliable performance.
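A self-contained Python sketch of the flow using an in-memory SQLite database. In a real chain the chat model writes the SQL from the user's question, so the hard-coded translator below is only a stand-in for that step:

```python
import sqlite3

def answer_from_db(question, question_to_sql, connection):
    """Translate a question to SQL (normally the chat model's job), run it,
    and return the rows for the model to phrase as a natural-language answer."""
    sql = question_to_sql(question)
    return connection.execute(sql).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (name TEXT, department TEXT, salary INTEGER)")
conn.executemany("INSERT INTO employees VALUES (?, ?, ?)",
                 [("Ada", "Engineering", 95000),
                  ("Grace", "Engineering", 98000),
                  ("Alan", "Sales", 70000)])

# Stub translator standing in for the question-to-SQL step
to_sql = lambda q: "SELECT AVG(salary) FROM employees WHERE department = 'Engineering'"

rows = answer_from_db("What is the average engineering salary?", to_sql, conn)
print(rows)  # [(96500.0,)] -- the chain would phrase this as a sentence
```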
11) Vectara QA Chain
A chain for performing question-answering tasks with Vectara.
• Any Vector Store node can be connected under Vector Store category
• Input Moderation can be connected with any node under Moderation category
The Vectara QA Chain is a specialized sequence of operations designed to perform question-answering tasks using Vectara, a powerful question-answering system. This chain combines the capabilities of Vectara with efficient processing techniques to accurately answer user queries across various domains and topics.
Features
· Question Answering with Vectara: Utilizes Vectara's advanced question-answering capabilities to generate accurate responses to user queries.
· Natural Language Understanding: Interprets user questions in natural language format and converts them into queries understandable by Vectara.
· Contextual Understanding: Considers the context of the user query and previous interactions to provide relevant and accurate answers.
· Error Handling: Manages errors and exceptions during question-answering processes to ensure reliable performance.
12) VectorDB QA Chain
QA chain for vector databases.
• Any Vector Store node can be connected under Vector Store category
• Any Chat model can be connected under Chat Model category
• Input Moderation can be connected with any node under Moderation category
The VectorDB QA Chain is a specialized sequence of operations designed to perform question-answering tasks using vector databases. This chain enables users to query vector databases and retrieve relevant information in response to their questions, facilitating efficient data retrieval and analysis based on vector representations.
Features
· Vector Database Querying: Executes queries against vector databases to retrieve relevant data represented as vectors.
· Question Interpretation: Interprets user questions and translates them into queries understandable by vector databases.
· Vector Similarity Search: Performs similarity search operations to find vectors similar to the query vector.
· Contextual Understanding: Considers the context of the user query and retrieved vectors to generate accurate answers.
· Error Handling: Manages errors and exceptions during query execution to ensure reliable performance.
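A toy Python sketch of the similarity-search step at the heart of this chain: documents and the query are represented as vectors, the closest documents are returned, and the chat model would then answer from them. The three-dimensional vectors are invented stand-ins for real embeddings:

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def similarity_search(query_vector, index, k=1):
    """Return the k stored texts whose vectors are most similar to the query vector."""
    ranked = sorted(index, key=lambda item: cosine(query_vector, item["vector"]), reverse=True)
    return [item["text"] for item in ranked[:k]]

# Toy 3-dimensional "embeddings"; a real chain would use an embedding model
index = [
    {"text": "Refunds are issued within 14 days.", "vector": np.array([0.9, 0.1, 0.0])},
    {"text": "Shipping takes 3-5 business days.",  "vector": np.array([0.1, 0.9, 0.2])},
]

query_vector = np.array([0.85, 0.15, 0.05])   # embedding of "How long do refunds take?"
context = similarity_search(query_vector, index)
print(context)  # the chat model would answer the question from this retrieved text
```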