⛓️ Chains
In the context of chatbots and large language models, "chains" typically refer to sequences of text or conversation turns. These chains are used to store and manage the conversation history.
Here's how chains work:
Conversation History: When a user interacts with a chatbot or language model, the conversation is often represented as a series of text messages or conversation turns. Each message from the user and the model is stored in chronological order to maintain the context of the conversation.
Input and Output: Each chain consists of both user input and model output. The user's input is usually referred to as the "input chain," while the model's responses are stored in the "output chain." This allows the model to refer back to previous messages in the conversation.
Contextual Understanding: By preserving the entire conversation history in these chains, the model can understand the context and refer to earlier messages to provide coherent and contextually relevant responses. This is crucial for maintaining a natural and meaningful conversation with users.
Maximum Length: Chains have a maximum length to manage memory usage and computational resources. When a chain becomes too long, older messages may be removed or truncated to make room for new messages. This can potentially lead to loss of context if important conversation details are removed.
Continuation of Conversation: In a real-time chatbot or language model interaction, the input chain is continually updated with the user's new messages, and the output chain is updated with the model's responses. This allows the model to keep track of the ongoing conversation and respond appropriately.
Chains are a fundamental concept in building and maintaining chatbot and language model conversations. They ensure that the model has access to the context it needs to generate meaningful and context-aware responses, making the interaction more engaging and useful for users.
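The history-buffering and truncation behavior described above can be sketched in a few lines of Python. This is an illustrative sketch only, not Flowise or LangChain code; the `ConversationBuffer` class and its turn limit are assumptions made for the example.

```python
from collections import deque

class ConversationBuffer:
    """Bounded conversation history: oldest turns drop off past max_turns."""

    def __init__(self, max_turns: int = 4):
        # A deque with maxlen evicts the oldest entry automatically
        self.turns = deque(maxlen=max_turns)

    def add_turn(self, role: str, text: str):
        self.turns.append((role, text))

    def as_prompt(self) -> str:
        # Flatten the stored history into the context block a model would see
        return "\n".join(f"{role}: {text}" for role, text in self.turns)

buf = ConversationBuffer(max_turns=4)
buf.add_turn("user", "Hi!")
buf.add_turn("assistant", "Hello! How can I help?")
buf.add_turn("user", "What's a chain?")
buf.add_turn("assistant", "A stored sequence of conversation turns.")
buf.add_turn("user", "Thanks!")  # exceeds max_turns, so "Hi!" is evicted
print(buf.as_prompt())
```

Evicting the oldest turn is exactly the context loss the section warns about; real systems often summarize dropped turns instead of discarding them outright.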
1) Conversation Chain
A conversational chain for chat models, with built-in memory.

• Chat Prompt Template can be connected with any node under Prompt category
• Any Chat model can be connected under Chat model category
• Memory can be connected with any node under Memory category
• Input Moderation can be connected with any node under Moderation category
A Conversation Chain with Memory is a specialized sequence of operations designed to facilitate conversational interactions using chat models. Unlike traditional chatbot architectures, this chain incorporates a memory component to retain context and history, enabling more engaging and coherent conversations over time.
Features
· Conversational Flow: Manages the flow of conversation between the user and the chat model.
· Memory Management: Maintains a memory store to retain context, history, and user preferences.
· Response Generation: Utilizes chat models to generate responses based on user inputs and context.
· Contextual Understanding: Enhances conversation quality by considering the context of previous interactions.
· Error Handling: Manages errors and exceptions during conversation processing.
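The core loop — append the user turn to memory, build a prompt from the full history, call the chat model, store the reply — can be mocked in a few lines. `FakeChatModel` is a hypothetical stub standing in for a real chat model; this is a sketch of the pattern, not the actual implementation.

```python
class FakeChatModel:
    """Hypothetical stand-in for a chat model; reports how much context it saw."""
    def invoke(self, prompt: str) -> str:
        turns = prompt.count("user:")
        return f"(reply with {turns} user turn(s) of context)"

class ConversationChain:
    """Sketch of a conversational chain: memory + prompt building + model call."""
    def __init__(self, model):
        self.model = model
        self.memory = []  # list of (role, text) tuples

    def run(self, user_input: str) -> str:
        self.memory.append(("user", user_input))
        # Build the prompt from the full stored history
        prompt = "\n".join(f"{role}: {text}" for role, text in self.memory)
        reply = self.model.invoke(prompt)
        self.memory.append(("assistant", reply))
        return reply

chain = ConversationChain(FakeChatModel())
chain.run("Hello")
print(chain.run("Do you remember me?"))  # the model now sees 2 user turns
```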
2) Conversational Retrieval QA Chain
A chain for performing question-answering tasks with a retrieval component.

• Vector store Retriever can be connected with any node under Retriever category
• Any Chat model can be connected under Chat model category
• Memory can be connected with any node under Memory category
• Input Moderation can be connected with any node under Moderation category
The Conversational Retrieval QA Chain is a specialized sequence of operations designed for performing question-answering tasks with a retrieval component. This chain combines the capabilities of a question-answering model with a retrieval mechanism to provide accurate and relevant responses to user queries in conversational settings.
Features
· Question Answering Model: Utilizes a question-answering model to generate responses to user queries.
· Retrieval Component: Incorporates a retrieval mechanism to retrieve relevant information from a knowledge base or corpus.
· Contextual Understanding: Enhances response generation by considering the context of the conversation and previous interactions.
· Error Handling: Manages errors and exceptions during question-answering and retrieval processes.
· Feedback Loop: Optionally includes a feedback loop to improve response accuracy over time based on user feedback.
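The retrieve-then-answer flow with memory can be sketched with a toy word-overlap retriever. Everything here is illustrative: the `ConversationalRetrievalQA` class and the stubbed answer step are assumptions, and a real chain would first condense the question using the chat history, then hand the retrieved context to an LLM.

```python
def retrieve(query, docs, k=1):
    """Toy retriever: rank documents by word overlap with the query."""
    words = set(query.lower().replace("?", "").split())
    return sorted(docs,
                  key=lambda d: len(words & set(d.lower().rstrip(".").split())),
                  reverse=True)[:k]

class ConversationalRetrievalQA:
    """Sketch: retriever + chat history; the answering model is stubbed out."""
    def __init__(self, docs):
        self.docs = docs
        self.history = []  # (question, answer) pairs the memory node would hold

    def ask(self, question):
        context = retrieve(question, self.docs)[0]
        answer = f"Based on: {context}"   # a real chain asks an LLM to answer from this
        self.history.append((question, answer))
        return answer

qa = ConversationalRetrievalQA([
    "Flowise lets you build LLM flows visually.",
    "Neo4j stores data as a graph.",
])
print(qa.ask("What does Flowise let you build?"))
```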
3) Graph Cypher QA Chain
A chain used for answering questions by querying a Neo4j graph database using Cypher queries.

• Language Model can be connected with any node under Language model category
• Neo4j Graph can be connected with any node under Graph category
• Cypher Generation Prompt can be connected with any node under Prompts category
• Cypher Generation Model can be connected with any node under Language model category
• QA Prompt can be connected with any node under Prompts category
• QA Model can be connected with any node under Language model category
• Input Moderation can be connected with any node under Moderation category
The Graph Cypher QA Chain enables users to query graph databases using natural language. It converts user questions into Cypher queries, executes them on a Neo4j graph database, and then processes the results to generate meaningful answers. This chain is particularly useful for applications that rely on graph-based knowledge structures and relationship-driven data.
Features
· Natural Language Querying: Allows users to ask questions in natural language which are converted into Cypher queries.
· Graph Database Integration: Connects directly with Neo4j graph databases to retrieve structured relationship data.
· Cypher Query Generation: Automatically generates Cypher queries using language models.
· Contextual Answer Generation: Uses retrieved graph data to generate accurate and meaningful answers.
· Flexible Prompt Customization: Supports custom prompts for both Cypher query generation and question-answering tasks.
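The three-step flow — generate Cypher from the question, run it on the graph, phrase an answer from the rows — can be mocked without a live database. Everything in this sketch is a stand-in: the triple list replaces Neo4j, the fixed template replaces the LLM-driven Cypher Generation Model, and the fake executor replaces the Neo4j driver.

```python
# Tiny in-memory "graph" of (subject, relation, object) triples standing in for Neo4j
graph = [
    ("Alice", "WORKS_AT", "Acme"),
    ("Bob", "WORKS_AT", "Acme"),
    ("Alice", "KNOWS", "Bob"),
]

def generate_cypher(question: str) -> str:
    # Step 1: NL -> Cypher. A real chain uses an LLM with a Cypher Generation
    # Prompt; a fixed template keyed on the last word stands in here.
    name = question.split()[-1].rstrip("?")
    return f"MATCH (p)-[:WORKS_AT]->(c) WHERE p.name = '{name}' RETURN c.name"

def execute(cypher: str):
    # Step 2: fake executor. Pulls the quoted name out and scans the triples.
    name = cypher.split("'")[1]
    return [obj for subj, rel, obj in graph if subj == name and rel == "WORKS_AT"]

def graph_qa(question: str) -> str:
    cypher = generate_cypher(question)
    rows = execute(cypher)
    # Step 3: a real chain hands the rows plus the question to the QA Model
    name = question.split()[-1].rstrip("?")
    return f"{name} works at {rows[0]}"

print(graph_qa("Who employs Alice?"))
```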
4) LLM Chain
Chain to run queries against LLMs.

• Tool can be connected with any node under Tools category
• Any Prompt node can be connected under Prompt category
• Any Output Parser can be connected with any node under Output Parser category
• Input Moderation can be connected with any node under Moderation category
The LLM Chain is a specialized sequence of operations designed to run queries against Large Language Models (LLMs). This chain enables users to interact with LLMs to generate responses, perform tasks, or retrieve information by providing queries in natural language.
Features
· Query Execution: Executes queries against LLMs to generate responses or perform tasks.
· Natural Language Understanding: Interprets user queries in natural language format.
· Response Generation: Utilizes LLMs to generate contextually relevant responses based on user queries.
· Contextual Understanding: Considers the context of previous interactions to provide more relevant and coherent responses.
· Error Handling: Manages errors and exceptions during query execution to ensure smooth interactions.
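The chain's three pieces — a prompt template, the LLM call, and an optional output parser — compose in a straightforward way. This is a toy sketch under assumed names: `FakeLLM` is a hypothetical stub, not a real model client.

```python
class FakeLLM:
    """Hypothetical stand-in for an LLM client."""
    def invoke(self, prompt: str) -> str:
        return prompt.upper()  # replaces a real model call

class LLMChain:
    """Sketch: format a prompt template, call the model, parse the output."""
    def __init__(self, template, llm, parser=None):
        self.template, self.llm, self.parser = template, llm, parser

    def run(self, **variables):
        prompt = self.template.format(**variables)   # fill the Prompt node's template
        output = self.llm.invoke(prompt)             # query the LLM
        return self.parser(output) if self.parser else output  # Output Parser step

chain = LLMChain("Translate to shouting: {text}", FakeLLM(), parser=str.strip)
print(chain.run(text="hello"))  # -> TRANSLATE TO SHOUTING: HELLO
```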
5) Multi Prompt Chain
A chain that automatically picks an appropriate prompt from multiple prompt templates.

• Vector store Retriever can be connected with any node under Retriever category
• Any Chat model can be connected under Chat model category
• Input Moderation can be connected with any node under Moderation category
The Multi-Prompt Chain is a specialized sequence of operations designed to automatically select an appropriate prompt from multiple prompt templates. This chain facilitates the generation of diverse outputs by leveraging different prompt variations tailored to specific tasks or scenarios.
Features
· Prompt Selection: Automatically selects an appropriate prompt from a collection of prompt templates based on the task or scenario.
· Diverse Output: Generates diverse outputs by using different prompt variations to elicit varied responses from language models.
· Task-specific Prompts: Tailors prompt templates to specific tasks or scenarios to improve response relevance and quality.
· Contextual Understanding: Considers the context of the conversation or task to select the most relevant prompt.
· Error Handling: Manages errors and exceptions during prompt selection to ensure smooth operation.
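Prompt selection can be sketched with a simple router. In a real chain an LLM classifies the question to choose the template; the keyword matching below is a deliberately crude stand-in, and the template names are invented for the example.

```python
prompts = {
    "math": "You are a math tutor. Question: {q}",
    "history": "You are a historian. Question: {q}",
    "default": "Answer the question helpfully: {q}",
}

def route(question: str) -> str:
    # A real chain asks an LLM router to classify; keyword matching stands in here
    q = question.lower()
    if any(w in q for w in ("sum", "integral", "equation")):
        return "math"
    if any(w in q for w in ("war", "empire", "century")):
        return "history"
    return "default"

def build_prompt(question: str) -> str:
    # Fill the selected template with the user's question
    return prompts[route(question)].format(q=question)

print(build_prompt("What is the sum of 2 and 3?"))
```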
6) Multi Retrieval QA Chain
QA Chain that automatically picks an appropriate vector store from multiple retrievers.

• Vector store Retriever can be connected with any node under Retriever category
• Any Chat model can be connected under Chat model category
• Input Moderation can be connected with any node under Moderation category
The Multi Retrieval QA Chain is a specialized sequence of operations designed to perform question-answering tasks by automatically selecting an appropriate vector store from multiple retrievers. This chain combines the capabilities of retrieval-based question-answering systems with the flexibility of choosing from different vector stores to retrieve relevant information for answering user queries.
Features
· Vector Store Selection: Automatically selects an appropriate vector store from a collection of retrievers based on the query and context.
· Question Answering: Utilizes the selected vector store to retrieve relevant information for generating responses to user queries.
· Contextual Understanding: Considers the context of the conversation or query to select the most relevant vector store.
· Error Handling: Manages errors and exceptions during retrieval to ensure accurate and reliable question-answering.
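The two-stage idea — first route the query to the right vector store, then retrieve from it — can be mocked with word overlap in place of embeddings. The store names and routing heuristic are assumptions for the sketch; a real chain uses a model-driven router and actual vector similarity.

```python
stores = {
    "animals": ["Cats sleep a lot.", "Dogs are loyal."],
    "space": ["Mars is red.", "Jupiter is the largest planet."],
}

def _words(text: str):
    return set(text.lower().replace("?", "").replace(".", "").split())

def pick_store(query: str) -> str:
    # Toy router: choose the store whose documents share the most words with the query
    return max(stores, key=lambda name: sum(len(_words(query) & _words(d))
                                            for d in stores[name]))

def multi_retrieval_qa(query: str) -> str:
    docs = stores[pick_store(query)]   # stage 1: select a vector store
    # stage 2: retrieve the best document (a real chain then answers with an LLM)
    return max(docs, key=lambda d: len(_words(query) & _words(d)))

print(multi_retrieval_qa("Which planet is the largest?"))
```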
7) Retrieval QA Chain
QA chain to answer a question based on the retrieved documents.

• Vector store Retriever can be connected with any node under Retriever category
• Any Chat model can be connected under Chat model category
• Input Moderation can be connected with any node under Moderation category
The Retrieval QA Chain is a specialized sequence of operations designed to answer a question based on the retrieved documents from a knowledge base or corpus. This chain combines retrieval-based techniques with question-answering models to accurately answer user queries by first retrieving relevant documents and then extracting answers from them.
Features
· Document Retrieval: Retrieves relevant documents from a knowledge base or corpus based on the user query.
· Question Answering: Utilizes question-answering models to extract answers from the retrieved documents.
· Contextual Understanding: Considers the context of the user query and retrieved documents to generate accurate answers.
· Error Handling: Manages errors and exceptions during retrieval and question-answering processes to ensure reliable performance.
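The retrieve-then-answer pipeline reduces to two steps, sketched below with a word-overlap retriever in place of a vector store. The document list and scoring are invented for illustration; a real chain passes the retrieved context plus the question to an LLM instead of returning the context directly.

```python
docs = [
    "The Eiffel Tower is in Paris.",
    "The Colosseum is in Rome.",
    "Python was created by Guido van Rossum.",
]

def retrieve(query: str, k: int = 1):
    # Toy retriever: rank documents by shared words with the query
    words = set(query.lower().replace("?", "").split())
    scored = sorted(docs,
                    key=lambda d: len(words & set(d.lower().replace(".", "").split())),
                    reverse=True)
    return scored[:k]

def retrieval_qa(query: str) -> str:
    context = retrieve(query)[0]          # step 1: fetch the most relevant document
    # step 2: a real chain hands the context and question to a QA model
    return f"Answer based on: {context}"

print(retrieval_qa("Where is the Eiffel Tower?"))
```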