
🧪Response Synthesizer

Response Synthesizer nodes are responsible for sending the query, nodes, and prompt templates to the LLM to generate a response. There are 4 modes for generating a response:



1) Refine

Create and refine an answer by sequentially going through each retrieved text chunk.

Pros: Good for more detailed answers

Cons: Separate LLM call per Node (can be expensive)
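For reference, here is a minimal sketch of selecting this mode directly in the underlying LlamaIndex Python library. The query text, the chunk texts, and an LLM configured through the library's default setup (e.g. an OPENAI_API_KEY environment variable) are illustrative assumptions, not part of this page:

```python
from llama_index.core import get_response_synthesizer
from llama_index.core.schema import NodeWithScore, TextNode

# Refine: one LLM call per retrieved node, each call refining
# the answer produced by the previous one.
synthesizer = get_response_synthesizer(response_mode="refine")

# Hypothetical retrieved chunks; in practice these come from a retriever.
nodes = [
    NodeWithScore(node=TextNode(text="First retrieved chunk..."), score=0.9),
    NodeWithScore(node=TextNode(text="Second retrieved chunk..."), score=0.8),
]

response = synthesizer.synthesize("What does the document say?", nodes=nodes)
print(str(response))
```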

2) Compact and Refine

This is the default when no Response Synthesizer is explicitly defined.

Compact the prompt during each LLM call by stuffing in as many text chunks as can fit within the maximum prompt size. If there are too many chunks to stuff into one prompt, "create and refine" an answer by going through multiple compact prompts.

Pros: The same as Refine (good for more detailed answers), and should result in fewer LLM calls

Cons: Can be expensive due to the multiple LLM calls
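In the Python library this corresponds to the default `compact` mode; the sketch below differs from the previous one only in the mode selected (same assumptions apply):

```python
from llama_index.core import get_response_synthesizer

# "compact" stuffs as many chunks as fit into each prompt, then
# falls back to refine-style follow-up calls for any overflow.
synthesizer = get_response_synthesizer(response_mode="compact")
```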

3) Simple Response Builder

Using a collection of text segments and a query, execute the query on each segment, gathering the responses into an array. Return a combined string containing all responses.

Pros: Useful for individually querying each text segment with the same query

Cons: Not suitable for complex and detailed answers
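In LlamaIndex.TS this builder is called `SimpleResponseBuilder`; in the Python library, the closest match for the per-segment behaviour described above appears to be the `accumulate` mode (a naming mapping we are assuming here, same setup assumptions as before):

```python
from llama_index.core import get_response_synthesizer

# "accumulate" runs the same query against each chunk separately
# and joins the individual answers into one combined string.
synthesizer = get_response_synthesizer(response_mode="accumulate")
```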

4) Tree Summarize

When provided with text chunks and a query, recursively build a tree structure and return the root node as the result.

Pros: Beneficial for summarization tasks

Cons: Accuracy of the answer might be lost during traversal of the tree structure
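A sketch of the corresponding `tree_summarize` mode in the Python library (same assumptions as above):

```python
from llama_index.core import get_response_synthesizer

# "tree_summarize" works bottom-up: chunks are summarized in
# batches, parent nodes summarize their children, and the answer
# at the root of the tree is returned.
synthesizer = get_response_synthesizer(response_mode="tree_summarize")
```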
