THub Technical Documentation
  • Introduction
  • 🔗 LangChain
    • 🕵️ Agents
    • 🗄️Cache
    • ⛓️Chains
    • 🗨️Chat Models
    • 📁Document Loaders
    • 🧬Embeddings
    • Graph
    • 🧠Large Language Models(LLM)
    • 💾Memory
    • 🛡️Moderation
    • 👥Multi Agents
    • 🔀Output Parsers
    • 📝Prompts
    • 📊Record Managers
    • 📑Retrieval-Augmented Generation
    • 🔍Retrivers
    • Sequential Agent
    • ✂️Text Splitters
    • 🛠️Tools
    • 🔌Tools (MCP)
    • 🗃️Vector Stores
  • 🦙LLama Index
    • 🕵️ Agents
    • 🗨️Chat Models
    • 🧬Embeddings
    • 🚀Engine's
    • 🧪Response Synthesizer
    • 🛠️Tools
    • 🗃️Vector Stores
Powered by GitBook
On this page
  1. 🔗 LangChain

🛡️Moderation

Moderation nodes are used to check whether the input or output consists of harmful or inappropriate content.

Previous💾MemoryNext👥Multi Agents

Last updated 11 months ago

1)OpenAI Moderation

Check whether content complies with OpenAI usage policies.

2)Simple Prompt Moderation

Check whether input consists of any text from Deny list, and prevent being sent to LLM.