Adaptive AI Chat Workflow with Contextual Retrieval


This n8n workflow implements an advanced Adaptive Retrieval-Augmented Generation (RAG) system designed for smart, context-aware AI chat responses. It classifies user queries into four categories (Factual, Analytical, Opinion, and Contextual) and applies a tailored retrieval and answer-generation strategy to each type, so that every answer is relevant and precise for the kind of question asked.

The process begins by classifying the query with a Google Gemini model, then routes it to the matching strategy:

- Factual: query refinement
- Analytical: sub-question generation
- Opinion: perspective analysis
- Contextual: context inference

Each strategy uses a language model tuned for its case and maintains a memory buffer to support ongoing conversations. The most relevant documents are then retrieved from a Qdrant vector store via embedding-based search, and their contents are concatenated into a context. Finally, a tailored language model generates a detailed, accurate response that draws on the user query, the conversation context, and the retrieved knowledge.

This workflow is well suited to intelligent, adaptable chatbots that provide precise, comprehensive, or nuanced answers depending on user intent, for example in customer support, knowledge bases, or AI assistants.
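The classify-then-route step can be sketched in JavaScript (the language of n8n Code nodes). This is a minimal illustration only: `classifyQuery` and the per-category strategy functions are toy stand-ins for the Google Gemini classifier and strategy nodes in the actual workflow, not their real prompts or outputs.

```javascript
// Toy strategy table: each category maps to a function that prepares
// the retrieval input. In the real workflow these are LLM-backed nodes.
const STRATEGIES = {
  Factual: (q) => `Refined query: ${q}`,                  // query refinement
  Analytical: (q) => [`${q} (sub-question 1)`, `${q} (sub-question 2)`], // sub-questions
  Opinion: (q) => `Perspectives to cover for: ${q}`,      // perspective analysis
  Contextual: (q) => `Inferred context for: ${q}`,        // context inference
};

// Stub classifier: keyword heuristics stand in for the Gemini model call.
function classifyQuery(query) {
  if (/\b(why|compare|how does)\b/i.test(query)) return "Analytical";
  if (/\b(best|should|think)\b/i.test(query)) return "Opinion";
  if (/\b(this|that|it|previous)\b/i.test(query)) return "Contextual";
  return "Factual";
}

// Route a query: classify it, then apply the matching strategy,
// mirroring the workflow's switch node.
function route(query) {
  const category = classifyQuery(query);
  return { category, retrievalInput: STRATEGIES[category](query) };
}
```

In the workflow itself this routing is done by a `switch` node keyed on the classifier's output, with a separate Gemini chain per branch; the sketch just makes the control flow explicit.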

Node Count

>20 Nodes

Nodes Used

@n8n/n8n-nodes-langchain.agent, @n8n/n8n-nodes-langchain.chatTrigger, @n8n/n8n-nodes-langchain.embeddingsGoogleGemini, @n8n/n8n-nodes-langchain.lmChatGoogleGemini, @n8n/n8n-nodes-langchain.memoryBufferWindow, @n8n/n8n-nodes-langchain.vectorStoreQdrant, executeWorkflowTrigger, respondToWebhook, set, stickyNote, summarize, switch
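The retrieval step described in the overview, embedding-based search against a vector store followed by concatenating document contents into a context, can be sketched as follows. This is a self-contained toy: `embed` is a character-frequency stand-in for the Google Gemini embeddings node, and the in-memory `store` array replaces the Qdrant vector store.

```javascript
// Toy embedding: a 26-dimensional letter-frequency vector.
// The real workflow uses the Google Gemini embeddings node instead.
function embed(text) {
  const v = new Array(26).fill(0);
  for (const c of text.toLowerCase()) {
    const i = c.charCodeAt(0) - 97;
    if (i >= 0 && i < 26) v[i] += 1;
  }
  return v;
}

// Cosine similarity between two equal-length vectors.
function cosine(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] ** 2;
    nb += b[i] ** 2;
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb) || 1);
}

// In-memory stand-in for the Qdrant collection.
const store = [
  { text: "Qdrant is a vector database for similarity search." },
  { text: "n8n is a workflow automation tool." },
].map((d) => ({ ...d, vector: embed(d.text) }));

// Retrieve the top-k most similar documents and concatenate
// their contents into a single context string.
function retrieveContext(query, k = 2) {
  const qv = embed(query);
  return store
    .map((d) => ({ ...d, score: cosine(qv, d.vector) }))
    .sort((a, b) => b.score - a.score)
    .slice(0, k)
    .map((d) => d.text)
    .join("\n\n");
}
```

The concatenated string returned here corresponds to the context that the workflow's final Gemini chat node receives alongside the user query and conversation memory.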
