Adaptive RAG Workflow for Contextual AI Responses


This n8n workflow implements an Adaptive Retrieval-Augmented Generation (RAG) system that tailors AI responses to the type of query it receives. It classifies each user query as Factual, Analytical, Opinion-based, or Contextual, then applies a retrieval and generation strategy suited to that category. The pipeline classifies the incoming query with a Google Gemini model, routes it through a category-specific strategy (query refinement, sub-question generation, perspective identification, or context inference), retrieves relevant documents from a Qdrant vector store, and generates a refined response with Gemini. By adapting retrieval and generation to the nature of the query, the workflow improves accuracy and relevance, making it well suited to chatbots, knowledge bases, and virtual assistants that need nuanced answers.
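
Outside of n8n, the classify-then-route step could look roughly like the following. This is a minimal Python sketch assuming the google-generativeai client; the category names mirror the workflow, while the strategy prompts and fallback behaviour are illustrative placeholders, not the workflow's actual prompts.

```python
# Sketch of the classification and routing step (switch-node analogue).
# Assumptions: google-generativeai client, API key supplied by the caller,
# and placeholder strategy prompts for each category.
import google.generativeai as genai

genai.configure(api_key="YOUR_GEMINI_API_KEY")  # assumption: supplied via env/config
model = genai.GenerativeModel("gemini-1.5-flash")

CATEGORIES = ["Factual", "Analytical", "Opinion", "Contextual"]

# One retrieval-preparation strategy per category, mirroring the switch branches.
STRATEGY_PROMPTS = {
    "Factual": "Rewrite this query to be precise and keyword-rich for retrieval:\n{q}",
    "Analytical": "Break this query into three focused sub-questions:\n{q}",
    "Opinion": "List the distinct perspectives relevant to this query:\n{q}",
    "Contextual": "Infer the implied context behind this query and restate it explicitly:\n{q}",
}

def classify(query: str) -> str:
    """Ask Gemini to label the query with exactly one category."""
    prompt = (
        "Classify the query as exactly one of: "
        + ", ".join(CATEGORIES)
        + ".\nQuery: " + query + "\nAnswer with the category name only."
    )
    label = model.generate_content(prompt).text.strip()
    return label if label in CATEGORIES else "Factual"  # illustrative fallback

def prepare_retrieval_query(query: str) -> tuple[str, str]:
    """Route the query through its category's strategy before retrieval."""
    category = classify(query)
    refined = model.generate_content(STRATEGY_PROMPTS[category].format(q=query)).text
    return category, refined
```

In the workflow itself this routing is handled by a switch node over the classifier's output, with each branch feeding a different prompt chain.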

Node Count

>20 Nodes

Nodes Used

@n8n/n8n-nodes-langchain.agent, @n8n/n8n-nodes-langchain.chatTrigger, @n8n/n8n-nodes-langchain.embeddingsGoogleGemini, @n8n/n8n-nodes-langchain.lmChatGoogleGemini, @n8n/n8n-nodes-langchain.memoryBufferWindow, @n8n/n8n-nodes-langchain.vectorStoreQdrant, executeWorkflowTrigger, respondToWebhook, set, stickyNote, summarize, switch
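
To show how the embeddingsGoogleGemini, vectorStoreQdrant, and lmChatGoogleGemini nodes fit together, here is a rough Python equivalent of the retrieve-and-generate branch. It assumes a running Qdrant instance; the collection name "docs", the payload field "text", and the top_k value are hypothetical placeholders.

```python
# Rough equivalent of the retrieval + generation branch.
# Assumptions: qdrant-client and google-generativeai installed, a local Qdrant
# instance, and a "docs" collection whose points carry a "text" payload field.
import google.generativeai as genai
from qdrant_client import QdrantClient

genai.configure(api_key="YOUR_GEMINI_API_KEY")
qdrant = QdrantClient(url="http://localhost:6333")

def retrieve(query_text: str, top_k: int = 4) -> list[str]:
    """Embed the refined query with Gemini and fetch the nearest documents."""
    vector = genai.embed_content(
        model="models/text-embedding-004", content=query_text
    )["embedding"]
    hits = qdrant.search(collection_name="docs", query_vector=vector, limit=top_k)
    return [hit.payload.get("text", "") for hit in hits]

def answer(user_query: str, refined_query: str, category: str) -> str:
    """Generate the final response from the retrieved context, framed by category."""
    context = "\n\n".join(retrieve(refined_query))
    prompt = (
        f"You are answering a {category} question.\n"
        f"Context:\n{context}\n\n"
        f"Question: {user_query}\nAnswer:"
    )
    model = genai.GenerativeModel("gemini-1.5-flash")
    return model.generate_content(prompt).text
```

In the workflow, conversation history from the memoryBufferWindow node would also be passed into the final generation step, which this sketch omits for brevity.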
