AI Model Selector for Dynamic Request Routing


This n8n workflow is designed to intelligently route user chat messages to the most suitable large language model (LLM) based on the type of request. It begins with a chat trigger that fires when a message is received and captures the input text. A chain LLM node then classifies the input by assigning it a request type, such as ‘coding’, ‘reasoning’, ‘general’, or ‘search’.
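As a rough illustration of the classification step (not the workflow's actual node configuration), the classifier's structured output can be thought of as a small JSON object holding one of the request-type labels named above; the schema shape and function name below are hypothetical.

```typescript
// Illustrative sketch only: the request-type labels come from the description
// above; the JSON shape and helper name are assumptions, not the workflow's
// actual structured output parser configuration.
type RequestType = "coding" | "reasoning" | "general" | "search";

interface Classification {
  requestType: RequestType;
}

// Parse the classifier LLM's structured output, falling back to "general"
// when the result is missing or not one of the expected labels.
function parseClassification(raw: string): Classification {
  const allowed: RequestType[] = ["coding", "reasoning", "general", "search"];
  try {
    const parsed = JSON.parse(raw);
    if (allowed.includes(parsed?.requestType)) {
      return { requestType: parsed.requestType };
    }
  } catch {
    // Malformed output: fall through to the default below.
  }
  return { requestType: "general" };
}
```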

A structured output parser extracts the classification result, which a model selector node uses to choose the appropriate AI model dynamically. The selection rules are keyed on the request type and route the input to models such as OpenAI GPT-4, Google Gemini, Anthropic Claude, or other providers reachable through OpenRouter (for example, Perplexity). A windowed buffer memory node maintains context across turns of an ongoing conversation.
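The routing step can be sketched as a simple lookup from request type to model, as below; the mapping and model identifiers here are assumptions for illustration, since the real rules live inside the workflow's model selector node.

```typescript
// Illustrative routing rules only: the model identifiers and the specific
// type-to-model mapping are assumptions, not the workflow's actual settings.
type RequestType = "coding" | "reasoning" | "general" | "search";

const MODEL_BY_REQUEST_TYPE: Record<RequestType, string> = {
  coding: "openai/gpt-4",          // e.g. handled by the OpenAI chat model node
  reasoning: "anthropic/claude",   // e.g. handled by the Anthropic chat model node
  general: "google/gemini",        // e.g. handled by the Google Gemini chat model node
  search: "openrouter/perplexity", // e.g. a search-oriented model via OpenRouter
};

// Pick the model for a classified request, defaulting to the general model.
function selectModel(requestType: RequestType): string {
  return MODEL_BY_REQUEST_TYPE[requestType] ?? MODEL_BY_REQUEST_TYPE.general;
}
```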

This workflow enables a flexible and efficient AI-powered chat system, well suited to environments where multiple AI services are integrated to handle diverse user queries. It ensures each response is generated by the model best suited to the request, improving answer quality and resource usage.

Node Count

11 – 20 Nodes

Nodes Used

@n8n/n8n-nodes-langchain.agent, @n8n/n8n-nodes-langchain.chainLlm, @n8n/n8n-nodes-langchain.chatTrigger, @n8n/n8n-nodes-langchain.lmChatAnthropic, @n8n/n8n-nodes-langchain.lmChatGoogleGemini, @n8n/n8n-nodes-langchain.lmChatOpenAi, @n8n/n8n-nodes-langchain.lmChatOpenRouter, @n8n/n8n-nodes-langchain.memoryBufferWindow, @n8n/n8n-nodes-langchain.modelSelector, @n8n/n8n-nodes-langchain.outputParserStructured, stickyNote
