This n8n workflow is designed to intelligently route user chat messages to the most suitable large language model (LLM) based on the type of request. It begins with a webhook trigger that activates when a chat message is received, capturing the input text. The workflow then classifies this input by assigning a request type—such as ‘coding’, ‘reasoning’, ‘general’, or ‘search’—using a chain LLM node.
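The classification step can be sketched in plain JavaScript (the language of n8n Code nodes). This is a minimal, hypothetical stand-in for the structured output parser: it assumes the chain LLM node returns JSON like `{"requestType": "coding"}` and validates that value against the allowed categories, falling back to `general` when the output is missing or malformed.

```javascript
// Allowed request types, matching the categories the chain LLM node assigns.
const REQUEST_TYPES = ["coding", "reasoning", "general", "search"];

// Hypothetical parser for the LLM's structured output. Real n8n structured
// output parsers enforce a schema; this sketch only mimics that behavior.
function parseClassification(llmOutput) {
  try {
    const { requestType } = JSON.parse(llmOutput);
    // Unknown or missing categories fall back to the safe default.
    return REQUEST_TYPES.includes(requestType) ? requestType : "general";
  } catch {
    return "general"; // malformed JSON also falls back to "general"
  }
}
```

A fallback default matters here because any downstream routing depends on the classification being one of the known categories.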
A structured output parser extracts the classification result, which a model selector node uses to choose an AI model dynamically. The selection rules are keyed on the request type, routing the input to models such as GPT-4, Google Bard, Perplexity, Anthropic Claude, or other specialized AI services. Memory nodes maintain context across turns of an ongoing conversation.
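The model selector's routing rules amount to a lookup table from request type to model. The sketch below illustrates that idea; the model identifiers are purely illustrative assumptions, not the workflow's actual configuration.

```javascript
// Hypothetical routing table mirroring the model selector node's rules.
// Model names are illustrative placeholders, not confirmed identifiers.
const MODEL_ROUTES = {
  coding: "gpt-4",
  reasoning: "claude-3-opus",
  search: "perplexity-sonar",
  general: "gpt-4o-mini",
};

// Pick the model for a request type, defaulting to the general-purpose one.
function selectModel(requestType) {
  return MODEL_ROUTES[requestType] ?? MODEL_ROUTES.general;
}
```

Keeping the rules in a single table makes it easy to add or swap models without touching the rest of the workflow.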
This workflow enables a flexible and efficient AI-powered chat system, well suited to environments where multiple AI services are integrated to handle diverse user queries. By ensuring each response is generated by the most suitable model, it improves answer quality while keeping resource usage in check.