Smart Local LLM Router for Privacy-Focused AI Conversations

This n8n workflow automates the routing of user prompts to local large language models (LLMs) in a privacy-centric setup built on Ollama. When a chat message arrives, the workflow analyzes the input and selects the most suitable Ollama model for the request. It combines a dynamic decision framework, classification logic, and memory buffers to maintain context across interactions. The setup suits AI enthusiasts, developers, and privacy-conscious users who want to run multiple specialized LLMs locally without transmitting data externally. Practical use cases include intelligent routing for reasoning, multilingual tasks, coding assistance, and visual data analysis, all handled securely on a local machine.
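To illustrate the routing idea outside of n8n, the sketch below shows a minimal classify-then-route pattern against Ollama's local chat endpoint. It is not the workflow itself: the model names (`llama3.1`, `qwen2.5-coder`, `llava`) and the keyword rules are placeholder assumptions standing in for the workflow's LLM-driven classification, and it assumes Ollama is running on its default port.

```typescript
// Minimal sketch: classify a prompt and route it to a local Ollama model.
// Model names and keyword rules are illustrative placeholders; in the n8n
// workflow this decision is made by the agent node, and the memory buffer
// node keeps per-session context.

type Route = { model: string; reason: string };

function pickModel(prompt: string): Route {
  const p = prompt.toLowerCase();
  if (/\b(code|function|bug|regex|typescript|python)\b/.test(p)) {
    return { model: "qwen2.5-coder", reason: "coding request" };
  }
  if (/\b(image|photo|screenshot|diagram)\b/.test(p)) {
    return { model: "llava", reason: "visual input" };
  }
  return { model: "llama3.1", reason: "general reasoning / multilingual" };
}

async function ask(prompt: string): Promise<string> {
  const { model } = pickModel(prompt);
  // Ollama's local chat API; nothing leaves the machine.
  const res = await fetch("http://localhost:11434/api/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model,
      messages: [{ role: "user", content: prompt }],
      stream: false,
    }),
  });
  const data = await res.json();
  return data.message.content;
}

ask("Write a TypeScript function that reverses a string").then(console.log);
```

In the actual workflow, the same routing decision is delegated to the chat trigger and agent nodes listed below, so no custom code is required.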
| Node Count | 11 – 20 Nodes |
|---|---|
| Nodes Used | @n8n/n8n-nodes-langchain.agent, @n8n/n8n-nodes-langchain.chatTrigger, @n8n/n8n-nodes-langchain.lmChatOllama, @n8n/n8n-nodes-langchain.memoryBufferWindow, stickyNote |