# Automated Extraction of Personal Data Using Self-Hosted LLM

This n8n workflow automatically extracts structured personal data from chat messages using a self-hosted language model. It starts with a chat trigger that fires when a message is received, then passes the message to a language model (Mistral NeMo served via Ollama) to analyze it and extract the relevant details. A structured output parser validates and organizes the extracted information against a predefined JSON schema. If the model's response does not match the required format, an auto-fixing parser re-prompts the model to correct its output, keeping the data consistent and accurate. This setup is practical for CRM integrations, customer-service automation, or any system that needs detailed data capture from conversational input.
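The same trigger → LLM → structured parser → auto-fix chain can be sketched outside n8n for readers who want to see the moving parts. The snippet below is a minimal illustration using LangChain's Python API with an Ollama-served Mistral NeMo model; the `Person` schema, prompt wording, and Ollama base URL are assumptions made for the example, not values taken from the workflow itself.

```python
# Minimal sketch of the workflow's extraction pattern using LangChain's Python
# library rather than the n8n nodes. The Person fields, prompt text, and
# base_url are illustrative assumptions.
from typing import Optional

from langchain_core.prompts import PromptTemplate
from langchain_core.output_parsers import PydanticOutputParser
from langchain.output_parsers import OutputFixingParser
from langchain_ollama import ChatOllama
from pydantic import BaseModel


class Person(BaseModel):
    """Assumed target schema for the extracted personal data."""
    name: str
    email: Optional[str] = None
    phone: Optional[str] = None


# Self-hosted model served by Ollama (plays the role of the lmChatOllama node).
llm = ChatOllama(model="mistral-nemo", base_url="http://localhost:11434")

# The structured parser enforces the JSON schema; the fixing parser re-prompts
# the model when the raw output fails to parse (like outputParserAutofixing).
parser = PydanticOutputParser(pydantic_object=Person)
fixing_parser = OutputFixingParser.from_llm(parser=parser, llm=llm)

prompt = PromptTemplate(
    template=(
        "Extract the person's details from the chat message.\n"
        "{format_instructions}\n"
        "Message: {message}"
    ),
    input_variables=["message"],
    partial_variables={"format_instructions": parser.get_format_instructions()},
)

# Equivalent of the chainLlm node: prompt -> model -> validated, auto-fixed output.
chain = prompt | llm | fixing_parser
result = chain.invoke(
    {"message": "Hi, I'm Jane Doe, you can reach me at jane@example.com"}
)
print(result)  # e.g. Person(name='Jane Doe', email='jane@example.com', phone=None)
```

In n8n the same roles are filled by the chat trigger, the LLM chain with the Ollama chat model attached, and the structured output parser wrapped by the auto-fixing parser.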
| Property | Value |
|---|---|
| Node Count | 11 – 20 Nodes |
| Nodes Used | @n8n/n8n-nodes-langchain.chainLlm, @n8n/n8n-nodes-langchain.chatTrigger, @n8n/n8n-nodes-langchain.lmChatOllama, @n8n/n8n-nodes-langchain.outputParserAutofixing, @n8n/n8n-nodes-langchain.outputParserStructured, noOp, set, stickyNote |