Buffered Telegram AI Chat for Seamless Conversations


This n8n workflow provides a buffered, coherent chat experience via Telegram by aggregating multiple user messages sent in quick succession. The process begins with a Telegram trigger receiving messages, which are then stored in a Supabase message queue. The workflow waits for a configurable period (default 10 seconds) to collect potential follow-up messages. Once the wait time expires without additional messages, all queued messages are aggregated into a single conversation context, processed through an OpenAI GPT-4 model, and answered with one unified reply. The conversation history is stored in Postgres for context continuity, and the original messages are cleared from the queue to prepare for the next user interaction. This setup is ideal for chatbot implementations where users often send several brief messages instead of one long message, allowing for more natural and seamless AI conversations.
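To illustrate the buffering pattern outside of n8n, the following TypeScript sketch mimics the same steps using the official Supabase and OpenAI clients. The table name message_queue, its columns, and the environment variable names are illustrative assumptions, not taken from the workflow itself; the actual workflow implements these steps with its Supabase, Wait, If, Sort, Aggregate, and AI Agent nodes.

```typescript
// Minimal sketch of the buffer-then-reply logic, assuming a Supabase table
// named "message_queue" with columns id, chat_id, text, created_at.
// All names here are illustrative assumptions.
import { createClient } from "@supabase/supabase-js";
import OpenAI from "openai";

const supabase = createClient(process.env.SUPABASE_URL!, process.env.SUPABASE_KEY!);
const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

const WAIT_MS = 10_000; // configurable buffer window (default 10 seconds)

async function handleIncomingMessage(chatId: number, text: string) {
  // 1. Queue the incoming Telegram message immediately.
  const { data: inserted, error } = await supabase
    .from("message_queue")
    .insert({ chat_id: chatId, text })
    .select()
    .single();
  if (error) throw error;

  // 2. Wait for possible follow-up messages (the Wait node's role).
  await new Promise((resolve) => setTimeout(resolve, WAIT_MS));

  // 3. Only the handler for the latest queued message proceeds; earlier
  //    invocations see a newer row and exit (the If node's role).
  const { data: latest } = await supabase
    .from("message_queue")
    .select("id")
    .eq("chat_id", chatId)
    .order("created_at", { ascending: false })
    .limit(1)
    .single();
  if (!latest || latest.id !== inserted.id) return;

  // 4. Sort and aggregate all queued messages into one conversation turn.
  const { data: queued } = await supabase
    .from("message_queue")
    .select("text")
    .eq("chat_id", chatId)
    .order("created_at", { ascending: true });
  const combined = (queued ?? []).map((m) => m.text).join("\n");

  // 5. Generate a single unified reply with GPT-4.
  const completion = await openai.chat.completions.create({
    model: "gpt-4",
    messages: [{ role: "user", content: combined }],
  });
  const reply = completion.choices[0].message.content;

  // 6. Clear the queue so the next interaction starts fresh, then send
  //    `reply` back to the user via the Telegram Bot API.
  await supabase.from("message_queue").delete().eq("chat_id", chatId);
  return reply;
}
```

The per-message wait-and-check acts as a debounce: every message schedules a potential reply, but only the handler belonging to the last message in a burst survives the check, so the model is called exactly once per burst with the full combined context.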

Node Count

>20 Nodes

Nodes Used

@n8n/n8n-nodes-langchain.agent, @n8n/n8n-nodes-langchain.lmChatOpenAi, @n8n/n8n-nodes-langchain.memoryPostgresChat, aggregate, if, noOp, sort, stickyNote, supabase, telegram, telegramTrigger, wait
