Token Usage Metrics Extraction and Workflow Monitoring


This workflow extracts and analyzes token usage metrics from AI model executions in n8n. Starting from a specific execution ID, it retrieves the detailed execution data for that run and processes it to calculate total token usage across models such as Gemini and OpenAI. A custom code node recursively searches the execution data for token usage details, aggregates the totals, and identifies the models used. The workflow is aimed at AI developers and data analysts who need to monitor, audit, or optimize token consumption for AI model calls. It can be integrated into larger automation systems to track AI usage and generate detailed reports on token consumption, helping to manage costs and refine model deployment strategies.
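The custom code node itself is not shown on this page. As a rough illustration, the TypeScript sketch below shows one way such a recursive aggregation could be written; the field names (tokenUsage, promptTokens, completionTokens, totalTokens, model) are assumptions chosen for the example, not taken from this workflow, and would need to be adapted to the actual execution JSON.

// Hypothetical sketch of the recursive token-usage aggregation described above.
// Field names are assumptions; adjust them to match the real execution data.
interface TokenTotals {
  promptTokens: number;
  completionTokens: number;
  totalTokens: number;
  models: Set<string>;
}

function collectTokenUsage(node: unknown, totals: TokenTotals): void {
  if (node === null || typeof node !== 'object') return;
  const obj = node as Record<string, unknown>;

  // If this object carries a token-usage record, add it to the running totals.
  const usage = obj['tokenUsage'] as Record<string, number> | undefined;
  if (usage && typeof usage === 'object') {
    totals.promptTokens += usage['promptTokens'] ?? 0;
    totals.completionTokens += usage['completionTokens'] ?? 0;
    totals.totalTokens += usage['totalTokens'] ?? 0;
  }

  // Remember any model identifier found along the way.
  if (typeof obj['model'] === 'string') {
    totals.models.add(obj['model'] as string);
  }

  // Recurse into nested arrays and objects.
  for (const value of Object.values(obj)) {
    if (value && typeof value === 'object') {
      collectTokenUsage(value, totals);
    }
  }
}

// Example usage with execution data retrieved for the given execution ID:
const totals: TokenTotals = {
  promptTokens: 0,
  completionTokens: 0,
  totalTokens: 0,
  models: new Set<string>(),
};
const executionData: unknown = {}; // replace with the fetched execution JSON
collectTokenUsage(executionData, totals);
console.log({
  promptTokens: totals.promptTokens,
  completionTokens: totals.completionTokens,
  totalTokens: totals.totalTokens,
  modelsUsed: [...totals.models],
});

In the actual workflow, logic of this kind would run inside the code node after the execution data has been retrieved, and the aggregated totals would be passed on as the node's output.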

Node Count

6 – 10 Nodes

Nodes Used

@n8n/n8n-nodes-langchain.agent, @n8n/n8n-nodes-langchain.chatTrigger, @n8n/n8n-nodes-langchain.lmChatGoogleGemini, @n8n/n8n-nodes-langchain.lmChatOpenAi, @n8n/n8n-nodes-langchain.memoryBufferWindow, code, executeWorkflow, executeWorkflowTrigger, n8n, stickyNote

Reviews

There are no reviews yet.
