Testing Multiple Local LLMs with Dynamic Analysis


This n8n workflow automates the process of testing and analyzing multiple locally hosted Large Language Models (LLMs) using LM Studio. The main goal is to send chat prompts to different models, evaluate their responses based on readability, word count, sentence structure, and other metrics, and then log the results for review.

The workflow begins with environment setup: configuring the LM Studio server IP and updating the list of loaded models. It then listens for incoming chat messages via a webhook trigger. When a message is received, it records the start time and the incoming prompt, applies a system prompt that guides model responses (for example, keeping them concise and easy to read), and then calls the selected models with dynamic inputs.
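LM Studio's local server exposes an OpenAI-compatible API, so retrieving the loaded models amounts to a GET request against its /v1/models endpoint. Below is a minimal standalone sketch of the request the workflow's HTTP Request node performs; the base URL is an assumption and should match the server IP configured in the workflow.

```javascript
// Sketch of the model-retrieval request (OpenAI-compatible /v1/models endpoint).
// The base URL is an assumption; substitute the LM Studio server IP configured
// in the workflow (LM Studio defaults to port 1234).
async function listLoadedModels(baseUrl = 'http://localhost:1234') {
  const res = await fetch(`${baseUrl}/v1/models`);
  const { data } = await res.json();
  // Each entry's `id` is the identifier passed to the chat nodes downstream.
  return data.map((model) => model.id);
}

listLoadedModels().then((ids) => console.log(ids));
```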

Each model’s response is analyzed for text metrics such as readability score, word count, sentence length, and grammatical complexity. These results can optionally be stored in a Google Sheet for further review or analysis; detailed setup instructions for creating the sheet are included. The workflow also calculates response-time differences to evaluate model performance. It is ideal for developers tuning LLMs, comparing different models, or assessing local AI setups for readability and efficiency.
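The description does not spell out how these metrics are computed, so the following is a minimal sketch of the kind of analysis a Code node could perform, assuming each model's output arrives in a `response` field; the field name and the vowel-group syllable heuristic are illustrative, not taken from the workflow itself.

```javascript
// n8n Code node sketch ("Run Once for All Items" mode): basic text metrics
// per model response. The `response` field name and the syllable heuristic
// are assumptions for illustration.
return $input.all().map((item) => {
  const text = item.json.response ?? '';
  const words = text.match(/[A-Za-z']+/g) ?? [];
  const sentences = text.split(/[.!?]+/).filter((s) => s.trim().length > 0);

  // Crude syllable estimate: vowel groups per word, at least one per word.
  const syllables = words.reduce((sum, w) => {
    const groups = w.toLowerCase().match(/[aeiouy]+/g) ?? [];
    return sum + Math.max(groups.length, 1);
  }, 0);

  const wordCount = words.length;
  const sentenceCount = Math.max(sentences.length, 1);
  const avgSentenceLength = wordCount / sentenceCount;

  // Flesch Reading Ease: higher scores indicate easier-to-read text.
  const readability =
    206.835 - 1.015 * avgSentenceLength - 84.6 * (syllables / Math.max(wordCount, 1));

  return {
    json: { ...item.json, wordCount, sentenceCount, avgSentenceLength, readability },
  };
});
```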

Key components include HTTP Request nodes for retrieving models, a webhook trigger, chat and analysis nodes, sticky notes with instructions, and Google Sheets integration for logging. The workflow supports flexible model testing, performance evaluation, and result visualization.
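The response-time calculation mentioned above boils down to comparing the timestamp captured when the chat message arrived with the time each model finishes responding. A minimal Code node sketch, assuming a `startTime` field was set earlier in the workflow (both field names are illustrative placeholders):

```javascript
// n8n Code node sketch ("Run Once for Each Item" mode): response time per model.
// `startTime` is an assumed field recorded when the chat message was received.
const startedAt = new Date($json.startTime).getTime();
const finishedAt = Date.now();

$input.item.json.responseTimeMs = finishedAt - startedAt;
return $input.item;
```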

Node Count

>20 Nodes

Nodes Used

@n8n/n8n-nodes-langchain.chainLlm, @n8n/n8n-nodes-langchain.chatTrigger, @n8n/n8n-nodes-langchain.lmChatOpenAi, code, dateTime, googleSheets, httpRequest, set, splitOut, stickyNote
