Automated Handwritten Code Evaluation Workflow


This n8n workflow automates the evaluation of handwritten code snippets extracted from images against expected answers stored in a dataset. It measures how closely the handwritten code matches the expected output and produces a similarity score, which is useful in educational or quality assurance settings.

The workflow begins with a webhook trigger that receives image URLs for new handwritten code submissions. It then fetches each image with an HTTP request and uses OpenAI's language model, configured to parse specific code formats, to extract the handwritten code from the top-right corner of the image.
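For illustration, here is a minimal sketch of the extraction step outside n8n. The workflow itself uses the n8n OpenAI node, so the model name, prompt wording, and the `OPENAI_API_KEY` environment variable below are assumptions rather than the workflow's actual configuration:

```typescript
// Hypothetical sketch of the extraction step as a direct OpenAI API call.
// The model, prompt, and env variable are assumptions, not the workflow's settings.
async function extractHandwrittenCode(imageUrl: string): Promise<string> {
  const response = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: "gpt-4o", // assumed vision-capable model
      messages: [
        {
          role: "user",
          content: [
            {
              type: "text",
              text: "Transcribe the handwritten code in the top-right corner of this image. Return only the code.",
            },
            { type: "image_url", image_url: { url: imageUrl } },
          ],
        },
      ],
    }),
  });
  const data = await response.json();
  return data.choices[0].message.content.trim();
}
```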

The extracted code is compared with the expected code from a dataset stored in Google Sheets. The comparison uses the Levenshtein distance algorithm to measure character-by-character differences, and the distance is then converted into a similarity score between 0 and 1.
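The snippet below is a rough sketch of that scoring logic, assuming the distance is normalised by the length of the longer string (the workflow description does not spell out the exact normalisation):

```typescript
// Levenshtein distance with a single-row dynamic-programming table.
function levenshtein(a: string, b: string): number {
  const m = a.length;
  const n = b.length;
  // dp[j] holds the edit distance between the current prefix of a and b.slice(0, j)
  const dp: number[] = Array.from({ length: n + 1 }, (_, j) => j);
  for (let i = 1; i <= m; i++) {
    let prev = dp[0]; // value of dp[i-1][j-1]
    dp[0] = i;
    for (let j = 1; j <= n; j++) {
      const temp = dp[j];
      const cost = a[i - 1] === b[j - 1] ? 0 : 1;
      dp[j] = Math.min(dp[j] + 1, dp[j - 1] + 1, prev + cost);
      prev = temp;
    }
  }
  return dp[n];
}

// Similarity in [0, 1]: 1 for identical strings, approaching 0 as they diverge.
function similarity(extracted: string, expected: string): number {
  if (extracted.length === 0 && expected.length === 0) return 1;
  const distance = levenshtein(extracted, expected);
  return 1 - distance / Math.max(extracted.length, expected.length);
}
```

With this normalisation, an exact transcription scores 1, while a transcription that shares nothing with the expected code scores 0.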

Along the way, the workflow checks whether evaluation is needed and writes the computed similarity metrics back to the dataset, enabling comprehensive performance tracking.

This workflow is particularly useful for automated grading of handwritten code exercises, quality control of handwritten input data, or any scenario where handwritten text needs to be analyzed and scored automatically.

Node Count

11 – 20 Nodes

Nodes Used

@n8n/n8n-nodes-langchain.openAi, code, evaluation, evaluationTrigger, httpRequest, respondToWebhook, set, stickyNote, webhook
