Building an AI-Powered Web Data Pipeline with n8n

This workflow creates an automated pipeline for web data extraction, processing, vectorization, storage, and notification. It starts with a manual trigger and uses Scrapeless to scrape web pages, Claude AI for content extraction and formatting, Ollama for embeddings, and Qdrant for vector storage. The workflow also validates and enriches the extracted data and sends multi-platform notifications via webhooks. It is well suited to automated content analysis and semantic-search implementations.
| Node Count | 11–20 |
|---|---|
| Nodes Used | code, httpRequest, if, manualTrigger, set, stickyNote |
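The validation, enhancement, and Qdrant upsert steps can be sketched as plain JavaScript of the kind that would live in the workflow's code nodes. This is an illustrative sketch, not the template's actual node code: the field names (`url`, `title`, `text`), the `indexedAt` enrichment field, and the point shape are assumptions, though the `{ id, vector, payload }` structure matches what Qdrant's upsert API expects.

```javascript
// Sketch of the validation / enhancement step and Qdrant point
// construction. Field names and the enrichment field are illustrative
// assumptions, not taken from the actual workflow.

function validateItem(item) {
  // Reject scraped records that are missing required string fields.
  const required = ["url", "title", "text"];
  return required.every(
    (k) => typeof item[k] === "string" && item[k].length > 0
  );
}

function toQdrantPoint(item, embedding, id) {
  // Qdrant stores points as { id, vector, payload }.
  return {
    id,
    vector: embedding,
    payload: {
      url: item.url,
      title: item.title,
      // Enhancement: record when the item was indexed.
      indexedAt: new Date().toISOString(),
    },
  };
}

// Example: one scraped item with a fake 4-dimensional embedding.
const item = { url: "https://example.com", title: "Example", text: "Hello" };
if (validateItem(item)) {
  const body = { points: [toQdrantPoint(item, [0.1, 0.2, 0.3, 0.4], 1)] };
  console.log(JSON.stringify(body));
}
```

In the real workflow the embedding vector would come from an httpRequest node calling Ollama, and the resulting body would be sent to Qdrant's collection upsert endpoint by another httpRequest node.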