Self-Hosting n8n Securely with Caddy, Cloudflare & Local GPU-Accelerated AI using Ollama
Automation and artificial intelligence are transforming workflows. n8n is a fantastic open-source platform for workflow automation, but running sensitive automations often necessitates self-hosting for privacy and control. Likewise, running Large Language Models (LLMs) locally with tools like Ollama offers the same benefits for AI tasks. What if we could combine them? Imagine triggering complex n8n workflows that leverage the power of a locally hosted, GPU-accelerated LLM, all served securely behind Caddy and Cloudflare.