I Built a Personal AI Stack Without Writing a Line of Code. Here's How.
I'm not a developer. I used to be a chef. About four years ago I made a fairly random career change into IT and ended up as a team leader doing Microsoft 365 architecture. I mention this because it matters for what follows. Everything in this stack was built without me writing a single line of code.
If you've been anywhere near GitHub or a tech forum recently, you've probably seen OpenClaw. Sixty thousand stars in seventy-two hours. An AI agent that runs on your own machine, connects to WhatsApp or Telegram, and gets things done while you sleep. The pitch is clean and the hype is real.
I've been watching it closely and I'll be trying it. But look at how most people are actually running it: API bills of £20–60 a month for typical daily use, others on Claude Max at £100–200 a month to control costs, some buying a Mac Mini specifically to run local models and avoid API costs altogether. I keep thinking there's a simpler path to most of what people actually want.
This is that path. It runs on a Claude Pro subscription (£18/month) and a £5 VPS. That's it.
The Problem with Built-in Connectors
Claude.ai comes with connectors out of the box. Google Drive, Gmail, Calendar and a handful of others. They're fine for what they are. But the moment you want to do something outside that list, you're stuck. Read from your own database, trigger a custom workflow, interact with a tool Anthropic hasn't partnered with. Nothing.
I wanted Claude to interact with my real systems: email, Notion, OneDrive, news sources. Not tell me what steps to take, but actually take them. The built-in connectors weren't going to get there. I needed a way to build anything, connect anything, and have Claude call it cleanly.
That's what this stack does.
Why n8n
Honestly, two reasons. It was cheap and I already understood that type of tool.
I use Power Automate at work, so the visual workflow builder clicked immediately (and it's pretty easy to pick up if you haven't used these kinds of tools before). Nodes connected by lines, you can see exactly what's happening at every step. There wasn't much of a learning curve.
The self-hosting side mattered too. n8n runs on a Hetzner VPS for about £5 a month. I own the infrastructure, I'm not paying per execution, and my data isn't passing through a third-party automation platform. For a personal setup where I'm routing email content and personal notes through workflows, that felt right. It's also cheap.
Worth flagging for anyone coming from a non-technical background: n8n is a visual builder first. You (or Claude) are connecting nodes on a canvas, not writing code. I describe what I want a workflow to do, Claude generates the structure, I import it and it mostly just works. I've barely had to touch anything manually. That combination of Claude generating the logic and me being able to see and understand it visually is basically the whole development process.
What's Running Right Now
The stack currently handles a daily news briefing, email triage across three accounts, Notion workspace management, OneDrive file operations and a handful of other automations. Each is a separate n8n workflow, exposed to Claude through the instance-level MCP integration.
I'm not going deep on any of these here. Each one will get its own post and there's enough in each to fill one properly. The point is that these are real daily-use tools, not demos I built once to see if I could.
The Instance-Level MCP Pattern
This is the bit that makes everything work.
MCP, Model Context Protocol, is Anthropic's standard for connecting Claude to external tools and data. In n8n you can implement it at a workflow level or at the instance level. The instance-level approach exposes all your MCP-enabled workflows through a single connection endpoint.
One connection in Claude's settings pointing at my n8n instance. Every workflow I've marked as available in MCP becomes a tool Claude can call. When I ask Claude to check my emails it's not using a built-in connector. It's calling my n8n email workflow via MCP, which talks to the Gmail and Outlook APIs through credentials I've set up. Same pattern for Notion, OneDrive, everything else. My infrastructure, my credentials, my logic.
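For the curious, an MCP tool call under the hood is just a JSON-RPC 2.0 message. A rough sketch of the kind of request Claude sends to the n8n endpoint when it invokes a workflow tool (the tool name and arguments here are illustrative, not my actual workflow names):

```json
{
  "jsonrpc": "2.0",
  "id": 7,
  "method": "tools/call",
  "params": {
    "name": "check_email",
    "arguments": { "account": "personal", "unread_only": true }
  }
}
```

You never write these by hand. Claude constructs them, n8n answers them, and all you see is the result in the chat.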
There's also a layer of Claude skills sitting on top of this. Skills are structured context files that tell Claude how and when to use each workflow, what format to expect back, what it should and shouldn't do. They're the behavioural layer. MCP is the plumbing. They do different jobs and that distinction will get its own post.
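To make the behavioural layer concrete, a skill is a plain markdown file with a small metadata header. Here's a hypothetical skeleton for an email-triage skill (the file structure follows Anthropic's skills format; the name, rules and tool reference are made up for illustration):

```markdown
---
name: email-triage
description: Check and summarise email across accounts via the n8n MCP tools.
---

# Email triage

When the user asks about email:

1. Call the n8n `check_email` tool via MCP for each connected account.
2. Summarise unread messages by account, most urgent first.
3. Draft replies only when asked. Never send without explicit confirmation.
4. If the MCP call fails, say so plainly rather than inventing results.
```

The plumbing (MCP) doesn't change when you change the behaviour (the skill), which is exactly why the two layers are worth keeping separate.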
The practical effect is that building a new Claude capability is a consistent, repeatable process. Build the n8n workflow, mark it as available in MCP, write a skill file, done. I've done it enough times now that a new tool takes an hour at most from idea to working, and I'm actively working toward making that process fully automated, which I'll cover in a future post.
As a concrete example, the daily news briefing. Claude starts with the news-briefing skill loaded, which tells it the topic areas, the format and how to handle stories it's already covered. Before writing anything it calls n8n via MCP to read the previous briefing log from Notion, so it knows what to skip. Then Claude does the actual research and writing. When it's done, n8n logs the new briefing back to Notion. Two minutes, completely unattended.
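The core logic of that briefing flow is simple enough to write out. A toy Python sketch of the dedup-then-log step (every name here is a hypothetical stand-in: the real "read the log" and "write the log" are n8n MCP calls against Notion, and the research is Claude's own work, not a dict):

```python
def run_briefing(previous_log: list[str], research: dict[str, str]) -> dict:
    """Assemble a briefing, skipping stories already covered.

    previous_log: story titles from the last briefing (read from Notion via MCP).
    research: candidate story titles mapped to their summaries.
    """
    covered = set(previous_log)
    # Keep only stories that weren't in the previous briefing.
    fresh = {title: text for title, text in research.items()
             if title not in covered}
    return {
        "stories": fresh,
        # The updated log that n8n writes back to Notion afterwards.
        "log": sorted(covered | fresh.keys()),
    }
```

The skill tells Claude to do exactly this, in prose rather than code, which is the whole point: the logic lives somewhere I can read it.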
What I Tried Before This
I didn't land on this pattern straight away. My first instinct was to build a custom MCP server from scratch. Claude suggested Python and FastAPI running in Docker on the same Hetzner VPS my n8n is hosted on, giving Claude access to my Obsidian vault through the Microsoft Graph API (because I was syncing Obsidian via OneDrive, which you're not supposed to do, but it worked).
To be clear about what building this actually meant: Claude wrote the code, I copied terminal output back into the chat and we debugged it together. I'm not writing Python. I'm following instructions and asking questions when things break.
And things broke a lot. The MCP Python library had an undocumented breaking API change between versions (so Claude tells me; honestly, I had no idea what I was doing half the time). The vault path was wrong. The Graph API drive reference was pointing at a work tenant endpoint instead of the personal one I needed. Every fix was another round of copy, paste, rebuild, wait, check the logs, report back. It got there eventually, but it took much longer than it should have for something that turned out to be fairly simple once I understood it.
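For a sense of what that server reduced to once the plumbing was stripped away, here's a toy sketch of the core idea: a dispatcher that maps tool names to functions and answers JSON-RPC style requests. This is not the real MCP SDK (which handles transport, schemas and the handshake for you), and `read_note` is a dummy stand-in for the Graph API vault read:

```python
import json

# Registry of "tools" the server exposes. A real MCP server builds this
# from decorated functions and advertises their schemas to the client.
TOOLS = {}

def tool(name):
    """Register a function under a tool name."""
    def register(fn):
        TOOLS[name] = fn
        return fn
    return register

@tool("read_note")
def read_note(path: str) -> str:
    # Stand-in for the OneDrive/Graph API vault read; returns dummy content.
    return f"contents of {path}"

def handle(request: str) -> str:
    """Dispatch a JSON-RPC 2.0 'tools/call' request to the right tool."""
    req = json.loads(request)
    params = req.get("params", {})
    fn = TOOLS.get(params.get("name"))
    if fn is None:
        resp = {"jsonrpc": "2.0", "id": req.get("id"),
                "error": {"code": -32601, "message": "unknown tool"}}
    else:
        result = fn(**params.get("arguments", {}))
        resp = {"jsonrpc": "2.0", "id": req.get("id"), "result": result}
    return json.dumps(resp)
```

Twenty-odd lines of actual logic, wrapped in weeks of debugging everything around it. That ratio is the real argument for the n8n route.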
The instance-level n8n MCP integration would have solved the same problem in a fraction of the time with no custom code to maintain. I've since moved my main knowledge base from Obsidian to Notion, so the custom server is legacy now. That said, Obsidian is a solid option and it's free. Notion isn't. So depending on what you're doing it's absolutely worth considering. I'll revisit that decision in a separate post because I'm not entirely sure I made the right call.
Sometimes the less interesting answer is the right one.
OpenClaw vs This Stack: An Honest Comparison
OpenClaw and what I've described solve similar problems in fundamentally different ways. Understanding the difference helps you pick the right starting point.
What OpenClaw actually is: an autonomous agent (or agents, plural). You configure it once, point it at your services, and it runs proactively in the background on a schedule, on triggers, or when you text it on Telegram. It acts without you being in the room. The installation is a single command and the interface is whatever messaging app you already use. That's a genuine UX achievement and a big part of why it went viral.
What this stack is: Claude-initiated. I'm the one starting the conversation. Claude doesn't do things in the background unless I ask it to. That's a deliberate design, not a limitation — it means I always know what's happening and why.
In practice, this distinction matters most at the edges. OpenClaw can monitor your inbox overnight and reply to emails before you wake up. My stack won't do that unprompted. What it will do is check your emails, summarise them, draft replies, and send them the moment you ask it to, with full visibility into every step. And if you want to close the gap a bit, Claude Cowork can schedule workflows to run on a timer, which gets you closer to the autonomous feeling without the infrastructure overhead.
For most things people actually need, the difference is smaller than the marketing suggests.
On cost: this is where the gap is more concrete.
My stack runs on Claude Pro (£18/month), a Hetzner VPS hosting n8n community edition (£5/month), and a Notion subscription for my knowledge base (£10/month at the lower tier). That last one is a personal preference; you could just as easily use local files, OneDrive, Google Drive, or a free Obsidian vault instead. The core of the stack is £23/month.
OpenClaw's software is free, but you're paying for every API call the agent makes. Typical daily use with a decent model lands around £20–60/month. Heavy users on Claude Max are spending £100–200/month to get predictable billing rather than pay-as-you-go surprises. Some people buy a Mac Mini at around £600 upfront specifically to run local models and eliminate API costs entirely. That works, but local models like Llama or Qwen, even on good hardware, are a step below frontier models like Claude Sonnet in quality. You can run a hybrid setup that routes simple tasks to a local model and complex ones to the API, which helps on cost but adds configuration overhead. It's a trade-off, not a solution.
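The hybrid routing idea is simple enough to sketch. A toy router (the thresholds, keyword hints and model names are all made up for illustration; real routers use a classifier rather than string matching) that sends short, simple prompts to a local model and everything else to the API:

```python
def route(prompt: str, local_model: str = "qwen2.5:7b",
          api_model: str = "claude-sonnet") -> str:
    """Pick a model for a prompt using a crude complexity heuristic.

    Long prompts, or ones containing words that usually signal hard
    tasks, go to the frontier model; everything else stays local.
    """
    hard_hints = ("analyse", "refactor", "plan", "compare", "debug")
    is_long = len(prompt.split()) > 60
    is_hard = any(hint in prompt.lower() for hint in hard_hints)
    return api_model if (is_long or is_hard) else local_model
```

The configuration overhead isn't the router itself, it's everything around it: keeping two model stacks alive, and deciding what to do when the cheap one confidently gets things wrong.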
One more thing worth knowing: in January 2026, Anthropic blocked the workaround that let people use Claude Pro or Max subscriptions directly inside OpenClaw. The only supported route now is a pay-as-you-go API key, so the "just use your existing subscription" shortcut is no longer officially available, although I think some people have managed to sidestep that issue.
Neither approach is wrong. If you want a fully autonomous agent that acts without you prompting it, OpenClaw is the right tool. If you want Claude to actually do things rather than just answer questions, with full visibility into the process and costs you can predict, this stack is a reasonable place to start, especially if you're coming from zero infrastructure.
Where to Start If This Is Too Much
If everything above sounds like too much for where you are right now, Anthropic recently released Cowork — a desktop tool that handles file and task automation without any self-hosting. It sits between standard Claude.ai and a self-hosted stack, and it's worth knowing about before you go buying a VPS.
The progression roughly looks like this: Claude.ai (question and answer) → Cowork (local automation, no infrastructure) → this stack (your own tools, your own data, full control) → OpenClaw (autonomous agents running 24/7).
Most people probably don't need to start at step four.
Where This Is Going
The next thing I'm working toward is self-tooling, essentially Claude identifying something worth automating and building and deploying the n8n workflow itself. The infrastructure is mostly there. That'll get its own post when it's ready.
The rest of the stack (the email workflows, the Obsidian build, the Notion integration, the news briefing pattern) will all get proper writeups. Subscribe if any of that is what you came here for.
Former chef. Four years into IT. Building personal AI infrastructure mostly by trial and error, with Claude writing most of the actual code. It works, most of the time, and the parts where it didn't have usually been the ones worth writing about.