There is a particular kind of paralysis that sets in when you follow the AI space professionally. Not ignorance. The opposite. Too much information, too fast, from too many directions at once.
I am an IT professional. I have been one for a long time. My day job is coordination, communication, managing third-party software across a range of quality levels that would make a QA engineer weep. I use chatbots daily. I have Copilot set up in VS Code. I understand, broadly, what large language models are doing and why people are excited about them.
And yet. Every time I tried to go deeper, I ran straight into a wall of LinkedIn posts, YouTube tutorials, Medium articles, podcasts, newsletters, framework announcements, and influencer threads all shouting simultaneously that this thing right here, today, is the one you need to understand immediately.
A complete sensory overload. And my reaction, for a while, was to read a bit more, subscribe to one more newsletter, tell myself I would get properly hands-on soon.
Here is the thing about “soon”: it does not happen on its own.
I am an IT generalist. I know a lot about a lot of things, and have deep expertise in very few. In the current AI moment, that is a strange position to be in. The specialists get incredible productivity boosts. The generalists have to work harder to extract value, because the interesting applications require domain knowledge, infrastructure intuition, and real problems to solve. You cannot just spin up a demo and call it done.
I had real problems to solve. I had a homelab. I had leftover holiday days. I had a credit card.
So I pulled out the credit card.
The budget: fifty euros. The timeline: one week. The label I gave the whole endeavour, mostly to amuse myself: Maschinenraum-IT. Engine room IT. The unglamorous infrastructure work that nobody talks about at conferences but everybody relies on at two in the morning.
The name of this blog is EBKAC, which stands for Error Between Keyboard And Chair. I chose it because I believe the most instructive errors are your own. This week was going to involve a lot of instructive errors.
The plan, loosely sketched over a coffee:
- Anthropic Pro plan. Actually use Claude properly, not just for one-off questions.
- Get VS Code set up with the right extensions. Try to get some MCP (Model Context Protocol) servers working. (Spoiler: this is harder than it sounds.)
- Tackle the homelab document chaos I had been ignoring for months.
- Build something that automates the boring parts. Something that learns.
- Document everything in the Gitea wiki so the effort is not wasted when I forget all of this in three months.
It was, on reflection, an ambitious plan for fifty euros and one week. But the point was not to finish. The point was to do. To get hands on something real and see what happened.
What happened was instructive.
This is part one of a seven-part series on a week of homelab learning: AI tools, infrastructure as code, RAG pipelines, and one very memorable failure involving 850 API requests fired at a small Paperless instance. Follow along with the lernreise tag.
Lernreise 2/7: The Starting Point: A Thousand Untagged Documents →