<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Homelab on Home</title><link>/categories/homelab/</link><description>Recent content in Homelab on Home</description><generator>Hugo -- gohugo.io</generator><language>en</language><lastBuildDate>Sat, 07 Mar 2026 00:00:00 +0000</lastBuildDate><atom:link href="/categories/homelab/" rel="self" type="application/rss+xml"/><item><title>Extending the Local AI Stack with On-Demand GPU Inference on RunPod</title><link>/2026/extending-the-local-ai-stack-with-on-demand-gpu-inference-on-runpod/</link><pubDate>Sat, 07 Mar 2026 00:00:00 +0000</pubDate><guid>/2026/extending-the-local-ai-stack-with-on-demand-gpu-inference-on-runpod/</guid><description>&lt;figure&gt;&lt;img src="/images/posts/post_24/overview.png" data-src="/images/posts/post_24/overview.png"
/&gt;&lt;figcaption&gt;
&lt;h4&gt;Conceptual illustration of the extended AI stack with elastic cloud GPU resources for running large language models on demand - AI generated&lt;/h4&gt;
&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;h2 id="introduction"&gt;Introduction&lt;/h2&gt;
&lt;p&gt;In this post, I want to describe how I extended the local AI stack I built in my homelab with on-demand GPU-backed model inference, without adding any GPU hardware to the lab itself.&lt;/p&gt;
&lt;p&gt;The two previous posts in this series provide the context for what follows. The &lt;a href="/2026/my-homelab-a-traefik-centered-self-hosting-setup/"&gt;homelab post&lt;/a&gt; covers the base infrastructure: thin clients, Docker Compose, Traefik, and internal DNS. The &lt;a href="/2026/my-local-ai-stack-open-webui-litellm-searxng-and-docling/"&gt;local AI stack post&lt;/a&gt; describes how &lt;em&gt;Open WebUI&lt;/em&gt;, &lt;em&gt;LiteLLM&lt;/em&gt;, &lt;em&gt;SearXNG&lt;/em&gt;, and &lt;em&gt;Docling&lt;/em&gt; sit on top of that infrastructure to form a self-hosted AI environment. That stack works well, and I have been using it for a while. Keeping the lab CPU-only is a deliberate choice. For orchestration, document workflows, and routing requests to publicly available AI services, dedicated GPU hardware at home is simply not necessary. When I want to try a particular model that is not available through a managed API, or experiment with something freshly released on Hugging Face, I rent the compute on demand rather than maintain it permanently.&lt;/p&gt;
&lt;p&gt;The solution is straightforward: rent GPU capacity on demand from a specialized cloud provider, expose it as an OpenAI-compatible endpoint, and wire it into the existing stack. No new hardware, no permanent cost, no changes to the tools I already use.&lt;/p&gt;
&lt;h2 id="a-note-on-neo-clouds"&gt;A Note on Neo Clouds&lt;/h2&gt;
&lt;p&gt;The providers that specialize in this type of GPU-first infrastructure are sometimes called &lt;em&gt;Neo Clouds&lt;/em&gt;. The term emerged around 2024 to distinguish GPU-specialist vendors such as RunPod, CoreWeave, and others from traditional hyperscalers. In practice, I am not sure the new term adds much. For me these are specialized cloud providers focused on GPU compute and AI workloads. Useful services, somewhat unnecessary branding.&lt;/p&gt;
&lt;h2 id="why-runpod"&gt;Why RunPod&lt;/h2&gt;
&lt;p&gt;I use &lt;a href="https://www.runpod.io/"&gt;RunPod&lt;/a&gt; for this setup for a few practical reasons. The interface is intuitive, the deployment path from template to running pod is short, and the GPU catalog is broad enough to cover most use cases. Pricing is per second with no ingress or egress fees, which makes on-demand experimentation economical. RunPod also exposes an API for its core operations, so deployments can be automated rather than driven entirely through the UI.&lt;/p&gt;
&lt;p&gt;A detailed description of all RunPod services is out of scope for this post. The focus here is on one specific workflow: deploying a &lt;em&gt;vLLM&lt;/em&gt; inference server with a model loaded from &lt;em&gt;Hugging Face&lt;/em&gt;, and connecting the resulting endpoint to Open WebUI.&lt;/p&gt;
&lt;h2 id="deploying-a-vllm-inference-server-on-runpod"&gt;Deploying a vLLM Inference Server on RunPod&lt;/h2&gt;
&lt;p&gt;RunPod uses templates to save pod configurations for reuse. A template defines the container image, the start command, the storage allocation, and other runtime parameters. I maintain a small collection of private templates, each configured for a different model.&lt;/p&gt;
&lt;figure&gt;&lt;img src="/images/posts/post_24/list_of_private_templates.png"data-src="/images/posts/post_24/list_of_private_templates.png"
/&gt;&lt;figcaption&gt;
&lt;h4&gt;A selection of saved vLLM templates on RunPod, each pointing to a different model from Hugging Face&lt;/h4&gt;
&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;p&gt;The container image for all of these templates is &lt;code&gt;vllm/vllm-openai:latest&lt;/code&gt;, which bundles &lt;em&gt;vLLM&lt;/em&gt; with an OpenAI-compatible API server. The model itself is specified in the container start command, which means swapping models is a matter of editing a single line.&lt;/p&gt;
&lt;h2 id="creating-a-template"&gt;Creating a Template&lt;/h2&gt;
&lt;p&gt;When creating or editing a template, the key fields are:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Type:&lt;/strong&gt; Pod&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Compute type:&lt;/strong&gt; Nvidia GPU&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Container image:&lt;/strong&gt; &lt;code&gt;vllm/vllm-openai:latest&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Container start command:&lt;/strong&gt; the vLLM arguments, including the model reference&lt;/li&gt;
&lt;/ul&gt;
&lt;figure&gt;&lt;img src="/images/posts/post_24/vllm_start_cmd.png"data-src="/images/posts/post_24/vllm_start_cmd.png"
/&gt;&lt;figcaption&gt;
&lt;h4&gt;Template configuration for the vllm_gemma-3-12b template, showing the container image and start command&lt;/h4&gt;
&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;p&gt;Throughout the following steps, any value written in &lt;code&gt;&amp;lt;angle brackets&amp;gt;&lt;/code&gt; is a placeholder and must be replaced with your actual value before running the command.&lt;/p&gt;
&lt;p&gt;A start command for deploying Red Hat&amp;rsquo;s validated &lt;code&gt;RedHatAI/Qwen3-8B-FP8-dynamic&lt;/code&gt; model looks like this:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;--host 0.0.0.0 --port &lt;span class="m"&gt;8000&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; --model RedHatAI/Qwen3-8B-FP8-dynamic &lt;span class="se"&gt;\
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; --dtype bfloat16 &lt;span class="se"&gt;\
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; --enforce-eager &lt;span class="se"&gt;\
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; --gpu-memory-utilization 0.95 &lt;span class="se"&gt;\
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; --api-key &amp;lt;api_key&amp;gt; &lt;span class="se"&gt;\
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; --max-model-len &lt;span class="m"&gt;8128&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;The parameters worth noting:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;&lt;code&gt;--model&lt;/code&gt;&lt;/strong&gt;: any model available on Hugging Face can be referenced here by its repository path&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;&lt;code&gt;--dtype bfloat16&lt;/code&gt;&lt;/strong&gt;: sets the compute dtype; &lt;code&gt;bfloat16&lt;/code&gt; is a good default for inference on NVIDIA hardware&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;&lt;code&gt;--enforce-eager&lt;/code&gt;&lt;/strong&gt;: disables CUDA graph capture, which reduces memory overhead at the cost of some throughput; useful when fitting larger models on a single GPU&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;&lt;code&gt;--gpu-memory-utilization 0.95&lt;/code&gt;&lt;/strong&gt;: allows vLLM to use up to 95% of available GPU memory for model weights, activations, and the KV cache&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;&lt;code&gt;--api-key&lt;/code&gt;&lt;/strong&gt;: sets a bearer token for the OpenAI-compatible endpoint; always set this when deploying a public endpoint&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;&lt;code&gt;--max-model-len&lt;/code&gt;&lt;/strong&gt;: caps the maximum sequence length; reducing this frees memory and allows larger models to fit on smaller GPUs&lt;/li&gt;
&lt;/ul&gt;
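&lt;p&gt;To see why &lt;code&gt;--max-model-len&lt;/code&gt; has such an impact, a rough KV-cache estimate helps. The sketch below uses illustrative model dimensions, not the actual Qwen3-8B values; check a model&amp;rsquo;s &lt;code&gt;config.json&lt;/code&gt; on Hugging Face for the real numbers.&lt;/p&gt;

```python
# Back-of-the-envelope KV-cache sizing, to show why --max-model-len matters.
# The dimensions below are illustrative assumptions for an 8B-class model,
# not the actual Qwen3-8B values.

def kv_cache_bytes(num_layers, num_kv_heads, head_dim, seq_len, batch, dtype_bytes=2):
    # Factor of 2: one key tensor plus one value tensor per layer.
    return 2 * num_layers * num_kv_heads * head_dim * seq_len * batch * dtype_bytes

# Hypothetical dims: 32 layers, 8 KV heads (GQA), head_dim 128, fp16 cache.
full = kv_cache_bytes(32, 8, 128, seq_len=32768, batch=1)
capped = kv_cache_bytes(32, 8, 128, seq_len=8192, batch=1)

print(f"32k context: {full / 2**30:.2f} GiB per sequence")
print(f"8k context:  {capped / 2**30:.2f} GiB per sequence")
```

&lt;p&gt;The cache grows linearly with sequence length, so capping the context to a quarter of the default frees roughly three quarters of the per-sequence cache. That is often the difference between a model fitting on a 24 GB card or not.&lt;/p&gt;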
&lt;h2 id="selecting-a-gpu-and-deploying"&gt;Selecting a GPU and Deploying&lt;/h2&gt;
&lt;p&gt;Once the template is configured, deploying it requires selecting a GPU and clicking deploy. RunPod shows available hardware with current pricing.&lt;/p&gt;
&lt;figure&gt;&lt;img src="/images/posts/post_24/gpu_selection.png"data-src="/images/posts/post_24/gpu_selection.png"
/&gt;&lt;figcaption&gt;
&lt;h4&gt;GPU selection on RunPod, ranging from RTX 3090 class cards to H200 and B200 datacenter accelerators&lt;/h4&gt;
&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;p&gt;For most inference workloads with 8 to 12 billion parameter models, an RTX 4090 or L4 is a practical and cost-effective choice. Larger models with higher memory requirements will need 48 GB or 80 GB class cards. The per-hour pricing shown in the interface makes it easy to estimate cost for a session before committing.&lt;/p&gt;
&lt;p&gt;After deployment, RunPod assigns a public HTTPS endpoint to the pod. The vLLM server is reachable at that endpoint on port 8000, with the path structure matching the OpenAI API.&lt;/p&gt;
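&lt;p&gt;Before wiring the endpoint into the stack, it is worth a quick sanity check against the &lt;code&gt;/v1/models&lt;/code&gt; route. The sketch below only constructs the request; the hostname is a placeholder in the proxy-style pattern RunPod uses, so copy the actual HTTPS URL shown for the pod in the console.&lt;/p&gt;

```python
# Sanity-check sketch for the pod's OpenAI-compatible API.
# POD_HOST is a placeholder; use the HTTPS endpoint RunPod assigns to the pod.
POD_HOST = "POD_ID-8000.proxy.runpod.net"  # illustrative proxy-style hostname
API_KEY = "API_KEY"                        # the value passed to --api-key

models_url = f"https://{POD_HOST}/v1/models"
headers = {"Authorization": f"Bearer {API_KEY}"}

# A GET against models_url with these headers should return a JSON listing
# that includes the model passed via --model, once loading has finished.
print(models_url)
```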
&lt;h2 id="connecting-the-endpoint-to-open-webui"&gt;Connecting the Endpoint to Open WebUI&lt;/h2&gt;
&lt;p&gt;With the pod running and the model loaded, the endpoint can be added to Open WebUI as an external connection. In Open WebUI, navigate to &lt;strong&gt;Admin Panel&lt;/strong&gt; then &lt;strong&gt;Settings&lt;/strong&gt; and add a new connection with the following values:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Connection type:&lt;/strong&gt; External&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;URL:&lt;/strong&gt; &lt;code&gt;https://&amp;lt;runpod_endpoint&amp;gt;/v1&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Auth:&lt;/strong&gt; API key set in the vLLM start command&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Provider type:&lt;/strong&gt; OpenAI&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;API type:&lt;/strong&gt; Chat Completions&lt;/li&gt;
&lt;/ul&gt;
&lt;figure&gt;&lt;img src="/images/posts/post_24/open_webui_configuration.png"data-src="/images/posts/post_24/open_webui_configuration.png"
/&gt;&lt;figcaption&gt;
&lt;h4&gt;Adding the RunPod vLLM endpoint as an external OpenAI-compatible connection in Open WebUI&lt;/h4&gt;
&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;p&gt;Once saved, the model served by vLLM on RunPod appears in the model selector alongside any other configured backends. From a user perspective, it behaves like any other configured model, whether local or a commercial API.&lt;/p&gt;
&lt;p&gt;Alternatively, the endpoint can be added to LiteLLM as a named model alias. This is the better option if you want centralized credential management or want to expose the RunPod model alongside other backends under a consistent naming scheme across the stack.&lt;/p&gt;
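&lt;p&gt;As a sketch of that LiteLLM option, a model alias for the RunPod endpoint could look like the fragment below. The alias name and host are placeholders; the &lt;code&gt;openai/&lt;/code&gt; prefix tells LiteLLM to treat the backend as a generic OpenAI-compatible server.&lt;/p&gt;

```yaml
# Hypothetical LiteLLM config.yaml fragment; alias, host, and key are placeholders.
model_list:
  - model_name: runpod-qwen3-8b                      # alias exposed to clients
    litellm_params:
      model: openai/RedHatAI/Qwen3-8B-FP8-dynamic    # generic OpenAI-compatible backend
      api_base: https://RUNPOD_ENDPOINT/v1           # the pod's HTTPS endpoint
      api_key: RUNPOD_VLLM_API_KEY                   # the value set via --api-key
```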
&lt;h2 id="why-this-setup-works-well"&gt;Why This Setup Works Well&lt;/h2&gt;
&lt;p&gt;The combination of a self-hosted orchestration stack and on-demand GPU inference fits well with a homelab where tooling and workflows are in place but on-premises compute is intentionally kept lean.&lt;/p&gt;
&lt;p&gt;A few things make this pattern practical:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Low cost for experimentation.&lt;/strong&gt; Models run only when needed. A session of an hour or two to test a new model costs a few dollars at most.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Access to current models.&lt;/strong&gt; Many recently published models on Hugging Face can be loaded into vLLM, which makes it straightforward to test a new release without waiting for it to appear in a managed API.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;No changes to the existing stack.&lt;/strong&gt; Open WebUI, LiteLLM, SearXNG, and Docling continue to work exactly as before. The RunPod endpoint is just another backend.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Automatable.&lt;/strong&gt; RunPod exposes an API for managing pods, so deployments can be triggered programmatically. Combined with LiteLLM&amp;rsquo;s routing, it becomes possible to bring a model endpoint up on demand and tear it down again when it is no longer needed.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="conclusion"&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;Adding RunPod as an on-demand GPU backend closes the main gap in a CPU-only homelab AI stack. The setup requires no changes to the existing infrastructure and takes only a few minutes from template to running endpoint. The result is the ability to experiment with current, capable models at low cost, using the same interface and workflows already in place.&lt;/p&gt;
&lt;p&gt;For on-demand model access that does not warrant the cost of persistent GPU hardware, this pattern is worth considering.&lt;/p&gt;
&lt;h2 id="references"&gt;References&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;My Homelab: A Traefik-centered Self-hosting Setup - &lt;a href="/2026/my-homelab-a-traefik-centered-self-hosting-setup/"&gt;link&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;My Local AI Stack: Open WebUI, LiteLLM, SearXNG, and Docling - &lt;a href="/2026/my-local-ai-stack-open-webui-litellm-searxng-and-docling/"&gt;link&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;RunPod - project site - &lt;a href="https://www.runpod.io/"&gt;link&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;RunPod - documentation - &lt;a href="https://docs.runpod.io/overview"&gt;link&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;vLLM - project site - &lt;a href="https://docs.vllm.ai/"&gt;link&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Hugging Face - model hub - &lt;a href="https://huggingface.co/models"&gt;link&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;RedHatAI models on Hugging Face - &lt;a href="https://huggingface.co/RedHatAI"&gt;link&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;</description></item><item><title>My Local AI Stack: Open WebUI, LiteLLM, SearXNG, and Docling</title><link>/2026/my-local-ai-stack-open-webui-litellm-searxng-and-docling/</link><pubDate>Sat, 14 Feb 2026 00:00:00 +0000</pubDate><guid>/2026/my-local-ai-stack-open-webui-litellm-searxng-and-docling/</guid><description>&lt;figure&gt;&lt;img src="/images/posts/post_19/overview.png" data-src="/images/posts/post_19/overview.png"
/&gt;&lt;figcaption&gt;
&lt;h4&gt;Overview of the modular self-hosted AI stack - AI generated&lt;/h4&gt;
&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;h2 id="introduction"&gt;Introduction&lt;/h2&gt;
&lt;p&gt;In my previous post about my &lt;a href="/2026/my-homelab-a-traefik-centered-self-hosting-setup/"&gt;homelab&lt;/a&gt;, I described the foundation I use for self-hosted services: a small set of low-power machines, Docker Compose for deployment, Traefik as the reverse proxy, and internal DNS to expose services with clean HTTPS hostnames. I have been running this setup for several years with very little maintenance overhead. That setup turned out to be a good base not only for classic self-hosting, but also for local AI workloads. Over the past two years or so, I started extending it with tools to use and experiment with AI services.&lt;/p&gt;
&lt;p&gt;Over time, I wanted more than a single chat UI connected to a single model provider. I wanted a setup that would let me experiment with different models, keep sensitive data inside my own network, enrich prompts with live web results, and work with local documents in a structured way. I also wanted to reuse the same operational patterns I already trusted in the rest of the homelab.&lt;/p&gt;
&lt;p&gt;The result is a local AI stack built from four components:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Open WebUI as the browser-based user interface&lt;/li&gt;
&lt;li&gt;LiteLLM as the OpenAI-compatible model gateway&lt;/li&gt;
&lt;li&gt;SearXNG as the privacy-friendly web search backend&lt;/li&gt;
&lt;li&gt;Docling as the document parsing layer for file-based workflows&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Individually, each of these tools is useful. Combined, they form a practical self-hosted AI environment that fits neatly into the same Traefik-centered architecture as the rest of my homelab.&lt;/p&gt;
&lt;h2 id="base-platform-and-prerequisites"&gt;Base platform and prerequisites&lt;/h2&gt;
&lt;p&gt;The AI stack runs on the same infrastructure described in the &lt;a href="/2026/my-homelab-a-traefik-centered-self-hosting-setup/"&gt;previous post&lt;/a&gt;: refurbished thin clients running CentOS Stream 9, Docker and Docker Compose, Traefik as the reverse proxy, and internal DNS for clean HTTPS hostnames. The key design principle carries over as well: every externally reachable service joins the &lt;code&gt;external&lt;/code&gt; Docker network and is exposed through Traefik using labels, giving a consistent way to publish services under HTTPS without managing ports or certificates per application.&lt;/p&gt;
&lt;p&gt;My current setup is CPU-only. That matters. It is perfectly usable for orchestration, document processing, and web-augmented prompting, but it is not the right environment for large, latency-sensitive inference workloads. In practice, that constraint pushed me toward an architecture where the user interface, routing, tools, and document workflows run locally, while the model backend remains flexible enough to use either local or remote providers.&lt;/p&gt;
&lt;h2 id="architecture-overview"&gt;Architecture overview&lt;/h2&gt;
&lt;p&gt;At a high level, the request flow looks like this:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;A user opens Open WebUI in the browser.&lt;/li&gt;
&lt;li&gt;Open WebUI sends model requests to LiteLLM through its OpenAI-compatible API.&lt;/li&gt;
&lt;li&gt;LiteLLM routes the request to the selected backend model.&lt;/li&gt;
&lt;li&gt;If a prompt requires live information, Open WebUI can use SearXNG as a search tool.&lt;/li&gt;
&lt;li&gt;If a prompt requires document context, uploaded files are parsed with Docling and converted into Markdown.&lt;/li&gt;
&lt;li&gt;The model response is returned to Open WebUI and displayed to the user.&lt;/li&gt;
&lt;/ol&gt;
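&lt;p&gt;Step 2 of this flow is an ordinary OpenAI-style chat completion. The sketch below shows the kind of request body Open WebUI sends to LiteLLM; &lt;code&gt;chat-default&lt;/code&gt; is a hypothetical alias, resolved by the gateway rather than by the client.&lt;/p&gt;

```python
import json

# Illustration of step 2: the OpenAI-style request Open WebUI sends to LiteLLM.
# "chat-default" is a hypothetical logical name that LiteLLM maps to a backend.
payload = {
    "model": "chat-default",
    "messages": [{"role": "user", "content": "Summarize this document."}],
}

# This body would be POSTed to http://litellm:4000/v1/chat/completions;
# LiteLLM resolves the alias and forwards the call to the matching provider.
print(json.dumps(payload, indent=2))
```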
&lt;p&gt;This separation of concerns is what makes the stack useful:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Open WebUI handles the human interaction layer&lt;/li&gt;
&lt;li&gt;LiteLLM abstracts model backends and credentials&lt;/li&gt;
&lt;li&gt;SearXNG provides fresh web context&lt;/li&gt;
&lt;li&gt;Docling turns messy source documents into structured text&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Traefik remains the single public entry point. From an operations perspective, that is valuable because the AI stack behaves like any other part of the homelab.&lt;/p&gt;
&lt;h2 id="open-webui-as-the-central-interface"&gt;Open WebUI as the central interface&lt;/h2&gt;
&lt;p&gt;Open WebUI is the part of the stack I interact with every day. It provides the browser-based interface for conversations, model selection, file uploads, and tool-assisted prompting. The important point is that Open WebUI does not need to know anything about individual model providers. It only needs a single OpenAI-compatible endpoint, which in this setup is LiteLLM.&lt;/p&gt;
&lt;p&gt;That keeps the client configuration simple. If I want to add a new provider, swap one model for another, or change credentials, I do it behind the scenes in LiteLLM without having to reconfigure the user interface. Open WebUI also supports user and group management, making it straightforward to grant access to specific models or restrict certain users to a defined set of backends. A particularly useful feature is the ability to send a single prompt to multiple AI services simultaneously, which makes side-by-side model comparison a natural part of the workflow.&lt;/p&gt;
&lt;p&gt;A simplified Docker Compose service definition for Open WebUI in this setup looks like this:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-yaml" data-lang="yaml"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="nt"&gt;services&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;open-webui&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;image&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;ghcr.io/open-webui/open-webui:main&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;container_name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;open-webui&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;restart&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;unless-stopped&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;environment&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;- &lt;span class="l"&gt;OPENAI_API_BASE_URL=http://litellm:4000/v1&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;- &lt;span class="l"&gt;OPENAI_API_KEY=${LITELLM_MASTER_KEY}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;volumes&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;- &lt;span class="l"&gt;./data/open-webui:/app/backend/data&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;networks&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;- &lt;span class="l"&gt;external&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;- &lt;span class="l"&gt;internal&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;labels&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;- &lt;span class="s2"&gt;&amp;#34;traefik.enable=true&amp;#34;&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;- &lt;span class="s2"&gt;&amp;#34;traefik.docker.network=external&amp;#34;&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;- &lt;span class="s2"&gt;&amp;#34;traefik.http.routers.openwebui.rule=Host(`ai.home.example.com`)&amp;#34;&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;- &lt;span class="s2"&gt;&amp;#34;traefik.http.routers.openwebui.entrypoints=https&amp;#34;&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;- &lt;span class="s2"&gt;&amp;#34;traefik.http.routers.openwebui.tls.certresolver=cloudflare&amp;#34;&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;- &lt;span class="s2"&gt;&amp;#34;traefik.http.services.openwebui.loadbalancer.server.port=8080&amp;#34;&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;The exact image tag and environment variables may differ depending on the release and your setup, but the pattern stays the same: persistent storage for state, Traefik labels for routing, and a backend API endpoint that points to LiteLLM.&lt;/p&gt;
&lt;h2 id="litellm-as-the-model-gateway"&gt;LiteLLM as the model gateway&lt;/h2&gt;
&lt;p&gt;LiteLLM is the glue that makes the rest of the system flexible. It exposes a single OpenAI-style API while allowing multiple backends underneath. That means I can define logical model names and map them to either local inference backends or remote providers.&lt;/p&gt;
&lt;p&gt;This is useful for several reasons:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Open WebUI only has to speak to a single API endpoint&lt;/li&gt;
&lt;li&gt;I can standardize naming across models&lt;/li&gt;
&lt;li&gt;Provider credentials stay centralized&lt;/li&gt;
&lt;li&gt;Swapping backends becomes operationally cheap&lt;/li&gt;
&lt;li&gt;Logging and usage controls are easier to centralize&lt;/li&gt;
&lt;/ul&gt;
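&lt;p&gt;As an illustration, a minimal &lt;code&gt;config.yaml&lt;/code&gt; with logical model names could look like the fragment below. The aliases, backend names, and hosts are placeholders, not my actual configuration:&lt;/p&gt;

```yaml
# Hypothetical LiteLLM config.yaml: logical aliases on the left, backends underneath.
model_list:
  - model_name: chat-default              # the name clients like Open WebUI see
    litellm_params:
      model: openai/gpt-4o-mini           # remote provider behind the alias
      api_key: os.environ/OPENAI_API_KEY  # resolved from the environment
  - model_name: chat-local
    litellm_params:
      model: openai/local-model           # any OpenAI-compatible server
      api_base: http://vllm:8000/v1
```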
&lt;p&gt;The Compose service definition for LiteLLM follows the same pattern:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-yaml" data-lang="yaml"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="nt"&gt;services&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;litellm&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;image&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;ghcr.io/berriai/litellm:main-latest&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;container_name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;litellm&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;restart&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;unless-stopped&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;command&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;&amp;#34;--config&amp;#34;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;#34;/app/config.yaml&amp;#34;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;#34;--port&amp;#34;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;#34;4000&amp;#34;&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;environment&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;- &lt;span class="l"&gt;LITELLM_MASTER_KEY=${LITELLM_MASTER_KEY}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;- &lt;span class="l"&gt;OPENAI_API_KEY=${OPENAI_API_KEY}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;volumes&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;- &lt;span class="l"&gt;./litellm/config.yaml:/app/config.yaml:ro&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;networks&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;- &lt;span class="l"&gt;internal&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;- &lt;span class="l"&gt;external&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;labels&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;- &lt;span class="s2"&gt;&amp;#34;traefik.enable=true&amp;#34;&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;- &lt;span class="s2"&gt;&amp;#34;traefik.docker.network=external&amp;#34;&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;- &lt;span class="s2"&gt;&amp;#34;traefik.http.routers.litellm.rule=Host(`litellm.home.example.com`)&amp;#34;&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;- &lt;span class="s2"&gt;&amp;#34;traefik.http.routers.litellm.entrypoints=https&amp;#34;&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;- &lt;span class="s2"&gt;&amp;#34;traefik.http.routers.litellm.tls.certresolver=cloudflare&amp;#34;&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;- &lt;span class="s2"&gt;&amp;#34;traefik.http.services.litellm.loadbalancer.server.port=4000&amp;#34;&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;style type="text/css"&gt;.notice{--root-color:#444;--root-background:#eff;--title-color:#fff;--title-background:#7bd;--warning-title:#c33;--warning-content:#fee;--info-title:#fb7;--info-content:#fec;--note-title:#6be;--note-content:#e7f2fa;--tip-title:#5a5;--tip-content:#efe}@media (prefers-color-scheme:dark){.notice{--root-color:#ddd;--root-background:#eff;--title-color:#fff;--title-background:#7bd;--warning-title:#800;--warning-content:#400;--info-title:#a50;--info-content:#420;--note-title:#069;--note-content:#023;--tip-title:#363;--tip-content:#121}}body.dark .notice{--root-color:#ddd;--root-background:#eff;--title-color:#fff;--title-background:#7bd;--warning-title:#800;--warning-content:#400;--info-title:#a50;--info-content:#420;--note-title:#069;--note-content:#023;--tip-title:#363;--tip-content:#121}.notice{line-height:24px;margin-bottom:24px;border-radius:4px;color:var(--root-color);background:var(--root-background)}.notice p:last-child{margin-bottom:0; padding: .5rem 1.2rem 1rem;}.notice-title{margin:-18px -18px 12px;padding:4px 18px;border-radius:4px 4px 0 0;font-weight:700;color:var(--title-color);background:var(--title-background)}.notice.warning .notice-title{background:var(--warning-title)}.notice.warning{background:var(--warning-content)}.notice.info .notice-title{background:var(--info-title)}.notice.info{background:var(--info-content)}.notice.note .notice-title{background:var(--note-title)}.notice.note{background:var(--note-content)}.notice.tip .notice-title{background:var(--tip-title)}.notice.tip{background:var(--tip-content)}.icon-notice{display:inline-flex;align-self:center;margin-right:8px}.icon-notice img,.icon-notice svg{height:1em;width:1em;fill:currentColor}.icon-notice img,.icon-notice.baseline svg{top:.125em;position:relative}&lt;/style&gt;
&lt;div&gt;&lt;svg width="0" height="0" display="none" xmlns="http://www.w3.org/2000/svg"&gt;&lt;symbol id="tip-notice" viewBox="0 0 512 512" preserveAspectRatio="xMidYMid meet"&gt;&lt;path d="M504 256c0 136.967-111.033 248-248 248S8 392.967 8 256 119.033 8 256 8s248 111.033 248 248zM227.314 387.314l184-184c6.248-6.248 6.248-16.379 0-22.627l-22.627-22.627c-6.248-6.249-16.379-6.249-22.628 0L216 308.118l-70.059-70.059c-6.248-6.248-16.379-6.248-22.628 0l-22.627 22.627c-6.248 6.248-6.248 16.379 0 22.627l104 104c6.249 6.249 16.379 6.249 22.628.001z"/&gt;&lt;/symbol&gt;&lt;symbol id="note-notice" viewBox="0 0 512 512" preserveAspectRatio="xMidYMid meet"&gt;&lt;path d="M504 256c0 136.997-111.043 248-248 248S8 392.997 8 256C8 119.083 119.043 8 256 8s248 111.083 248 248zm-248 50c-25.405 0-46 20.595-46 46s20.595 46 46 46 46-20.595 46-46-20.595-46-46-46zm-43.673-165.346l7.418 136c.347 6.364 5.609 11.346 11.982 11.346h48.546c6.373 0 11.635-4.982 11.982-11.346l7.418-136c.375-6.874-5.098-12.654-11.982-12.654h-63.383c-6.884 0-12.356 5.78-11.981 12.654z"/&gt;&lt;/symbol&gt;&lt;symbol id="warning-notice" viewBox="0 0 576 512" preserveAspectRatio="xMidYMid meet"&gt;&lt;path d="M569.517 440.013C587.975 472.007 564.806 512 527.94 512H48.054c-36.937 0-59.999-40.055-41.577-71.987L246.423 23.985c18.467-32.009 64.72-31.951 83.154 0l239.94 416.028zM288 354c-25.405 0-46 20.595-46 46s20.595 46 46 46 46-20.595 46-46-20.595-46-46-46zm-43.673-165.346l7.418 136c.347 6.364 5.609 11.346 11.982 11.346h48.546c6.373 0 11.635-4.982 11.982-11.346l7.418-136c.375-6.874-5.098-12.654-11.982-12.654h-63.383c-6.884 0-12.356 5.78-11.981 12.654z"/&gt;&lt;/symbol&gt;&lt;symbol id="info-notice" viewBox="0 0 512 512" preserveAspectRatio="xMidYMid meet"&gt;&lt;path d="M256 8C119.043 8 8 119.083 8 256c0 136.997 111.043 248 248 248s248-111.003 248-248C504 119.083 392.957 8 256 8zm0 110c23.196 0 42 18.804 42 42s-18.804 42-42 42-42-18.804-42-42 18.804-42 42-42zm56 254c0 6.627-5.373 12-12 12h-88c-6.627 
0-12-5.373-12-12v-24c0-6.627 5.373-12 12-12h12v-64h-12c-6.627 0-12-5.373-12-12v-24c0-6.627 5.373-12 12-12h64c6.627 0 12 5.373 12 12v100h12c6.627 0 12 5.373 12 12v24z"/&gt;&lt;/symbol&gt;&lt;/svg&gt;&lt;/div&gt;&lt;div class="notice warning" &gt;
&lt;p class="first notice-title"&gt;&lt;span class="icon-notice baseline"&gt;&lt;svg&gt;&lt;use href="#warning-notice"&gt;&lt;/use&gt;&lt;/svg&gt;&lt;/span&gt;Warning&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Security note:&lt;/strong&gt;&lt;br&gt;
In March 2026, LiteLLM was subject to a suspected supply chain attack in which versions v1.82.7 and v1.82.8 on PyPI contained a malicious payload designed to harvest credentials and exfiltrate them to an external domain. Users running the official LiteLLM Docker image were not affected, as that deployment path pins dependencies and does not rely on the compromised PyPI packages. If you installed LiteLLM via &lt;code&gt;pip&lt;/code&gt; during the affected window, treat any secrets on that system as compromised and rotate them immediately. See the official incident report for full details and verified safe versions.&lt;/p&gt;&lt;/div&gt;
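&lt;p&gt;Because LiteLLM exposes an OpenAI-compatible API, any OpenAI-style client can talk to it by pointing at the proxy. The sketch below assembles such a request with the standard library only; the hostname, API key, and model name are placeholders for whatever your own LiteLLM instance is configured with:&lt;/p&gt;

```python
import json
import urllib.request

# Placeholder values -- substitute your own LiteLLM host, key, and model.
LITELLM_URL = "https://litellm.home.example.com/v1/chat/completions"
API_KEY = "sk-local-placeholder"

def build_chat_request(model: str, prompt: str) -> urllib.request.Request:
    """Assemble an OpenAI-style chat completion request for the LiteLLM proxy."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        LITELLM_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {API_KEY}",
        },
        method="POST",
    )

req = build_chat_request("gpt-4o-mini", "Say hello")
# Actually sending it is a urllib.request.urlopen(req) call against the proxy.
print(json.loads(req.data)["model"])
```

&lt;p&gt;Swapping the model name in the payload is all it takes to route the same request to a different backend, which is exactly the decoupling LiteLLM provides.&lt;/p&gt;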
&lt;h2 id="searxng-for-live-privacy-friendly-search"&gt;SearXNG for live, privacy-friendly search&lt;/h2&gt;
&lt;p&gt;One of the biggest limitations of a plain chat interface is the lack of current information. SearXNG solves that problem cleanly. It is a self-hosted metasearch engine that aggregates results from multiple sources and gives me a search API under my own control.&lt;/p&gt;
&lt;p&gt;Even outside the AI stack, SearXNG is useful as a search engine. Inside the stack, it becomes more interesting because it can be exposed as a tool for prompts that need fresh information.&lt;/p&gt;
&lt;p&gt;A minimal Compose service might look like this:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-yaml" data-lang="yaml"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="nt"&gt;services&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;searxng&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;image&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;docker.io/searxng/searxng:latest&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;container_name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;searxng&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;restart&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;unless-stopped&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;volumes&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;- &lt;span class="l"&gt;./searxng:/etc/searxng&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;networks&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;- &lt;span class="l"&gt;external&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;labels&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;- &lt;span class="s2"&gt;&amp;#34;traefik.enable=true&amp;#34;&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;- &lt;span class="s2"&gt;&amp;#34;traefik.docker.network=external&amp;#34;&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;- &lt;span class="s2"&gt;&amp;#34;traefik.http.routers.searxng.rule=Host(`search.home.example.com`)&amp;#34;&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;- &lt;span class="s2"&gt;&amp;#34;traefik.http.routers.searxng.entrypoints=https&amp;#34;&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;- &lt;span class="s2"&gt;&amp;#34;traefik.http.routers.searxng.tls.certresolver=cloudflare&amp;#34;&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;- &lt;span class="s2"&gt;&amp;#34;traefik.http.services.searxng.loadbalancer.server.port=8080&amp;#34;&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;Once SearXNG is connected to Open WebUI as a tool, the flow is straightforward:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;The user asks a question that requires current information.&lt;/li&gt;
&lt;li&gt;The model decides to call the search tool.&lt;/li&gt;
&lt;li&gt;SearXNG performs the search.&lt;/li&gt;
&lt;li&gt;Titles, snippets, and URLs are returned as context.&lt;/li&gt;
&lt;li&gt;The model synthesizes an answer grounded in current results.&lt;/li&gt;
&lt;/ol&gt;
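&lt;p&gt;The same flow can be driven directly against the SearXNG JSON API (available once &lt;code&gt;json&lt;/code&gt; is enabled as an output format in &lt;code&gt;settings.yml&lt;/code&gt;). The sketch below is an outline of such a tool call, not the Open WebUI integration itself: it builds the query URL and condenses results into a context block; the hostname is a placeholder matching the Compose example above:&lt;/p&gt;

```python
import urllib.parse

SEARXNG_URL = "https://search.home.example.com/search"  # placeholder host

def build_query_url(query: str) -> str:
    """Build a SearXNG query URL requesting JSON output."""
    params = urllib.parse.urlencode({"q": query, "format": "json"})
    return f"{SEARXNG_URL}?{params}"

def results_to_context(results: list, limit: int = 3) -> str:
    """Condense title/snippet/URL triples into a context block for the model."""
    lines = []
    for r in results[:limit]:
        lines.append(f"- {r['title']}: {r.get('content', '')} ({r['url']})")
    return "\n".join(lines)

# Sample data standing in for a live response; fetching is an HTTP GET
# against build_query_url(...) on the real instance.
sample = [
    {"title": "Example", "content": "A snippet.", "url": "https://example.org"},
]
print(build_query_url("latest traefik release"))
print(results_to_context(sample))
```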
&lt;h2 id="docling-for-document-parsing"&gt;Docling for document parsing&lt;/h2&gt;
&lt;p&gt;The fourth component, Docling, addresses a different problem. Large language models work best with clean text, but many real documents are messy. PDFs, slide decks, and office files often contain broken text flows, layout artifacts, or table structures that are not useful when passed to a model as-is.&lt;/p&gt;
&lt;p&gt;Docling converts these documents into a Markdown representation that is much easier to use as model context. That sounds small, but it is a major quality improvement for local document workflows.&lt;/p&gt;
&lt;p&gt;The Docling service definition is straightforward:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-yaml" data-lang="yaml"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="nt"&gt;services&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;docling&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;image&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;quay.io/docling-project/docling-serve:latest&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;container_name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;docling&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;restart&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;unless-stopped&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;networks&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;- &lt;span class="l"&gt;internal&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;- &lt;span class="l"&gt;external&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;labels&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;- &lt;span class="s2"&gt;&amp;#34;traefik.enable=true&amp;#34;&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;- &lt;span class="s2"&gt;&amp;#34;traefik.docker.network=external&amp;#34;&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;- &lt;span class="s2"&gt;&amp;#34;traefik.http.routers.docling.rule=Host(`docling.home.example.com`)&amp;#34;&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;- &lt;span class="s2"&gt;&amp;#34;traefik.http.routers.docling.entrypoints=https&amp;#34;&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;- &lt;span class="s2"&gt;&amp;#34;traefik.http.routers.docling.tls.certresolver=cloudflare&amp;#34;&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;- &lt;span class="s2"&gt;&amp;#34;traefik.http.services.docling.loadbalancer.server.port=5001&amp;#34;&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;The typical usage pattern is:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Upload a document in Open WebUI.&lt;/li&gt;
&lt;li&gt;Docling parses the file and converts it to Markdown.&lt;/li&gt;
&lt;li&gt;Feed that Markdown into the model as structured prompt context.&lt;/li&gt;
&lt;li&gt;Ask targeted questions against the extracted content.&lt;/li&gt;
&lt;/ol&gt;
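&lt;p&gt;Step 3 of this pattern is mostly prompt assembly: the Markdown that Docling returns is wrapped together with the question before being sent to the model. A minimal sketch; the delimiters and wording are my own convention, not something Docling prescribes:&lt;/p&gt;

```python
def build_document_prompt(markdown: str, question: str) -> str:
    """Wrap Docling's Markdown output and a question into one prompt."""
    return (
        "Answer the question using only the document below.\n\n"
        "--- DOCUMENT START ---\n"
        f"{markdown}\n"
        "--- DOCUMENT END ---\n\n"
        f"Question: {question}"
    )

# Markdown as it might come back from docling-serve for a small table:
doc_md = "# Quarterly Report\n\n| Region | Revenue |\n|---|---|\n| EMEA | 1.2M |"
prompt = build_document_prompt(doc_md, "What was EMEA revenue?")
print(prompt.splitlines()[0])
```

&lt;p&gt;Because the table arrives as clean Markdown rather than PDF layout artifacts, targeted questions against it become reliable.&lt;/p&gt;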
&lt;p&gt;This is especially useful for technical notes, whitepapers, internal PDFs, or vendor documentation where the raw file format is not suitable for direct prompting.&lt;/p&gt;
&lt;h2 id="conclusion"&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;This stack did not start as an attempt to build a local alternative to a commercial AI product. It emerged naturally from an existing homelab that already had strong building blocks: containerized services, Traefik, DNS-based routing, and a bias toward self-hosting.&lt;/p&gt;
&lt;p&gt;Adding Open WebUI, LiteLLM, SearXNG, and Docling turned that base into a practical local AI environment. It gives me a single interface for model interaction, the ability to swap backends without changing clients, a way to enrich prompts with live web data, and a better workflow for document-driven tasks.&lt;/p&gt;
&lt;p&gt;Just as important, it stays operationally consistent with the rest of the homelab. That keeps the setup understandable, maintainable, and worth using day to day.&lt;/p&gt;
&lt;p&gt;Future extensions are obvious: adding a vector database, introducing GPU-backed local inference, routing requests to model endpoints running on specialized inference platforms, or using Open WebUI as a gateway to interact with AI agents. But even without those additions, this combination already covers a large share of the AI workflows I actually care about.&lt;/p&gt;
&lt;h2 id="references"&gt;References&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;My Homelab: A Traefik-centered Self-hosting Setup - &lt;a href="/2026/my-homelab-a-traefik-centered-self-hosting-setup/"&gt;link&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Open WebUI - project site - &lt;a href="https://openwebui.com/"&gt;link&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Open WebUI - GitHub - &lt;a href="https://github.com/open-webui/open-webui"&gt;link&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;LiteLLM - project site - &lt;a href="https://www.litellm.ai/"&gt;link&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;LiteLLM - GitHub - &lt;a href="https://github.com/BerriAI/litellm"&gt;link&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;LiteLLM - Security incident report, March 2026 - &lt;a href="https://docs.litellm.ai/blog/security-update-march-2026"&gt;link&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;SearXNG - documentation - &lt;a href="https://docs.searxng.org/"&gt;link&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;SearXNG - GitHub - &lt;a href="https://github.com/searxng/searxng"&gt;link&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Docling - documentation - &lt;a href="https://docling-project.github.io/docling/"&gt;link&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Docling - GitHub - &lt;a href="https://github.com/docling-project/docling"&gt;link&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;</description></item><item><title>My Homelab: A Traefik-centered Self-hosting Setup</title><link>/2026/my-homelab-a-traefik-centered-self-hosting-setup/</link><pubDate>Sat, 24 Jan 2026 00:00:00 +0000</pubDate><guid>/2026/my-homelab-a-traefik-centered-self-hosting-setup/</guid><description>&lt;figure&gt;&lt;img src="/images/posts/homelab.png"data-src="/images/posts/homelab.png"
/&gt;&lt;figcaption&gt;
&lt;h4&gt;Summary of Homelab services - AI generated&lt;/h4&gt;
&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;h2 id="introduction"&gt;Introduction&lt;/h2&gt;
&lt;p&gt;Several years ago, I began building a small homelab with two primary objectives in mind: gaining hands-on experience with containers and modern application deployment, and running selected services locally to avoid storing certain data in public cloud environments. In hindsight, this environment evolved into a solid foundation for a local AI stack as well, which I now operate alongside the rest of my setup and will detail in a future post. Although the focus here is on a homelab, the technical stack described can be deployed just as easily in any cloud environment, e.g. a VPS or any hyperscaler; all that is required is a virtual machine running a Linux distribution of your choice and a container engine.&lt;/p&gt;
&lt;p&gt;What began as an experiment has turned into a stable setup that I use every day. At the center of this setup is Traefik, which handles all incoming HTTP and HTTPS traffic and lets me access every service over SSL with clean domains like &lt;em&gt;service-name.home.example.com&lt;/em&gt; instead of a collection of raw IP addresses and ports.&lt;/p&gt;
&lt;p&gt;In this post I will walk through how I structure this homelab, explain how Traefik ties everything together, and outline a selection of the services currently running in my lab.&lt;/p&gt;
&lt;h2 id="hardware-and-base-platform"&gt;Hardware and base platform&lt;/h2&gt;
&lt;p&gt;The homelab does not run on high-end servers. Most of the hosts are refurbished x86 thin clients with the following specifications:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;16 to 32 GB of RAM per node&lt;/li&gt;
&lt;li&gt;A modest amount of storage for container images, configuration files, and selected data&lt;/li&gt;
&lt;li&gt;Low power consumption, which is important for a system that runs 24/7&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The environment uses CentOS Stream 9 as the operating system. On top of that, I run Docker and Docker Compose. Nearly every component in the homelab is containerized, with Traefik positioned in front of these containers as a reverse proxy and routing layer.&lt;/p&gt;
&lt;h2 id="architecture-overview"&gt;Architecture overview&lt;/h2&gt;
&lt;p&gt;At a high level, the architecture looks like this:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Several containers run on the hosts&lt;/li&gt;
&lt;li&gt;A dedicated container network called &lt;code&gt;external&lt;/code&gt;, where Traefik and all services that are exposed to the home network reside&lt;/li&gt;
&lt;li&gt;An internal DNS setup and a private domain, such as &lt;code&gt;home.example.com&lt;/code&gt;, where services are exposed as subdomains like:
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;https://pihole.home.example.com&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;https://ntfy.home.example.com&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Clients on the home network resolve these hostnames to the internal IP address of the homelab host, ensuring that traffic remains entirely within the local network. The local DNS server is automatically assigned to clients connected to the internal network, making all services immediately accessible to any device on the same network.&lt;br&gt;
Traefik acts as the single entry point for HTTP and HTTPS. It terminates TLS, routes requests to the appropriate container based on the hostname, and applies middlewares such as redirects and authentication where required.&lt;/p&gt;
&lt;h2 id="traefik-as-the-center-of-the-homelab"&gt;Traefik as the center of the homelab&lt;/h2&gt;
&lt;p&gt;Traefik is an open-source reverse proxy and edge router that integrates well with containerized environments. It monitors the container socket, automatically discovers running containers, and uses labels defined on those containers to configure routing.&lt;/p&gt;
&lt;p&gt;In my setup, Traefik provides three main benefits:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Automatic TLS for everything&lt;br&gt;
Traefik uses the DNS challenge with my DNS provider to request certificates from Let’s Encrypt. I can issue a wildcard certificate for &lt;code&gt;*.home.example.com&lt;/code&gt;, so every internal service gets proper HTTPS without having to manage individual certificates.&lt;/li&gt;
&lt;li&gt;Clean hostnames instead of ports&lt;br&gt;
Every service gets its own subdomain, such as &lt;code&gt;pihole.home.example.com&lt;/code&gt; or &lt;code&gt;ntfy.home.example.com&lt;/code&gt;. This means I do not have to remember that one service is on port 8080, another on 9090, and so on.&lt;/li&gt;
&lt;li&gt;Centralized routing and security&lt;br&gt;
Since everything goes through Traefik, I can:
&lt;ul&gt;
&lt;li&gt;Redirect all HTTP traffic to HTTPS&lt;/li&gt;
&lt;li&gt;Protect specific endpoints with basic auth or other middleware&lt;/li&gt;
&lt;li&gt;Inspect and debug routes using the Traefik dashboard&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
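&lt;p&gt;In practice, each of these benefits boils down to a short, repetitive set of Docker labels per service. A small generator (my own helper, mirroring the label naming used throughout this post) makes the pattern explicit:&lt;/p&gt;

```python
def traefik_labels(name: str, host: str, port: int) -> list:
    """Generate the standard per-service Traefik label set used in this lab."""
    return [
        "traefik.enable=true",
        "traefik.docker.network=external",
        f"traefik.http.routers.{name}.rule=Host(`{host}`)",
        f"traefik.http.routers.{name}.entrypoints=https",
        f"traefik.http.routers.{name}.tls.certresolver=cloudflare",
        f"traefik.http.services.{name}.loadbalancer.server.port={port}",
    ]

# Emit the labels for a service exactly as they would appear in Compose:
for label in traefik_labels("searxng", "search.home.example.com", 8080):
    print(f'- "{label}"')
```

&lt;p&gt;Adding a new service to the lab is then just a matter of picking a router name, a hostname, and the container port.&lt;/p&gt;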
&lt;h2 id="traefik-docker-compose-configuration"&gt;Traefik Docker Compose configuration&lt;/h2&gt;
&lt;p&gt;Here is a simplified version of the Traefik &lt;code&gt;docker-compose.yml&lt;/code&gt; I use:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-YAML" data-lang="YAML"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="nt"&gt;version&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;#34;3&amp;#34;&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="nt"&gt;services&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;traefik&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;image&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;traefik:latest&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;container_name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;traefik&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;restart&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;unless-stopped&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;security_opt&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;- &lt;span class="kc"&gt;no&lt;/span&gt;-&lt;span class="l"&gt;new-privileges:true&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;networks&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;- &lt;span class="l"&gt;external&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;ports&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;- &lt;span class="m"&gt;80&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="m"&gt;80&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;- &lt;span class="m"&gt;443&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="m"&gt;443&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;environment&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;- &lt;span class="l"&gt;CF_API_EMAIL=${CF_API_EMAIL}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;- &lt;span class="l"&gt;CF_DNS_API_TOKEN=${CF_DNS_API_TOKEN}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;volumes&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;- &lt;span class="l"&gt;/etc/localtime:/etc/localtime:ro&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;- &lt;span class="l"&gt;/var/run/docker.sock:/var/run/docker.sock:ro&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;- &lt;span class="l"&gt;./data/traefik.yml:/traefik.yml:ro&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;- &lt;span class="l"&gt;./data/acme.json:/acme.json&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;- &lt;span class="l"&gt;./data/config.yml:/config.yml:ro&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;labels&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;- &lt;span class="s2"&gt;&amp;#34;traefik.enable=true&amp;#34;&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="c"&gt;# HTTP router for Traefik dashboard&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;- &lt;span class="s2"&gt;&amp;#34;traefik.http.routers.traefik.entrypoints=http&amp;#34;&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;- &lt;span class="s2"&gt;&amp;#34;traefik.http.routers.traefik.rule=Host(`traefik.home.example.com`)&amp;#34;&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="c"&gt;# Redirect HTTP to HTTPS&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;- &lt;span class="s2"&gt;&amp;#34;traefik.http.middlewares.traefik-https-redirect.redirectscheme.scheme=https&amp;#34;&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;- &lt;span class="s2"&gt;&amp;#34;traefik.http.middlewares.sslheader.headers.customrequestheaders.X-Forwarded-Proto=https&amp;#34;&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;- &lt;span class="s2"&gt;&amp;#34;traefik.http.routers.traefik.middlewares=traefik-https-redirect&amp;#34;&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="c"&gt;# Basic auth for the secure dashboard&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;- &lt;span class="s2"&gt;&amp;#34;traefik.http.middlewares.traefik-auth.basicauth.users=user:hashed-password&amp;#34;&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="c"&gt;# HTTPS router for Traefik dashboard&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;- &lt;span class="s2"&gt;&amp;#34;traefik.http.routers.traefik-secure.entrypoints=https&amp;#34;&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;- &lt;span class="s2"&gt;&amp;#34;traefik.http.routers.traefik-secure.rule=Host(`traefik.home.example.com`)&amp;#34;&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;- &lt;span class="s2"&gt;&amp;#34;traefik.http.routers.traefik-secure.middlewares=traefik-auth&amp;#34;&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;- &lt;span class="s2"&gt;&amp;#34;traefik.http.routers.traefik-secure.tls=true&amp;#34;&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;- &lt;span class="s2"&gt;&amp;#34;traefik.http.routers.traefik-secure.tls.certresolver=cloudflare&amp;#34;&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;- &lt;span class="s2"&gt;&amp;#34;traefik.http.routers.traefik-secure.tls.domains[0].main=home.example.com&amp;#34;&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;- &lt;span class="s2"&gt;&amp;#34;traefik.http.routers.traefik-secure.tls.domains[0].sans=*.home.example.com&amp;#34;&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;- &lt;span class="s2"&gt;&amp;#34;traefik.http.routers.traefik-secure.service=api@internal&amp;#34;&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="nt"&gt;networks&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;external&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;external&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;The important ideas are:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Traefik listens on ports 80 and 443 and is connected to the &lt;code&gt;external&lt;/code&gt; network.&lt;/li&gt;
&lt;li&gt;Credentials for the DNS provider are passed in as environment variables so that Traefik can solve the DNS-01 challenge and request certificates from Let’s Encrypt.&lt;/li&gt;
&lt;li&gt;The dashboard is exposed at &lt;code&gt;https://traefik.home.example.com&lt;/code&gt;, protected by basic auth.&lt;/li&gt;
&lt;li&gt;The TLS configuration issues a wildcard certificate for &lt;code&gt;*.home.example.com&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;
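&lt;p&gt;The &lt;code&gt;user:hashed-password&lt;/code&gt; value in the basic-auth label can be generated on the host. A minimal sketch, assuming &lt;code&gt;openssl&lt;/code&gt; is available and using placeholder credentials:&lt;/p&gt;

```shell
# Hash a placeholder password for Traefik's basicauth middleware
# ("admin"/"changeme" are examples; htpasswd from apache2-utils works as well).
HASH=$(openssl passwd -apr1 'changeme')

# Inside a docker-compose label, every "$" in the hash must be escaped as "$$".
ESCAPED=$(printf '%s' "$HASH" | sed 's/\$/$$/g')

echo "traefik.http.middlewares.traefik-auth.basicauth.users=admin:${ESCAPED}"
```

&lt;p&gt;The printed line is what replaces the &lt;code&gt;user:hashed-password&lt;/code&gt; placeholder in the label above.&lt;/p&gt;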
&lt;p&gt;Other services join the same &lt;code&gt;external&lt;/code&gt; network and define their own labels, for example:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-YAML" data-lang="YAML"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="nt"&gt;services&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;ntfy&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;image&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l"&gt;binwiederhier/ntfy&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;networks&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;- &lt;span class="l"&gt;external&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;labels&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;- &lt;span class="s2"&gt;&amp;#34;traefik.enable=true&amp;#34;&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;- &lt;span class="s2"&gt;&amp;#34;traefik.http.routers.ntfy.entrypoints=https&amp;#34;&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;- &lt;span class="s2"&gt;&amp;#34;traefik.http.routers.ntfy.rule=Host(`ntfy.home.example.com`)&amp;#34;&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="w"&gt; &lt;/span&gt;- &lt;span class="s2"&gt;&amp;#34;traefik.http.routers.ntfy.tls.certresolver=cloudflare&amp;#34;&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;With this pattern, every service becomes available over HTTPS under its own subdomain without additional manual configuration in Traefik.&lt;/p&gt;
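&lt;p&gt;One detail worth noting: because the compose files declare the shared network with &lt;code&gt;external: true&lt;/code&gt;, Compose expects it to already exist and will not create it. It has to be created once on the Docker host:&lt;/p&gt;

```shell
# Create the shared network referenced by the compose files
# (networks.external.external: true). Run once per host.
docker network create external
```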
&lt;h2 id="core-services-in-my-homelab"&gt;Core services in my homelab&lt;/h2&gt;
&lt;p&gt;On top of Traefik, I run a set of core services that provide DNS, monitoring, automation, messaging, logging, and secrets management. The key components are:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Pi-hole – DNS:&lt;/strong&gt; Provides network-wide DNS resolution and ad-blocking, and handles internal DNS for homelab hostnames such as &lt;code&gt;*.home.example.com&lt;/code&gt;. It also blocks unwanted domains for all devices on the network.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Mafl – Dashboard:&lt;/strong&gt; A minimalistic and flexible homepage for organizing service links, grouping categories, and providing quick navigation. Mafl can perform health checks on linked services, is configured through a simple YAML file, and offers a Progressive Web App for mobile devices. Since each service sits behind Traefik with its own hostname, Mafl serves as a curated entry point to the entire environment.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Ntfy – Messaging / Pub-Sub:&lt;/strong&gt; A lightweight HTTP-based publish/subscribe notification service used for event-driven messaging across the environment. Typical use cases include sending alerts when backups complete and receiving notifications when containers restart unexpectedly. Ntfy provides mobile and desktop apps, allowing access from phones and laptops both inside and outside the home network, depending on firewall and VPN settings.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Dozzle – Container Logs:&lt;/strong&gt; A simple web-based UI for viewing Docker logs in real time. Logs are accessible through a browser, where it is possible to filter by container and tail logs as they update. This is particularly useful when testing new services or debugging automation workflows.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Beszel – Resource Monitoring:&lt;/strong&gt; A lightweight monitoring tool for tracking system metrics and container statistics across multiple machines. It provides CPU, memory, and disk usage insights, making it easy to identify overloaded or misbehaving nodes and maintain visibility into the health of thin clients and other devices.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Uptime Kuma – Service Monitoring:&lt;/strong&gt; A dashboard for monitoring the availability of both internal and external services. It checks defined endpoints, as well as public websites and APIs. If a service becomes unreachable, Uptime Kuma sends alerts, e.g. via Ntfy or other services, providing an early warning system for issues in the homelab.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;n8n – Automation Engine:&lt;/strong&gt; A workflow automation platform used to orchestrate tasks, trigger scripts or containers, and integrate events across services. Typical use cases include reacting to webhooks or scheduled triggers, executing scripts or container actions, and sending notifications through Ntfy when certain conditions are met. Instead of implementing automation logic in custom code, workflows can be modeled visually and integrated directly with containers and external services.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Vaultwarden – Secrets Management:&lt;/strong&gt; A self-hosted Bitwarden-compatible server for securely managing passwords and sensitive information within the homelab. It stores credentials and secrets for services and accounts and enables secure sharing across devices.&lt;/li&gt;
&lt;/ul&gt;
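&lt;p&gt;The internal wildcard DNS that Pi-hole handles for &lt;code&gt;*.home.example.com&lt;/code&gt; boils down to a single dnsmasq rule. A sketch, where the file path and the Traefik host IP are assumptions for illustration:&lt;/p&gt;

```conf
# /etc/dnsmasq.d/02-homelab.conf (read by Pi-hole's embedded dnsmasq)
# Answer every *.home.example.com query with the address of the Traefik host:
address=/home.example.com/192.168.1.10
```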
&lt;h2 id="conclusion"&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;What began as a simple playground for learning containers and avoiding public cloud services for certain use cases has evolved into a practical, resilient platform for running everyday services at home. Centering the setup around Traefik, standardizing on containerized services, and using a wildcard domain with automated TLS have kept the architecture both manageable and extensible. The use of modest, low-power refurbished thin clients has also proven effective in keeping costs and energy consumption low while still offering sufficient resources.&lt;/p&gt;
&lt;p&gt;Over time, the homelab has also turned out to be a solid foundation for hosting local AI services, a topic for a future post. Depending on the criticality of individual services and one’s tolerance for risk, it can be worthwhile to distribute components across independent hosts, monitor services across nodes, or run certain workloads in parallel for redundancy. It is equally important to think carefully about backups to avoid losing data or configurations during failures or experiments. That said, this remains a homelab project rather than a production environment governed by strict service-level agreements; temporary outages are acceptable and part of the experimentation process.&lt;/p&gt;
&lt;p&gt;By following these principles (simple routing, consistent domains and TLS, lightweight hardware, and containerized services), one can build a flexible environment that supports DNS, monitoring, automation, messaging, secrets management, and more, tailored to one&amp;rsquo;s own needs.&lt;/p&gt;
&lt;h2 id="references"&gt;References&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;CentOS Stream - &lt;a href="https://www.centos.org/"&gt;link&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Traefik - reverse proxy - &lt;a href="https://github.com/traefik/traefik"&gt;link&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Pi-hole - network-wide ad blocking and DNS - &lt;a href="https://github.com/pi-hole/pi-hole"&gt;link&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Mafl - dashboard for homelab services - &lt;a href="https://github.com/hywax/mafl"&gt;link&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;ntfy - publish/subscribe push notifications - &lt;a href="https://github.com/binwiederhier/ntfy"&gt;link&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Dozzle - web-based interface for monitoring container logs - &lt;a href="https://github.com/amir20/dozzle"&gt;link&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Beszel - resource monitoring for multiple clients - &lt;a href="https://github.com/henrygd/beszel"&gt;link&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Uptime Kuma - monitoring tool - &lt;a href="https://github.com/louislam/uptime-kuma"&gt;link&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;n8n - workflow automation - &lt;a href="https://github.com/n8n-io/n8n"&gt;link&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Vaultwarden - Bitwarden-compatible server - &lt;a href="https://github.com/dani-garcia/vaultwarden"&gt;link&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Youtube Video - Techno Tim: Put Wildcard Certificates and SSL on EVERYTHING - &lt;a href="https://www.youtube.com/watch?v=liV3c9m_OX8"&gt;link&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;</description></item></channel></rss>