Initial: Add all homelab manifests
.gitignore · vendored · Normal file · 3 lines
@@ -0,0 +1,3 @@
**/secret.yaml
*.key
*.pem
CLAUDE.md · Normal file · 30 lines
@@ -0,0 +1,30 @@
# RNK Homelab Documentation

## Rule

After EVERY installation or configuration, automatically write
documentation to /home/mtkadmin/homelab/docs/

## Documentation format

- Date and time
- What was installed/configured
- Which commands were run
- Outputs and results
- Next steps
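A minimal skeleton matching this format might look like the following (file name, date, and entries are purely illustrative):

```markdown
# 07 · <Component>

**Date:** 2026-03-17 14:30
**Nodes:** rnk-cp01, rnk-wrk01, rnk-wrk02

## What was installed/configured
...

## Commands run
...

## Outputs and results
...

## Next steps
- [ ] ...
```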

## Cluster info

- rnk-cp01: 192.168.11.170 (Control Plane)
- rnk-wrk01: 192.168.11.171 (Worker 01)
- rnk-wrk02: 192.168.11.172 (Worker 02 · AI)
- User: mtkadmin

## Cluster nodes

- rnk-cp01: 192.168.11.170 (Control Plane) · local node
- rnk-wrk01: 192.168.11.171 (Worker · Services)
- rnk-wrk02: 192.168.11.172 (Worker · AI + Ops)
- User everywhere: mtkadmin
- SSH keys are set up; direct connections are possible

## Rules

- After every installation, write documentation to /home/mtkadmin/homelab/docs/
- File names use a number prefix: 01-..., 02-..., 03-...
- Always take all three nodes into account where possible
cert-manager-issuers.yaml · Normal file · 30 lines
@@ -0,0 +1,30 @@
---
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    email: homelab@befast.at
    privateKeySecretRef:
      name: letsencrypt-staging-account-key
    solvers:
      - http01:
          ingress:
            ingressClassName: traefik
---
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-production
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: homelab@befast.at
    privateKeySecretRef:
      name: letsencrypt-production-account-key
    solvers:
      - http01:
          ingress:
            ingressClassName: traefik
docs/00-hardware.md · Normal file · 25 lines
@@ -0,0 +1,25 @@
# RNK Homelab — Hardware Overview

## rnk-cp01 (Control Plane) · 192.168.11.170
- CPU: Intel Core i7-7700HQ @ 2.80GHz · 4C/8T · VT-x
- RAM: 32 GB DDR4 2400 MT/s
- Storage: 1 TB NVMe (Toshiba KXG50ZNV1T02)
- GPU: NVIDIA GTX 1050 Mobile (4GB) + Intel HD 630

## rnk-wrk01 (Worker 01 · Services) · 192.168.11.171
- CPU: Intel Core i7-10750H @ 2.60GHz · 6C/12T · VT-x
- RAM: 32 GB
- Storage: 954 GB NVMe (Kioxia KXG60ZNV1T02)
- GPU: NVIDIA GTX 1650 Ti Mobile + Intel UHD (CometLake)

## rnk-wrk02 (Worker 02 · AI + Ops) · 192.168.11.172
- CPU: Intel Core i7-12700H @ 4.7GHz · 14C/20T · VT-x
- RAM: 64 GB
- Storage: 2 TB NVMe (SK Hynix PC801)
- GPU: NVIDIA RTX 3050 Ti Mobile + Intel Iris Xe

## Summary
- Total CPU threads: 40
- Total RAM: 128 GB
- Total storage: ~3.9 TB NVMe
- AI node: rnk-wrk02 (RTX 3050 Ti · 64GB RAM)
docs/01-kvm-libvirt.md · Normal file · 143 lines
@@ -0,0 +1,143 @@
# 01 — KVM & libvirt Installation

**Date:** 2026-03-16
**Host:** rnk-cp01
**OS:** Ubuntu 24.04.4 LTS (Noble Numbat)

---

## Overview

Installation and configuration of KVM (Kernel-based Virtual Machine) as the hypervisor and libvirt as the management layer for VMs in the homelab.

---

## Prerequisites

### Check hardware virtualization

```bash
kvm-ok
```

Expected output:
```
INFO: /dev/kvm exists
KVM acceleration can be used
```

If `kvm-ok` is not available, install `cpu-checker` first:
```bash
sudo apt-get install -y cpu-checker
```

---

## Installation

### Install packages

```bash
sudo apt-get install -y \
  qemu-kvm \
  libvirt-daemon-system \
  libvirt-clients \
  bridge-utils \
  virtinst \
  virt-manager \
  cpu-checker
```

| Package | Description |
|--------------------------|-----------------------------------------------|
| `qemu-kvm` | QEMU with KVM support (hypervisor) |
| `libvirt-daemon-system` | libvirt daemon + systemd integration |
| `libvirt-clients` | CLI tools (`virsh`) |
| `bridge-utils` | network bridging for VMs |
| `virtinst` | `virt-install` for creating VMs |
| `virt-manager` | graphical VM management |
| `cpu-checker` | `kvm-ok` tool |

---

## Configuration

### Add the user to the groups

```bash
sudo usermod -aG libvirt mtkadmin
sudo usermod -aG kvm mtkadmin
```

Then log out and back in, or:
```bash
newgrp libvirt
```

Check the groups:
```bash
groups mtkadmin
# mtkadmin : mtkadmin adm cdrom sudo dip plugdev kvm lxd libvirt
```

### Passwordless sudo (for non-interactive processes)

```bash
echo "mtkadmin ALL=(ALL) NOPASSWD: ALL" | sudo tee /etc/sudoers.d/mtkadmin-nopasswd
```

---

## Start the service

```bash
sudo systemctl enable --now libvirtd
sudo systemctl status libvirtd
```

The service starts automatically at boot (`enabled`).

---

## Versions (at install time)

| Component | Version |
|------------|---------|
| libvirt | 10.0.0 |
| QEMU | 8.2.2 |
| API | QEMU 10.0.0 |

---

## Verification

```bash
# KVM available?
kvm-ok

# libvirtd running?
sudo systemctl status libvirtd

# virsh working?
virsh version

# default network present?
virsh net-list --all
```

---

## Default network (virbr0)

libvirt automatically creates a NAT network:

- **Name:** default
- **Bridge:** virbr0
- **Subnet:** 192.168.122.0/24
- **DHCP:** enabled (via dnsmasq)

Activate the network (if it is not active):
```bash
sudo virsh net-start default
sudo virsh net-autostart default
```
docs/01-network-bridge.md · Normal file · 202 lines
@@ -0,0 +1,202 @@
# 01 — Network Bridge (br0)

**Date:** 2026-03-16
**Nodes:** rnk-cp01, rnk-wrk01, rnk-wrk02

---

## Goal

Configure a Linux bridge (`br0`) on each node so that KVM VMs are reachable directly in the LAN segment `192.168.11.0/24` (no NAT).

---

## Node overview

| Node | IP | Ethernet interface |
|-----------|------------------|-----------------------|
| rnk-cp01 | 192.168.11.170 | enx1065308999be |
| rnk-wrk01 | 192.168.11.171 | enxa4bb6df4c4d7 |
| rnk-wrk02 | 192.168.11.172 | enxcc96e5c5702b |

---

## Procedure

### 1. Check the existing Netplan config

```bash
sudo cat /etc/netplan/50-cloud-init.yaml
ip link show
```

### 2. Write the new bridge config

Create the new file `/etc/netplan/99-br0.yaml` (example for rnk-cp01):

```yaml
network:
  version: 2
  ethernets:
    enx1065308999be:
      dhcp4: no
      dhcp6: no
  bridges:
    br0:
      interfaces: [enx1065308999be]
      addresses:
        - "192.168.11.170/24"
      nameservers:
        addresses:
          - 192.168.11.1
        search:
          - int.befast.at
      routes:
        - to: "default"
          via: "192.168.11.1"
      parameters:
        stp: false
        forward-delay: 0
```

> `stp: false` and `forward-delay: 0` make the bridge available immediately, without the spanning-tree delay; this matters for VMs that use DHCP at boot.

### 3. Set permissions

```bash
sudo chmod 600 /etc/netplan/99-br0.yaml
```

Otherwise Netplan refuses to apply the config (warning: "Permissions too open").

### 4. Remove the old cloud-init config

```bash
sudo rm /etc/netplan/50-cloud-init.yaml
```

### 5. Apply the config

```bash
sudo netplan apply
```

The IP moves from the Ethernet interface to `br0`; the SSH connection survives because the IP stays the same.

---

## Configuration per node

### rnk-cp01 — /etc/netplan/99-br0.yaml

```yaml
network:
  version: 2
  ethernets:
    enx1065308999be:
      dhcp4: no
      dhcp6: no
  bridges:
    br0:
      interfaces: [enx1065308999be]
      addresses:
        - "192.168.11.170/24"
      nameservers:
        addresses:
          - 192.168.11.1
        search:
          - int.befast.at
      routes:
        - to: "default"
          via: "192.168.11.1"
      parameters:
        stp: false
        forward-delay: 0
```

### rnk-wrk01 — /etc/netplan/99-br0.yaml

```yaml
network:
  version: 2
  ethernets:
    enxa4bb6df4c4d7:
      dhcp4: no
      dhcp6: no
  bridges:
    br0:
      interfaces: [enxa4bb6df4c4d7]
      addresses:
        - "192.168.11.171/24"
      nameservers:
        addresses:
          - 192.168.11.1
        search:
          - int.befast.at
      routes:
        - to: "default"
          via: "192.168.11.1"
      parameters:
        stp: false
        forward-delay: 0
```

### rnk-wrk02 — /etc/netplan/99-br0.yaml

```yaml
network:
  version: 2
  ethernets:
    enxcc96e5c5702b:
      dhcp4: no
      dhcp6: no
  bridges:
    br0:
      interfaces: [enxcc96e5c5702b]
      addresses:
        - "192.168.11.172/24"
      nameservers:
        addresses:
          - 192.168.11.1
        search:
          - int.befast.at
      routes:
        - to: "default"
          via: "192.168.11.1"
      parameters:
        stp: false
        forward-delay: 0
```

---

## Verification

```bash
# check the bridge interface
ip addr show br0

# check the bridge members
bridge link show

# test connectivity
ping -c2 192.168.11.1
```

Expected output of `ip addr show br0`:
```
br0: <BROADCAST,MULTICAST,UP,LOWER_UP> ...
    inet 192.168.11.XXX/24 brd 192.168.11.255 scope global br0
```

---

## Result

| Node | br0 IP | Status |
|-----------|------------------|--------|
| rnk-cp01 | 192.168.11.170 | UP |
| rnk-wrk01 | 192.168.11.171 | UP |
| rnk-wrk02 | 192.168.11.172 | UP |

VMs can now be started with `--network bridge=br0` and receive an IP directly from the LAN segment.
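As a quick smoke test of the bridge, a throwaway VM can be attached to `br0` with `virt-install` from the packages installed in `01-kvm-libvirt.md`. This is a minimal sketch only; the VM name, sizes, ISO path and `--osinfo` value are illustrative and need to be adjusted:

```bash
# Hypothetical test VM on the br0 bridge (all values are examples).
# If it works, the VM obtains an address in 192.168.11.0/24 from the LAN DHCP server.
virt-install \
  --name br0-test \
  --memory 2048 \
  --vcpus 2 \
  --disk size=10 \
  --cdrom /var/lib/libvirt/images/ubuntu-24.04-live-server-amd64.iso \
  --network bridge=br0 \
  --osinfo ubuntu24.04 \
  --graphics spice

# Clean up the test VM afterwards:
# virsh destroy br0-test && virsh undefine br0-test --remove-all-storage
```

Adjust `--osinfo` to a value listed by `virt-install --osinfo list`, and point `--cdrom` at an ISO that actually exists on the host.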
docs/02-k3s-installation.md · Normal file · 154 lines
@@ -0,0 +1,154 @@
# 02 — k3s Installation

**Date:** 2026-03-16
**Version:** v1.34.5+k3s1
**Container runtime:** containerd 2.1.5-k3s1

---

## Cluster overview

| Node | Role | IP |
|-----------|----------------|------------------|
| rnk-cp01 | control-plane | 192.168.11.170 |
| rnk-wrk01 | agent (worker) | 192.168.11.171 |
| rnk-wrk02 | agent (worker) | 192.168.11.172 |

---

## Prerequisites

- KVM + libvirt installed (see `01-kvm-libvirt.md`)
- Bridge `br0` configured on all nodes (see `01-network-bridge.md`)
- Passwordless SSH access from rnk-cp01 to rnk-wrk01 and rnk-wrk02
- Internet access on all nodes

---

## Installation

### 1. k3s server on rnk-cp01

```bash
curl -sfL https://get.k3s.io | sh -s - server \
  --node-ip=192.168.11.170 \
  --tls-san=192.168.11.170 \
  --flannel-iface=br0 \
  --write-kubeconfig-mode=644
```

| Flag | Explanation |
|------|-----------|
| `--node-ip` | IP address this node advertises in the cluster |
| `--tls-san` | adds the IP to the TLS certificate (for external kubectl access) |
| `--flannel-iface=br0` | Flannel CNI uses br0 instead of the physical interface |
| `--write-kubeconfig-mode=644` | kubeconfig readable for non-root users |

> Note: `--advertise-addr` no longer exists in k3s v1.34+. The correct flag is `--tls-san`.

**Check the status:**
```bash
sudo systemctl is-active k3s
```

### 2. Read the node token

```bash
sudo cat /var/lib/rancher/k3s/server/node-token
```

The token is required for the agent join.

### 3. k3s agent on rnk-wrk01 and rnk-wrk02

From rnk-cp01 via SSH, both workers in parallel:

```bash
TOKEN=$(sudo cat /var/lib/rancher/k3s/server/node-token)

# rnk-wrk01
ssh mtkadmin@192.168.11.171 "curl -sfL https://get.k3s.io | \
  K3S_URL=https://192.168.11.170:6443 \
  K3S_TOKEN='$TOKEN' \
  sh -s - agent \
  --node-ip=192.168.11.171 \
  --flannel-iface=br0" &

# rnk-wrk02
ssh mtkadmin@192.168.11.172 "curl -sfL https://get.k3s.io | \
  K3S_URL=https://192.168.11.170:6443 \
  K3S_TOKEN='$TOKEN' \
  sh -s - agent \
  --node-ip=192.168.11.172 \
  --flannel-iface=br0" &

wait
```

---

## Verification

```bash
kubectl get nodes -o wide
```

Expected output:
```
NAME        STATUS   ROLES           AGE   VERSION        INTERNAL-IP
rnk-cp01    Ready    control-plane   ...   v1.34.5+k3s1   192.168.11.170
rnk-wrk01   Ready    <none>          ...   v1.34.5+k3s1   192.168.11.171
rnk-wrk02   Ready    <none>          ...   v1.34.5+k3s1   192.168.11.172
```

---

## Kubeconfig

On rnk-cp01 the kubeconfig lives at:
```
/etc/rancher/k3s/k3s.yaml
```

For external access (e.g. from a laptop), copy it and adjust the server IP:
```bash
scp mtkadmin@192.168.11.170:/etc/rancher/k3s/k3s.yaml ~/.kube/config
sed -i 's/127.0.0.1/192.168.11.170/' ~/.kube/config
```

---

## Useful commands

```bash
# list all nodes
kubectl get nodes -o wide

# check the system pods
kubectl get pods -n kube-system

# k3s logs (server)
sudo journalctl -u k3s -f

# k3s logs (agent)
sudo journalctl -u k3s-agent -f

# uninstall k3s (server)
sudo k3s-uninstall.sh

# uninstall k3s (agent)
sudo k3s-agent-uninstall.sh
```

---

## Result

All three nodes are in the `Ready` state:

```
NAME        STATUS   ROLES           VERSION        INTERNAL-IP      CONTAINER-RUNTIME
rnk-cp01    Ready    control-plane   v1.34.5+k3s1   192.168.11.170   containerd://2.1.5-k3s1
rnk-wrk01   Ready    <none>          v1.34.5+k3s1   192.168.11.171   containerd://2.1.5-k3s1
rnk-wrk02   Ready    <none>          v1.34.5+k3s1   192.168.11.172   containerd://2.1.5-k3s1
```
docs/03-longhorn.md · Normal file · 197 lines
@@ -0,0 +1,197 @@
# 03 — Longhorn Distributed Storage

**Date:** 2026-03-17
**Version:** Longhorn v1.11.1 (Helm chart 1.11.1)
**Namespace:** longhorn-system

---

## Overview

Longhorn is a cloud-native, distributed block storage system for Kubernetes. It replicates volumes across multiple nodes and provides snapshots, backups and a web UI.

---

## Prerequisites

### Install Helm

Helm was not present and was installed:

```bash
curl -fsSL https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
# → helm installed into /usr/local/bin/helm
# version: v3.20.1
```

### Set up the kubeconfig

```bash
mkdir -p ~/.kube
cp /etc/rancher/k3s/k3s.yaml ~/.kube/config
chmod 600 ~/.kube/config
```

---

## Installation

### 1. Add the Helm repository

```bash
helm repo add longhorn https://charts.longhorn.io
helm repo update
```

Output:
```
"longhorn" has been added to your repositories
...Successfully got an update from the "longhorn" chart repository
```

### 2. Create the namespace

```bash
kubectl create namespace longhorn-system
```

### 3. Install Longhorn via Helm

```bash
helm install longhorn longhorn/longhorn \
  --namespace longhorn-system \
  --set defaultSettings.defaultReplicaCount=2 \
  --wait \
  --timeout 10m
```

Parameters:
- `defaultReplicaCount=2`: every volume is replicated to 2 of the 3 nodes (sufficient for a 3-node cluster)

Output:
```
NAME: longhorn
LAST DEPLOYED: Tue Mar 17 08:42:26 2026
NAMESPACE: longhorn-system
STATUS: deployed
REVISION: 1
APP VERSION: v1.11.1
```

---

## Pod status after installation

```bash
kubectl get pods -n longhorn-system -o wide
```

All 27 pods are `Running`:

| Pod group | Count | Nodes |
|---|---|---|
| longhorn-manager | 3 | cp01, wrk01, wrk02 |
| engine-image | 3 | cp01, wrk01, wrk02 |
| instance-manager | 3 | cp01, wrk01, wrk02 |
| longhorn-csi-plugin | 3 | cp01, wrk01, wrk02 |
| csi-attacher | 3 | cp01, wrk01, wrk02 |
| csi-provisioner | 3 | cp01, wrk01, wrk02 |
| csi-resizer | 3 | cp01, wrk01, wrk02 |
| csi-snapshotter | 3 | cp01, wrk01, wrk02 |
| longhorn-ui | 2 | wrk01, wrk02 |
| longhorn-driver-deployer | 1 | wrk02 |

> **Note:** `longhorn-manager` on `rnk-wrk01` initially showed a CrashLoopBackOff (2 restarts). The cause was a transient startup problem with the instance manager. Once that had initialized, the pod ran stably.

---

## StorageClass

Longhorn registers itself as the **default StorageClass**:

```bash
kubectl get storageclass
```

```
NAME                   PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION
local-path (default)   rancher.io/local-path   Delete          WaitForFirstConsumer   false
longhorn (default)     driver.longhorn.io      Delete          Immediate              true
longhorn-static        driver.longhorn.io      Delete          Immediate              true
```

> Since two default StorageClasses now exist, `local-path` should be marked as non-default if needed:
> ```bash
> kubectl patch storageclass local-path -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
> ```

---

## Services

```
longhorn-admission-webhook    ClusterIP   10.43.89.97     9502/TCP
longhorn-backend              ClusterIP   10.43.67.94     9500/TCP
longhorn-frontend             ClusterIP   10.43.107.228   80/TCP
longhorn-recovery-backend     ClusterIP   10.43.69.95     9503/TCP
```

### Reach the web UI (port-forward)

```bash
kubectl port-forward -n longhorn-system svc/longhorn-frontend 8080:80
# the UI is then available at http://localhost:8080
```

---

## Helm release info

```
NAME       NAMESPACE         REVISION   STATUS     CHART             APP VERSION
longhorn   longhorn-system   1          deployed   longhorn-1.11.1   v1.11.1
```

---

## Next steps

- [x] Mark `local-path` as a non-default StorageClass → already correct (longhorn is default)
- [x] Expose the Longhorn UI permanently via Ingress → done 2026-03-19
- [ ] Configure a backup target (e.g. NFS or S3-compatible)
- [ ] PVC test: create a test deployment with a Longhorn volume (see the sketch below)
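A minimal sketch of such a PVC test, assuming the default `longhorn` StorageClass shown above (names and sizes are illustrative):

```yaml
# Hypothetical test manifest: a 1Gi Longhorn-backed PVC and a pod that writes to it.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: longhorn-test-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: longhorn
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: longhorn-test-pod
spec:
  containers:
    - name: writer
      image: busybox:1.36
      command: ["sh", "-c", "echo hello-longhorn > /data/test && sleep 3600"]
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: longhorn-test-pvc
```

After `kubectl apply`, the volume and its two replicas should show up in the Longhorn UI; deleting both objects removes them again.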

---

## Longhorn UI Ingress (2026-03-19)

Created an Ingress for permanent access to the Longhorn UI:

```bash
kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: longhorn-ingress
  namespace: longhorn-system
  annotations:
    traefik.ingress.kubernetes.io/router.entrypoints: web
spec:
  ingressClassName: traefik
  rules:
    - host: longhorn.192.168.11.180.nip.io
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: longhorn-frontend
                port:
                  number: 80
EOF
```

**URL:** http://longhorn.192.168.11.180.nip.io

**Note:** No password protection; reachable only on the internal network.
docs/04-traefik.md · Normal file · 198 lines
@@ -0,0 +1,198 @@
# 04 — Traefik Ingress Controller

**Date:** 2026-03-17
**Version:** Traefik v3.6.9 (Helm chart traefik-39.0.201+up39.0.2)
**Namespace:** kube-system
**Source:** k3s built-in (installed automatically by k3s)

---

## Overview

k3s ships Traefik as its default ingress controller. It is installed automatically via Helm into the `kube-system` namespace on first cluster start, so no manual installation was needed.

Traefik v3.6.9 runs as a `LoadBalancer` service and is reachable on all three cluster nodes.

---

## Check: is Traefik already installed?

```bash
kubectl get pods -n kube-system | grep traefik
```

Output:
```
helm-install-traefik-cbxhx        0/1   Completed   2   11h
helm-install-traefik-crd-8dcsk    0/1   Completed   0   11h
svclb-traefik-dea220eb-29vm9      2/2   Running     2   11h
svclb-traefik-dea220eb-fh52c      2/2   Running     2   11h
svclb-traefik-dea220eb-plnwv      2/2   Running     2   11h
traefik-788bc4688c-c6m7w          1/1   Running     1   11h
```

→ Traefik is already running. No manual installation necessary.

---

## Helm release

```bash
helm list -n kube-system
```

```
NAME          NAMESPACE     REVISION   STATUS     CHART                           APP VERSION
traefik       kube-system   1          deployed   traefik-39.0.201+up39.0.2       v3.6.9
traefik-crd   kube-system   1          deployed   traefik-crd-39.0.201+up39.0.2   v3.6.9
```

Two Helm releases:
- **traefik**: the controller itself
- **traefik-crd**: the custom resource definitions (IngressRoute, Middleware, etc.)

---

## Service & reachability

```bash
kubectl get svc traefik -n kube-system
```

```
NAME      TYPE           CLUSTER-IP     EXTERNAL-IP                                     PORT(S)
traefik   LoadBalancer   10.43.61.252   192.168.11.170,192.168.11.171,192.168.11.172   80:31202/TCP,443:32016/TCP
```

Traefik is reachable via all three node IPs:

| Node | IP | HTTP | HTTPS |
|---|---|---|---|
| rnk-cp01 | 192.168.11.170 | :80 | :443 |
| rnk-wrk01 | 192.168.11.171 | :80 | :443 |
| rnk-wrk02 | 192.168.11.172 | :80 | :443 |

NodePorts: `31202` (HTTP), `32016` (HTTPS)

---

## Configuration (EntryPoints)

Traefik listens on the following ports:

| EntryPoint | Port | Purpose |
|---|---|---|
| `web` | 8000 (→ 80) | HTTP |
| `websecure` | 8443 (→ 443) | HTTPS (TLS enabled) |
| `traefik` | 8080 | dashboard / API |
| `metrics` | 9100 | Prometheus metrics |

Active features (from the deployment args):
- `--api.dashboard=true` — dashboard enabled
- `--ping=true` — health check at `/ping`
- `--metrics.prometheus=true` — Prometheus scraping enabled
- `--providers.kubernetescrd` — Traefik CRDs (IngressRoute, Middleware, …)
- `--providers.kubernetesingress` — standard Kubernetes Ingress
- `--entryPoints.websecure.http.tls=true` — TLS on websecure
- `--log.level=INFO`

---

## IngressClass

```bash
kubectl get ingressclass
```

```
NAME      CONTROLLER                      PARAMETERS   AGE
traefik   traefik.io/ingress-controller   <none>       11h
```

IngressClass name for Ingress objects: `traefik`

---

## Custom Resource Definitions (CRDs)

Installed Traefik CRDs:

```
ingressroutes.traefik.io
ingressroutetcps.traefik.io
ingressrouteudps.traefik.io
middlewares.traefik.io
middlewaretcps.traefik.io
serverstransports.traefik.io
serverstransporttcps.traefik.io
tlsoptions.traefik.io
tlsstores.traefik.io
traefikservices.traefik.io
```

Additionally, Traefik Hub CRDs (API gateway / management, currently unused):
`accesscontrolpolicies`, `aiservices`, `apiauths`, `apibundles`, etc.

---

## Open the dashboard (port-forward)

The Traefik dashboard is available via the `traefik` entrypoint (port 8080) but is not exposed externally:

```bash
kubectl port-forward -n kube-system deployment/traefik 9000:8080
# dashboard: http://localhost:9000/dashboard/
```

---

## Example: Ingress object

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
  namespace: default
spec:
  ingressClassName: traefik
  rules:
    - host: my-app.homelab.local
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app-svc
                port:
                  number: 80
```

## Example: IngressRoute (Traefik CRD)

```yaml
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  name: my-app
  namespace: default
spec:
  entryPoints:
    - web
  routes:
    - match: Host(`my-app.homelab.local`)
      kind: Rule
      services:
        - name: my-app-svc
          port: 80
```

---

## Next steps

- [ ] Expose the Traefik dashboard via IngressRoute with BasicAuth (see the sketch below)
- [ ] Configure a default TLS certificate (e.g. via cert-manager + Let's Encrypt)
- [ ] Point wildcard DNS for `*.homelab.local` at the cluster IPs
- [ ] Set up Prometheus scraping for the Traefik metrics
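A possible sketch for the first item, assuming the dashboard should answer on `traefik.192.168.11.170.nip.io` (hostname, secret name and credentials are illustrative; the basicAuth middleware reads htpasswd entries from the `users` key of the secret):

```bash
# Hypothetical BasicAuth secret for user "admin" (htpasswd comes from apache2-utils).
htpasswd -nb admin 'changeme' | kubectl create secret generic traefik-dashboard-auth \
  --namespace kube-system --from-file=users=/dev/stdin
```

The IngressRoute then routes to the built-in `api@internal` service with the middleware attached:

```yaml
apiVersion: traefik.io/v1alpha1
kind: Middleware
metadata:
  name: dashboard-auth
  namespace: kube-system
spec:
  basicAuth:
    secret: traefik-dashboard-auth
---
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  name: traefik-dashboard
  namespace: kube-system
spec:
  entryPoints:
    - websecure
  routes:
    - match: Host(`traefik.192.168.11.170.nip.io`)
      kind: Rule
      middlewares:
        - name: dashboard-auth
      services:
        - name: api@internal
          kind: TraefikService
```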
docs/05-cert-manager.md · Normal file · 297 lines
@@ -0,0 +1,297 @@
# 05 — cert-manager

**Date:** 2026-03-17
**Version:** cert-manager v1.20.0 (Helm chart v1.20.0)
**Namespace:** cert-manager

---

## Overview

cert-manager automates issuing and renewing TLS certificates in Kubernetes. It integrates with Let's Encrypt via ACME and uses Traefik as the ingress controller for the HTTP-01 challenge.

---

## Installation

### 1. Add the Jetstack Helm repository

```bash
helm repo add jetstack https://charts.jetstack.io
helm repo update
```

Output:
```
"jetstack" has been added to your repositories
...Successfully got an update from the "jetstack" chart repository
```

### 2. Create the namespace

```bash
kubectl create namespace cert-manager
```

### 3. Install cert-manager via Helm (with CRDs)

```bash
helm install cert-manager jetstack/cert-manager \
  --namespace cert-manager \
  --version v1.20.0 \
  --set crds.enabled=true \
  --wait \
  --timeout 5m
```

Output:
```
NAME: cert-manager
LAST DEPLOYED: Tue Mar 17 09:38:11 2026
NAMESPACE: cert-manager
STATUS: deployed
REVISION: 1
cert-manager v1.20.0 has been deployed successfully!
```

---

## Pod status

```bash
kubectl get pods -n cert-manager
```

```
NAME                                       READY   STATUS    RESTARTS   AGE
cert-manager-cainjector-68c64dbb9b-xvbcr   1/1     Running   0          ...
cert-manager-f5cd6c77c-lfpg7               1/1     Running   0          ...
cert-manager-webhook-54d5d87669-2pgsj      1/1     Running   0          ...
```

All 3 pods run stably.

---

## DNS problem: ISP wildcard interception

### Symptom

After creation, the ClusterIssuers stayed at `Ready: False`:

```
Failed to register ACME account: Get "https://acme-v02.api.letsencrypt.org/directory":
remote error: tls: unrecognized name
```

### Cause

The Netplan configuration on all nodes had the search domain `int.befast.at`
statically configured. The kubelet copies it from the host's `/etc/resolv.conf`
straight into every pod:

```
# /etc/resolv.conf on the host (systemd-resolved), before the fix
nameserver 127.0.0.53
search int.befast.at
```

Pods therefore received the following `/etc/resolv.conf`:

```
search cert-manager.svc.cluster.local svc.cluster.local cluster.local int.befast.at
nameserver 10.43.0.10
options ndots:5
```

With `ndots:5`, `acme-v02.api.letsencrypt.org` (3 dots) is first resolved as
`acme-v02.api.letsencrypt.org.int.befast.at`, and the ISP DNS
(`84.191.81.126`) answered with a wildcard record whose TLS certificate does
not include the SNI name `acme-v02.api.letsencrypt.org`.

### Fix: remove the search domain from Netplan (all 3 nodes)

The `search:` section was removed from `/etc/netplan/99-br0.yaml` on all nodes.
Backups are kept as `/etc/netplan/99-br0.yaml.bak` on each node.

**rnk-cp01 (192.168.11.170):**
```bash
sudo cp /etc/netplan/99-br0.yaml /etc/netplan/99-br0.yaml.bak
# removed the search: / - int.befast.at lines
sudo netplan apply
```

**rnk-wrk01 + rnk-wrk02 (via SSH):**
```bash
for IP in 192.168.11.171 192.168.11.172; do
  ssh mtkadmin@$IP "
    sudo cp /etc/netplan/99-br0.yaml /etc/netplan/99-br0.yaml.bak
    sudo sed -i '/^ *search:/,/^ *- int.befast.at/d' /etc/netplan/99-br0.yaml
    sudo netplan apply
  "
done
```

**`/etc/netplan/99-br0.yaml` after the change (example rnk-cp01):**
```yaml
network:
  version: 2
  ethernets:
    enx1065308999be:
      dhcp4: no
      dhcp6: no
  bridges:
    br0:
      interfaces: [enx1065308999be]
      addresses:
        - "192.168.11.170/24"
      nameservers:
        addresses:
          - 192.168.11.1
      routes:
        - to: "default"
          via: "192.168.11.1"
      parameters:
        stp: false
        forward-delay: 0
```

### Result

```
# /etc/resolv.conf on all hosts, after the fix
nameserver 127.0.0.53
search .
```

Pods now get a clean DNS configuration without the ISP domain:
```
search default.svc.cluster.local svc.cluster.local cluster.local
nameserver 10.43.0.10
options ndots:5
```

Checking DNS resolution from inside a pod:
```bash
kubectl run dns-test --image=busybox:1.28 --restart=Never -- sleep 30
kubectl exec dns-test -- nslookup acme-v02.api.letsencrypt.org
# Name: acme-v02.api.letsencrypt.org → 172.65.32.248 ✓
kubectl delete pod dns-test --grace-period=0
```

> **Note:** CoreDNS was additionally configured to `forward . 8.8.8.8 8.8.4.4 1.1.1.1`
> (instead of `forward . /etc/resolv.conf`), so that future host DNS changes
> do not propagate into pods.
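One way this CoreDNS change can be made is by editing the Corefile in the `coredns` ConfigMap; this is a sketch only, and note that k3s may re-apply its packaged CoreDNS manifest on a service restart, so the change may need to be persisted separately (e.g. via a custom override):

```bash
# Open the Corefile for editing:
kubectl -n kube-system edit configmap coredns

# In the Corefile, replace
#     forward . /etc/resolv.conf
# with
#     forward . 8.8.8.8 8.8.4.4 1.1.1.1

# Restart CoreDNS so the change takes effect:
kubectl -n kube-system rollout restart deployment coredns
```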

---

## ClusterIssuer configuration

Manifest: `/home/mtkadmin/homelab/cert-manager-issuers.yaml`

```yaml
---
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    email: homelab@befast.at
    privateKeySecretRef:
      name: letsencrypt-staging-account-key
    solvers:
      - http01:
          ingress:
            ingressClassName: traefik
---
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-production
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: homelab@befast.at
    privateKeySecretRef:
      name: letsencrypt-production-account-key
    solvers:
      - http01:
          ingress:
            ingressClassName: traefik
```

```bash
kubectl apply -f /home/mtkadmin/homelab/cert-manager-issuers.yaml
```

### Status

```bash
kubectl get clusterissuer -o wide
```

```
NAME                     READY   STATUS
letsencrypt-production   True    The ACME account was registered with the ACME server
letsencrypt-staging      True    The ACME account was registered with the ACME server
```

---

## Usage: certificate for an Ingress

### Test with staging first (avoids the production rate limits)

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
  namespace: default
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-staging
spec:
  ingressClassName: traefik
  tls:
    - hosts:
        - my-app.example.com
      secretName: my-app-tls-staging
  rules:
    - host: my-app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app-svc
                port:
                  number: 80
```

### Switch to production

Change the annotation to:
```yaml
cert-manager.io/cluster-issuer: letsencrypt-production
```

---

## Helm release info

```
NAME           NAMESPACE      REVISION   STATUS     CHART                  APP VERSION
cert-manager   cert-manager   1          deployed   cert-manager-v1.20.0   v1.20.0
```

---

## Next steps

- [ ] Test a first real Ingress with a TLS certificate (staging)
- [ ] Point a DNS record for a public domain at the cluster IP
- [ ] Switch to the production issuer after a successful staging test
- [ ] Monitor certificate renewal (`kubectl get certificate -A`)
docs/06-rancher.md · Normal file · 200 lines
@@ -0,0 +1,200 @@
# 06 — Rancher

**Date:** 2026-03-17
**Version:** Rancher v2.13.3 (Helm chart 2.13.3)
**Namespace:** cattle-system
**URL:** https://rancher.192.168.11.170.nip.io

---

## Overview

Rancher is a Kubernetes management platform with a web UI. It provides central
management of Kubernetes clusters, workloads, storage, networking and user
permissions. In this homelab, Rancher runs inside the k3s cluster itself
(single-cluster setup).

---

## Prerequisites

- cert-manager v1.20.0 installed and ready (`05-cert-manager.md`)
- Traefik active as the ingress controller (`04-traefik.md`)
- All 3 nodes `Ready`

---

## Installation

### 1. Add the Rancher Helm repository

```bash
helm repo add rancher-stable https://releases.rancher.com/server-charts/stable
helm repo update
```

Output:
```
"rancher-stable" has been added to your repositories
...Successfully got an update from the "rancher-stable" chart repository
```

### 2. Create the namespace

```bash
kubectl create namespace cattle-system
```

### 3. Install Rancher via Helm

```bash
helm install rancher rancher-stable/rancher \
  --namespace cattle-system \
  --version 2.13.3 \
  --set hostname=rancher.192.168.11.170.nip.io \
  --set ingress.tls.source=rancher \
  --set replicas=2 \
  --wait \
  --timeout 10m
```

**Important — TLS source:** `ingress.tls.source=rancher` uses Rancher's own
self-signed CA (not Let's Encrypt). See the "TLS decision" section below.

---

## TLS decision: Rancher self-signed CA instead of Let's Encrypt

### First attempt: Let's Encrypt

Initially `ingress.tls.source=letsEncrypt` was tried. This failed:

```
Error: acme: authorization error for rancher.192.168.11.170.nip.io:
400 urn:ietf:params:acme:error:dns:
no valid A records found for rancher.192.168.11.170.nip.io
```

**Reason:** for the HTTP-01 challenge, Let's Encrypt needs public internet
access to the server. The IP `192.168.11.170` is a private RFC1918 address,
so Let's Encrypt's validation servers cannot reach this host.

### Solution: Rancher self-signed CA

```bash
helm upgrade rancher rancher-stable/rancher \
  --namespace cattle-system \
  --version 2.13.3 \
  --set hostname=rancher.192.168.11.170.nip.io \
  --set ingress.tls.source=rancher \
  --set replicas=2
```

Rancher creates its own CA (`tls-rancher` secret) and uses it to issue the
ingress certificate via cert-manager (CA issuer `rancher` in `cattle-system`).
The browser shows a certificate warning, which is acceptable for a private homelab.
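To inspect the certificate that this CA issued for the ingress, a quick check using only standard kubectl and openssl:

```bash
# Show subject, issuer and validity of the served ingress certificate.
kubectl -n cattle-system get secret tls-rancher-ingress \
  -o jsonpath='{.data.tls\.crt}' | base64 -d | openssl x509 -noout -subject -issuer -dates
```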

### Certificate fix

After the upgrade, cert-manager still had the old, failed CertificateRequest
in backoff. The Certificate object was deleted manually, and Rancher's
controller recreated it immediately:

```bash
kubectl delete certificate tls-rancher-ingress -n cattle-system
# → READY: True after ~20 seconds
```

---

## Pod status

```bash
kubectl get pods -n cattle-system
```

```
NAME                                        READY   STATUS      RESTARTS   AGE
rancher-6f98b4d565-94mff                    1/1     Running     0          ...   rnk-wrk02
rancher-6f98b4d565-q7jc4                    1/1     Running     0          ...   rnk-cp01
rancher-webhook-5dcf69b995-4q9sg            1/1     Running     0          ...   rnk-wrk02
system-upgrade-controller-65d9b4b8b-7wvrl   1/1     Running     0          ...   rnk-cp01
helm-operation-*                            0/2     Completed   0          ...   (completed jobs)
```

- **rancher**: 2 replicas, spread across rnk-wrk02 and rnk-cp01
- **rancher-webhook**: validation webhook for Rancher CRDs
- **system-upgrade-controller**: manages k3s node upgrades

---

## Ingress & TLS

```bash
kubectl get ingress -n cattle-system
kubectl get certificate -n cattle-system
```

```
NAME      CLASS     HOSTS                           ADDRESS                                         PORTS
rancher   traefik   rancher.192.168.11.170.nip.io   192.168.11.170,192.168.11.171,192.168.11.172   80, 443

NAME                  READY   SECRET
tls-rancher-ingress   True    tls-rancher-ingress
```

---

## Helm release info

```
NAME      NAMESPACE       REVISION   STATUS     CHART            APP VERSION
rancher   cattle-system   2          deployed   rancher-2.13.3   v2.13.3
```

Revision 2: upgrade from the `letsEncrypt` to the `rancher` TLS source.

---

## First login

### URL

```
https://rancher.192.168.11.170.nip.io
```

> The browser shows a certificate warning (self-signed CA) → "Proceed anyway"

### Retrieve the bootstrap password

```bash
kubectl get secret --namespace cattle-system bootstrap-secret \
  -o go-template='{{.data.bootstrapPassword|base64decode}}{{ "\n" }}'
```

**Bootstrap password (one-time):** `vks6s469l7h5dtm25hh8vz6hzcpmkjx6jc87qdshm7c7ggq9n84q9m`

> A new password is set after the first login; the bootstrap password is no
> longer valid afterwards.

### Direct setup link

```bash
echo https://rancher.192.168.11.170.nip.io/dashboard/?setup=$(kubectl get secret \
  --namespace cattle-system bootstrap-secret \
  -o go-template='{{.data.bootstrapPassword|base64decode}}')
```

---

## Next steps

- [ ] Perform the first login and set the admin password
- [ ] Register / import the cluster in Rancher
- [ ] Check Longhorn storage in the Rancher UI
- [ ] Configure users and roles
- [ ] Evaluate Rancher's own monitoring stack (optional)
docs/07-argocd.md · Normal file · 226 lines
@@ -0,0 +1,226 @@
# 07 — ArgoCD

**Date:** 2026-03-17
**Version:** ArgoCD v3.3.4 (Helm chart argo-cd-9.4.12)
**Namespace:** argocd
**URL:** https://argocd.192.168.11.170.nip.io

---

## Overview

ArgoCD is a declarative GitOps continuous delivery controller for Kubernetes.
It automatically synchronizes Kubernetes manifests from Git repositories into
the cluster: a change lands in Git, ArgoCD detects the drift, and the cluster
is synced automatically or manually.

---

## Prerequisites

- Traefik ingress controller active (`04-traefik.md`)
- cert-manager installed (`05-cert-manager.md`)
- All 3 nodes `Ready`

---

## Installation

### 1. Add the Argo Helm repository

```bash
helm repo add argo https://argoproj.github.io/argo-helm
helm repo update
```

Output:
```
"argo" has been added to your repositories
...Successfully got an update from the "argo" chart repository
```

### 2. Create the namespace

```bash
kubectl create namespace argocd
```

### 3. Install ArgoCD via Helm

```bash
helm install argocd argo/argo-cd \
  --namespace argocd \
  --version 9.4.12 \
  --set server.ingress.enabled=true \
  --set server.ingress.ingressClassName=traefik \
  --set "server.ingress.hostname=argocd.192.168.11.170.nip.io" \
  --set "server.ingress.tls=true" \
  --set configs.params."server\.insecure"=true \
  --set "server.ingress.annotations.traefik\.ingress\.kubernetes\.io/router\.entrypoints=websecure" \
  --set "server.ingress.annotations.traefik\.ingress\.kubernetes\.io/router\.tls=true" \
  --wait \
  --timeout 10m
```

**Important flags:**
- `configs.params."server\.insecure"=true` — the ArgoCD server runs without its
  own TLS, since TLS is terminated by Traefik (SSL termination at the ingress)
- `server.ingress.tls=true` — Traefik serves HTTPS
- the Traefik annotations force the HTTPS entrypoint

### Note: hostname parameter

The Helm chart uses `server.ingress.hostname` (not `hosts[0]`). With
`hosts[0]`, the default hostname `argocd.example.com` is not overridden;
`hostname` is the correct variant.

---

## Pod status

```bash
kubectl get pods -n argocd -o wide
```

```
NAME                                                READY   STATUS      NODE
argocd-application-controller-0                     1/1     Running     rnk-wrk02
argocd-applicationset-controller-6fdf946c79-28mzq   1/1     Running     rnk-wrk02
argocd-dex-server-855967dc45-q4l6n                  1/1     Running     rnk-wrk02
argocd-notifications-controller-75cd85cdc-6zmqg     1/1     Running     rnk-wrk02
argocd-redis-75b6f7c5cf-hjkhc                       1/1     Running     rnk-wrk02
argocd-redis-secret-init-xn472                      0/1     Completed   rnk-wrk02
argocd-repo-server-59d5dccbf7-d5s99                 1/1     Running     rnk-wrk01
argocd-server-56f7f5d5d9-ll626                      1/1     Running     rnk-wrk01
```

All 7 pods run stably. `argocd-redis-secret-init` is a one-off init job
(status `Completed` is correct).

**Components:**

| Pod | Function |
|---|---|
| `argocd-server` | web UI + API server |
| `argocd-application-controller` | compares cluster state against Git state |
| `argocd-repo-server` | Git repository access, manifest rendering |
| `argocd-applicationset-controller` | automated application sets |
| `argocd-dex-server` | SSO / OIDC identity provider |
| `argocd-redis` | cache for application state |
| `argocd-notifications-controller` | notifications (Slack, email, etc.) |

---

## Ingress

```bash
kubectl get ingress -n argocd
```

```
NAME            CLASS     HOSTS                          ADDRESS                                         PORTS
argocd-server   traefik   argocd.192.168.11.170.nip.io   192.168.11.170,192.168.11.171,192.168.11.172   80, 443
```

TLS is terminated by Traefik. ArgoCD itself runs in `insecure` mode
(plain HTTP internally), which is the recommended setup when TLS is
terminated at the ingress.

---

## First login

### URL

```
https://argocd.192.168.11.170.nip.io
```

> The browser shows a certificate warning (Traefik self-signed) → "Proceed anyway"

### Credentials

| Field | Value |
|---|---|
| Username | `admin` |
| Password (initial) | `T8T1rLY0ac2MqWiC` |

### Retrieve the admin password (any time)

```bash
kubectl -n argocd get secret argocd-initial-admin-secret \
  -o jsonpath="{.data.password}" | base64 -d && echo
```

> **Security note:** after the first login and a password change, delete the
> initial secret:
> ```bash
> kubectl delete secret argocd-initial-admin-secret -n argocd
> ```
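The password change itself can be done in the web UI or via the CLI that is installed further below; a short sketch using the CLI (it prompts for the current and the new password):

```bash
# Change the admin password interactively.
argocd login argocd.192.168.11.170.nip.io --username admin --insecure
argocd account update-password
```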

---

## Helm release info

```
NAME     NAMESPACE   REVISION   STATUS     CHART            APP VERSION
argocd   argocd      2          deployed   argo-cd-9.4.12   v3.3.4
```

Revision 2: hostname fix from `argocd.example.com` to `argocd.192.168.11.170.nip.io`

---

## ArgoCD CLI (optional)

```bash
# install the CLI
curl -sSL -o /usr/local/bin/argocd \
  https://github.com/argoproj/argo-cd/releases/latest/download/argocd-linux-amd64
chmod +x /usr/local/bin/argocd

# login
argocd login argocd.192.168.11.170.nip.io \
  --username admin \
  --password T8T1rLY0ac2MqWiC \
  --insecure
```

---

## Deploy a first Application (example)

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/mein-user/mein-repo.git
    targetRevision: HEAD
    path: k8s/
  destination:
    server: https://kubernetes.default.svc
    namespace: default
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
```

```bash
kubectl apply -f my-app.yaml
```

---

## Next steps

- [ ] Register a first Git repository in ArgoCD
- [ ] Change the admin password and delete the initial secret
- [ ] Put the homelab manifests into a Git repository
- [ ] Create an ArgoCD app for the Longhorn/cert-manager configuration
- [ ] Configure SSO via Dex (optional)
- [ ] Set up notifications for sync status (optional)
272
docs/08-omada-mcp.md
Normal file
272
docs/08-omada-mcp.md
Normal file
@@ -0,0 +1,272 @@
|
||||
# 08 · TPLink Omada MCP Server
|
||||
|
||||
**Erstellt:** 2026-03-18
|
||||
**Aktualisiert:** 2026-03-18 — Credentials rotiert (CLIENT_ID/SECRET), Service auf NodePort 31777 umgestellt
|
||||
**Namespace:** `omada-mcp`
|
||||
**Image:** `jmtvms/tplink-omada-mcp:latest`
|
||||
**Status:** Running · Pod `omada-mcp-dfdbfbcf8-x6t5k` · NodePort `31777` → Container `3000`
|
||||
|
||||
---
|
||||
|
||||
## Übersicht
|
||||
|
||||
Der [tplink-omada-mcp](https://github.com/jmtvms/tplink-omada-mcp) Server stellt eine MCP-Schnittstelle (Model Context Protocol) für den TPLink Omada Controller bereit. Claude und andere KI-Assistenten können darüber das Netzwerk verwalten (Clients, SSIDs, VLANs, Geräte usw.).
|
||||
|
||||
Der Server läuft im HTTP-Modus auf Port **3000** (`MCP_SERVER_USE_HTTP=true`).
|
||||
|
||||
---
|
||||
|
||||
## Dateien
|
||||
|
||||
```
|
||||
homelab/k8s/omada-mcp/
|
||||
├── kustomization.yaml # Kustomize-Einstiegspunkt
|
||||
├── namespace.yaml # Namespace omada-mcp
|
||||
├── secret.yaml # Credentials (Platzhalter – vor Anwenden befüllen!)
|
||||
├── deployment.yaml # Deployment (1 Replica)
|
||||
└── service.yaml # NodePort Service Port 3000 → NodePort 31777
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Credentials (Secret)
|
||||
|
||||
Das Secret `omada-mcp-credentials` enthält folgende Schlüssel:
|
||||
|
||||
| Key | Beschreibung |
|
||||
|-----|-------------|
|
||||
| `OMADA_BASE_URL` | URL des Omada Controllers, z. B. `https://192.168.11.29` |
|
||||
| `OMADA_CLIENT_ID` | OAuth Client-ID aus dem Omada Controller |
|
||||
| `OMADA_CLIENT_SECRET` | OAuth Client-Secret |
|
||||
| `OMADA_OMADAC_ID` | Omada-Controller-ID (zu finden in den Controller-Einstellungen) |
|
||||
| `OMADA_STRICT_SSL` | `false` – deaktiviert SSL-Verifikation (Self-signed Certs) |
|
||||
| `MCP_SERVER_USE_HTTP` | `true` – aktiviert HTTP-Modus |
|
||||
| `MCP_HTTP_BIND_ADDR` | `0.0.0.0` – lauscht auf allen Interfaces |
|
||||
|
||||
### Secret anlegen (empfohlen – kein base64 nötig)
|
||||
|
||||
```bash
|
||||
kubectl create secret generic omada-mcp-credentials \
|
||||
--namespace omada-mcp \
|
||||
--from-literal=OMADA_BASE_URL='https://192.168.11.29' \
|
||||
--from-literal=OMADA_CLIENT_ID='dein-client-id' \
|
||||
--from-literal=OMADA_CLIENT_SECRET='dein-client-secret' \
|
||||
--from-literal=OMADA_OMADAC_ID='dein-omadac-id' \
|
||||
--from-literal=OMADA_STRICT_SSL='false' \
|
||||
--from-literal=MCP_SERVER_USE_HTTP='true' \
|
||||
--from-literal=MCP_HTTP_BIND_ADDR='0.0.0.0'
|
||||
```
|
||||
|
||||
Danach `secret.yaml` **nicht** anwenden (das Secret existiert bereits).
|
||||
|
||||
### Alternativ: secret.yaml befüllen
|
||||
|
||||
```bash
|
||||
# Werte base64-kodieren
|
||||
echo -n 'https://omada.example.com' | base64
|
||||
echo -n 'dein-client-id' | base64
|
||||
echo -n 'dein-client-secret' | base64
|
||||
echo -n 'dein-omadac-id' | base64
|
||||
```
|
||||
|
||||
Die Ausgaben in `secret.yaml` bei den `REPLACE_WITH_*`-Platzhaltern eintragen.
|
||||
|
||||
---
|
||||
|
||||
## Deployment
|
||||
|
||||
### Voraussetzungen
|
||||
|
||||
- Namespace existiert (wird durch `namespace.yaml` erstellt)
|
||||
- Secret ist befüllt (siehe oben)
|
||||
|
||||
### Anwenden mit Kustomize
|
||||
|
||||
```bash
|
||||
# Dry-run – Manifeste prüfen
|
||||
kubectl apply -k homelab/k8s/omada-mcp/ --dry-run=client
|
||||
|
||||
# Anwenden
|
||||
kubectl apply -k homelab/k8s/omada-mcp/
|
||||
```
|
||||
|
||||
### Einzelne Manifeste anwenden
|
||||
|
||||
```bash
|
||||
kubectl apply -f homelab/k8s/omada-mcp/namespace.yaml
|
||||
kubectl apply -f homelab/k8s/omada-mcp/secret.yaml # nur wenn nicht per kubectl create
|
||||
kubectl apply -f homelab/k8s/omada-mcp/deployment.yaml
|
||||
kubectl apply -f homelab/k8s/omada-mcp/service.yaml
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Umgebungsvariablen im Container
|
||||
|
||||
| Variable | Wert | Quelle |
|
||||
|----------|------|--------|
|
||||
| `OMADA_BASE_URL` | `https://192.168.11.29` | Secret `omada-mcp-credentials` |
|
||||
| `OMADA_CLIENT_ID` | aus Secret | `omada-mcp-credentials` |
|
||||
| `OMADA_CLIENT_SECRET` | aus Secret | `omada-mcp-credentials` |
|
||||
| `OMADA_OMADAC_ID` | aus Secret | `omada-mcp-credentials` |
|
||||
| `OMADA_STRICT_SSL` | `false` | `omada-mcp-credentials` |
|
||||
| `MCP_SERVER_USE_HTTP` | `true` | `omada-mcp-credentials` |
|
||||
| `MCP_HTTP_BIND_ADDR` | `0.0.0.0` | `omada-mcp-credentials` |
|
||||
|
||||
---
|
||||
|
||||
## Verifikation
|
||||
|
||||
```bash
|
||||
# Pod-Status prüfen
|
||||
kubectl get pods -n omada-mcp
|
||||
|
||||
# Logs anzeigen
|
||||
kubectl logs -n omada-mcp deployment/omada-mcp
|
||||
|
||||
# Service prüfen
|
||||
kubectl get svc -n omada-mcp
|
||||
|
||||
# Direkter Test via Port-Forward
|
||||
kubectl port-forward -n omada-mcp svc/omada-mcp 3000:3000
|
||||
curl http://localhost:3000/
|
||||
```
|
||||
|
||||
### Erwartete Ausgabe (Pod läuft)
|
||||
|
||||
```
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
omada-mcp-xxxxxxxxx-xxxxx 1/1 Running 0 1m
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## MCP-Endpunkt in Claude Code einbinden
|
||||
|
||||
Sobald der Pod läuft, kann der MCP-Server per Port-Forward oder Ingress angebunden werden.
|
||||
|
||||
### Option A: NodePort (dauerhaft, direkt erreichbar)
|
||||
|
||||
Der Service ist auf NodePort **31777** auf allen Nodes erreichbar:
|
||||
|
||||
| Node | URL |
|
||||
|------|-----|
|
||||
| rnk-cp01 | `http://192.168.11.170:31777` |
|
||||
| rnk-wrk01 | `http://192.168.11.171:31777` |
|
||||
| rnk-wrk02 | `http://192.168.11.172:31777` |
|
||||
|
||||
Claude Code `.mcp.json`:
|
||||
|
||||
```json
|
||||
{
|
||||
"mcpServers": {
|
||||
"omada": {
|
||||
"type": "http",
|
||||
"url": "http://192.168.11.171:31777/mcp"
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
### Option B: Port-Forward (lokal/temporär)
|
||||
|
||||
```bash
|
||||
kubectl port-forward -n omada-mcp svc/omada-mcp 3000:3000 &
|
||||
```
|
||||
|
||||
### Option B: Ingress (dauerhaft, mit Traefik)
|
||||
|
||||
Ingress-Manifest erstellen (optional):
|
||||
|
||||
```yaml
|
||||
apiVersion: networking.k8s.io/v1
|
||||
kind: Ingress
|
||||
metadata:
|
||||
name: omada-mcp
|
||||
namespace: omada-mcp
|
||||
annotations:
|
||||
traefik.ingress.kubernetes.io/router.entrypoints: websecure
|
||||
cert-manager.io/cluster-issuer: letsencrypt-production
|
||||
spec:
|
||||
ingressClassName: traefik
|
||||
tls:
|
||||
- hosts:
|
||||
- omada-mcp.homelab.local
|
||||
secretName: omada-mcp-tls
|
||||
rules:
|
||||
- host: omada-mcp.homelab.local
|
||||
http:
|
||||
paths:
|
||||
- path: /
|
||||
pathType: Prefix
|
||||
backend:
|
||||
service:
|
||||
name: omada-mcp
|
||||
port:
|
||||
number: 3000
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Ressourcen
|
||||
|
||||
- CPU: 50m Request / 200m Limit
|
||||
- Memory: 64Mi Request / 256Mi Limit
|
||||
|
||||
---
|
||||
|
||||
## Troubleshooting
|
||||
|
||||
```bash
|
||||
# Pod-Events anzeigen
|
||||
kubectl describe pod -n omada-mcp -l app.kubernetes.io/name=omada-mcp
|
||||
|
||||
# Secret-Inhalte prüfen (base64-dekodiert)
|
||||
kubectl get secret omada-mcp-credentials -n omada-mcp -o jsonpath='{.data.OMADA_BASE_URL}' | base64 -d
|
||||
|
||||
# Pod neu starten
|
||||
kubectl rollout restart deployment/omada-mcp -n omada-mcp
|
||||
```
|
||||
|
||||
### Häufige Fehler
|
||||
|
||||
| Fehler | Ursache | Lösung |
|
||||
|--------|---------|--------|
|
||||
| `ImagePullBackOff` | Image nicht erreichbar | Internet-Zugang der Nodes prüfen |
|
||||
| `CrashLoopBackOff` | Falsche Credentials | Logs prüfen, Secret aktualisieren |
|
||||
| `Pending` | Ressourcen fehlen | `kubectl describe pod` → Events |
|
||||
| SSL-Fehler | Self-signed Cert | `OMADA_STRICT_SSL=false` bestätigen |
|
||||
|
||||
---
|
||||
|
||||
## Deployment-Geschichte
|
||||
|
||||
| Datum | Aktion | Ergebnis |
|
||||
|-------|--------|---------|
|
||||
| 2026-03-18 | Namespace, Secret, Deployment, Service erstellt | Pod in CrashLoopBackOff wegen falscher Probe-Pfade |
|
||||
| 2026-03-18 | Liveness/Readiness Probe auf `/healthz` geändert | Pod `1/1 Running`, `/healthz` → `{"status":"ok"}` |
|
||||
| 2026-03-18 | MCP Initialize-Request getestet | Server antwortet korrekt, Version 0.1.0 |
|
||||
| 2026-03-18 | Service von ClusterIP auf NodePort 31777 umgestellt | Extern erreichbar auf allen Nodes |
|
||||
| 2026-03-18 | Credentials rotiert (CLIENT_ID + CLIENT_SECRET) | Pod-Rollout erfolgreich, `1/1 Running` |
|
||||
|
||||
### Bekannte Besonderheit: Probe-Pfad
|
||||
|
||||
Der MCP-Server gibt auf `GET /` HTTP 404 zurück (MCP-Protokoll beantwortet nur POST mit korrekten Headers).
|
||||
Der Health-Endpoint ist `/healthz` → `{"status":"ok"}`.
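
Schneller Check gegen den Health-Endpoint, hier über den NodePort von rnk-wrk01:

```bash
# erwartet: {"status":"ok"}
curl -s http://192.168.11.171:31777/healthz
```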
|
||||
|
||||
### MCP-Endpunkt aufrufen
|
||||
|
||||
```bash
|
||||
curl -s -X POST http://192.168.11.171:31777/mcp \
|
||||
-H 'Content-Type: application/json' \
|
||||
-H 'Accept: application/json, text/event-stream' \
|
||||
-d '{"jsonrpc":"2.0","id":1,"method":"initialize","params":{"protocolVersion":"2024-11-05","capabilities":{},"clientInfo":{"name":"test","version":"1.0"}}}'
|
||||
```
|
||||
|
||||
## Nächste Schritte
|
||||
|
||||
- [x] Credentials im Secret befüllen und anwenden
|
||||
- [x] Pod-Status und Logs nach Deployment prüfen
|
||||
- [ ] MCP-Server in Claude Code `.mcp.json` eintragen
|
||||
- [ ] Optional: Ingress für dauerhaften Zugriff erstellen
|
||||
- [ ] Optional: Horizontal Pod Autoscaler bei Bedarf hinzufügen
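
Falls der HPA später gebraucht wird, genügt als Skizze ein `kubectl autoscale` (setzt den metrics-server voraus, der in k3s standardmäßig enthalten ist; Grenzwerte frei gewählt):

```bash
# Skaliert omada-mcp zwischen 1 und 3 Replicas bei >80% CPU-Auslastung
kubectl autoscale deployment omada-mcp -n omada-mcp --min=1 --max=3 --cpu-percent=80
```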
|
||||
361
docs/09-metallb-pihole.md
Normal file
361
docs/09-metallb-pihole.md
Normal file
@@ -0,0 +1,361 @@
|
||||
# 09 · MetalLB + Pi-hole Installation
|
||||
|
||||
**Datum:** 2026-03-18
|
||||
|
||||
---
|
||||
|
||||
## Was wurde installiert
|
||||
|
||||
- **MetalLB** v0.14.x — Layer-2 Load Balancer für den k3s-Cluster
|
||||
- **Pi-hole** (pihole/pihole:latest) — DNS-Adblocker mit Web-UI
|
||||
|
||||
---
|
||||
|
||||
## Vorbereitungen
|
||||
|
||||
### Klipper (k3s ServiceLB) deaktiviert
|
||||
|
||||
k3s hat einen eingebauten Load-Balancer (Klipper/ServiceLB), der mit MetalLB kollidiert.
|
||||
Deaktiviert durch Ergänzung in `/etc/systemd/system/k3s.service`:
|
||||
|
||||
```
|
||||
'--disable=servicelb' \
|
||||
```
|
||||
|
||||
Dann: `systemctl daemon-reload && systemctl restart k3s`
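
Zur Kontrolle, dass Klipper wirklich entfernt wurde (Annahme: die Pods heißen wie üblich `svclb-*`):

```bash
# Nach dem Neustart sollten keine svclb-Pods mehr existieren
kubectl get pods -A | grep svclb || echo "keine svclb-Pods mehr vorhanden"
```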
|
||||
|
||||
### DNS-Fix auf rnk-wrk02
|
||||
|
||||
`systemd-resolved` Stub-Resolver (127.0.0.53) war defekt → containerd konnte keine Images pullen.
|
||||
|
||||
Fix: `/etc/resolv.conf` Symlink auf direkte DNS-Auflösung umgestellt:
|
||||
```bash
|
||||
ln -sf /run/systemd/resolve/resolv.conf /etc/resolv.conf
|
||||
```
|
||||
Nameserver ist nun direkt `192.168.11.1`.
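
Kurze Verifikation der Auflösung direkt auf dem Node (Test-Domain frei gewählt):

```bash
cat /etc/resolv.conf               # erwartet: nameserver 192.168.11.1
getent hosts registry-1.docker.io  # muss wieder eine IP liefern
```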
|
||||
|
||||
---
|
||||
|
||||
## Ausgeführte Befehle
|
||||
|
||||
### MetalLB
|
||||
|
||||
```bash
|
||||
helm repo add metallb https://metallb.github.io/metallb
|
||||
helm repo update
|
||||
helm upgrade --install metallb metallb/metallb \
|
||||
--namespace metallb-system \
|
||||
--create-namespace \
|
||||
--wait
|
||||
kubectl apply -f ~/homelab/k8s/metallb/metallb-config.yaml
|
||||
```
|
||||
|
||||
### Pi-hole
|
||||
|
||||
```bash
|
||||
kubectl create namespace pihole
|
||||
kubectl apply -f ~/homelab/k8s/pihole/secret.yaml
|
||||
kubectl apply -k ~/homelab/k8s/pihole/
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## IP-Adressen
|
||||
|
||||
| Service | IP | Protokoll |
|
||||
|---|---|---|
|
||||
| Traefik (Ingress) | 192.168.11.180 | TCP 80/443 |
|
||||
| Pi-hole DNS | 192.168.11.181 | TCP+UDP 53 |
|
||||
|
||||
MetalLB Pool: `192.168.11.180 – 192.168.11.199`
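
Welche Pool-IPs bereits vergeben sind, zeigt ein Blick auf alle LoadBalancer-Services im Cluster:

```bash
# Externe IPs aller LoadBalancer-Services auflisten
kubectl get svc -A | grep LoadBalancer
```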
|
||||
|
||||
---
|
||||
|
||||
## Ergebnis
|
||||
|
||||
```
|
||||
NAME READY STATUS RESTARTS
|
||||
pod/pihole-f7d664fd-v65wn 1/1 Running 0
|
||||
|
||||
NAME TYPE CLUSTER-IP EXTERNAL-IP
|
||||
pihole-dns-tcp LoadBalancer 10.43.200.43 192.168.11.181
|
||||
pihole-dns-udp LoadBalancer 10.43.65.201 192.168.11.181
|
||||
pihole-web ClusterIP 10.43.25.107 <none>
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Zugang
|
||||
|
||||
- **Web-UI:** http://pihole.192.168.11.181.nip.io/admin
|
||||
- **DNS-Server:** `192.168.11.181` (Port 53 TCP+UDP)
|
||||
- **Passwort:** in Secret `pihole/pihole-secret`
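
Das Web-UI-Passwort lässt sich bei Bedarf direkt aus dem Secret auslesen (Key `password`, wie im Deployment referenziert):

```bash
# WEBPASSWORD base64-dekodiert ausgeben
kubectl get secret pihole-secret -n pihole -o jsonpath='{.data.password}' | base64 -d; echo
```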
|
||||
|
||||
---
|
||||
|
||||
## Dateien
|
||||
|
||||
```
|
||||
~/homelab/k8s/metallb/
|
||||
metallb-config.yaml # IPAddressPool + L2Advertisement
|
||||
|
||||
~/homelab/k8s/pihole/
|
||||
kustomization.yaml
|
||||
namespace.yaml
|
||||
secret.yaml # WEBPASSWORD (nicht ins Git!)
|
||||
deployment.yaml
|
||||
services.yaml # DNS TCP/UDP LoadBalancer + Web ClusterIP
|
||||
ingress.yaml # Traefik Ingress für Web-UI
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Nächste Schritte
|
||||
|
||||
- Router/DHCP auf DNS `192.168.11.181` umstellen
|
||||
- Pi-hole Blocklisten konfigurieren
|
||||
- ~~Ggf. Persistent Volume für `/etc/pihole` hinzufügen~~ → erledigt (siehe unten)
|
||||
|
||||
---
|
||||
|
||||
## Troubleshooting 2026-03-19
|
||||
|
||||
### Problem 1: Web-UI nicht erreichbar
|
||||
|
||||
**Symptom:** Browser kann `http://pihole.192.168.11.180.nip.io/admin/` nicht laden.
|
||||
|
||||
**Diagnose:**
|
||||
```bash
|
||||
kubectl get pods -n pihole # Pod läuft (Running)
|
||||
kubectl logs -n pihole deployment/pihole --tail=50 # FTL startet normal
|
||||
```
|
||||
|
||||
**Ursache:** Pi-hole v6 verwendet keinen lighttpd mehr — der Webserver ist direkt in FTL integriert (Port 80/443). `service lighttpd status` schlägt in v6 fehl.
|
||||
|
||||
**Ergebnis:** Web-UI war tatsächlich erreichbar, Ingress funktionierte korrekt (HTTP 302 → `/admin/login`).
|
||||
|
||||
---
|
||||
|
||||
### Problem 2: DNS antwortet nicht (192.168.11.181:53)
|
||||
|
||||
**Symptom:** `dig @192.168.11.181 google.com` läuft in Timeout. `nc -vzu 192.168.11.181 53` zeigt Port als offen.
|
||||
|
||||
**Diagnose:**
|
||||
```bash
|
||||
kubectl exec -n pihole <POD> -- grep -v "^#" /etc/pihole/pihole.toml | grep listeningMode
|
||||
# → listeningMode = "LOCAL"
|
||||
```
|
||||
|
||||
**Ursache:** Pi-hole v6 FTL startet standardmäßig mit `dns.listeningMode = "LOCAL"`. In Kubernetes kommt LoadBalancer-Traffic (192.168.11.0/24) nicht direkt vom Pod-Interface → dnsmasq verwirft alle Anfragen (logged: `ignoring query from non-local network`).
|
||||
|
||||
**Fix:** `DNSMASQ_LISTENING=all` in `deployment.yaml` hinzugefügt:
|
||||
```yaml
|
||||
env:
|
||||
- name: DNSMASQ_LISTENING
|
||||
value: "all"
|
||||
```
|
||||
|
||||
```bash
|
||||
kubectl patch deployment pihole -n pihole --type=json \
|
||||
-p='[{"op":"add","path":"/spec/template/spec/containers/0/env/-","value":{"name":"DNSMASQ_LISTENING","value":"all"}}]'
|
||||
kubectl rollout status deployment/pihole -n pihole
|
||||
```
|
||||
|
||||
**Verifikation:**
|
||||
```bash
|
||||
dig @192.168.11.181 google.com +short
|
||||
# → 142.250.201.78 ✓
|
||||
```
|
||||
|
||||
**Persistenz:** Fix ist in `~/homelab/k8s/pihole/deployment.yaml` und in etcd gespeichert — überlebt Pod-Neustarts.
|
||||
|
||||
---
|
||||
|
||||
### Problem 3: HTTPS-Zugriff auf Web-UI schlägt fehl (2026-03-19)
|
||||
|
||||
**Symptom:** `https://pihole.192.168.11.180.nip.io/admin` nicht erreichbar — Ingress war nur auf HTTP (Port 80) konfiguriert.
|
||||
|
||||
**Ursache:** `ingress.yaml` hatte `traefik.ingress.kubernetes.io/router.entrypoints: web` — nur HTTP-Entrypoint.
|
||||
|
||||
**Fix:** Ingress auf `websecure` umgestellt + TLS aktiviert (Traefik Default-Zertifikat):
|
||||
```yaml
|
||||
annotations:
|
||||
traefik.ingress.kubernetes.io/router.entrypoints: websecure
|
||||
traefik.ingress.kubernetes.io/router.tls: "true"
|
||||
spec:
|
||||
tls:
|
||||
- hosts:
|
||||
- pihole.192.168.11.180.nip.io
|
||||
```
|
||||
|
||||
```bash
|
||||
kubectl apply -f ~/homelab/k8s/pihole/ingress.yaml
|
||||
```
|
||||
|
||||
**Hinweis:** Browser zeigt Zertifikatswarnung (self-signed) — Ausnahme hinzufügen.
|
||||
|
||||
**Zugang:** `https://pihole.192.168.11.180.nip.io/admin`
|
||||
|
||||
---
|
||||
|
||||
### Problem 4: Claude Code ConnectionRefused wenn Pi-hole als DNS-Proxy gesetzt (2026-03-19)
|
||||
|
||||
**Symptom:** `API Error: Unable to connect to API (ConnectionRefused)` in Claude Code, wenn `192.168.11.181` als DNS-Proxy in Omada gesetzt ist.
|
||||
|
||||
**Ursache:** Pi-hole gibt für `api.anthropic.com` auch eine IPv6-Adresse (`2607:6bc0::10`) zurück. Der Node hat kein funktionierendes globales IPv6 → Verbindungsversuch auf IPv6 schlägt mit ConnectionRefused fehl.
|
||||
|
||||
**Fix:** IPv4 in `/etc/gai.conf` bevorzugen:
|
||||
```bash
|
||||
echo "precedence ::ffff:0:0/96 100" >> /etc/gai.conf
|
||||
```
|
||||
|
||||
**Verifikation:**
|
||||
```bash
|
||||
curl --connect-timeout 5 https://api.anthropic.com
|
||||
# → Anthropic API erreichbar ✓
|
||||
```
|
||||
|
||||
**Persistenz:** `/etc/gai.conf` ist persistent auf `rnk-cp01`.
|
||||
|
||||
---
|
||||
|
||||
## Migration von altem Pi-hole (2026-03-19)
|
||||
|
||||
### Ausgangslage
|
||||
|
||||
Altes Pi-hole lief als Docker-Container (`Pihole-DoT-DoH`, Image `devzwf/pihole-dot-doh:latest`) auf Unraid (`192.168.11.124`), erreichbar unter `192.168.11.123`.
|
||||
|
||||
### Vorgehen: Teleporter-Backup via HTTP-API
|
||||
|
||||
Pi-hole v6 änderte die CLI-Syntax — `pihole -a -t` funktioniert nicht mehr. Shell-Umleitung (`>`) korrumpiert Binärdaten (null bytes). Lösung: Backup direkt über die REST-API laden.
|
||||
|
||||
```bash
|
||||
# 1. Authentifizieren und Backup als gültige ZIP herunterladen
|
||||
SID=$(curl -s -X POST http://192.168.11.123/api/auth \
|
||||
-H "Content-Type: application/json" \
|
||||
-d '{"password":"<altes-passwort>"}' | grep -o '"sid":"[^"]*"' | cut -d'"' -f4)
|
||||
|
||||
curl -s -X GET http://192.168.11.123/api/teleporter \
|
||||
-H "sid: $SID" \
|
||||
-o /tmp/pihole-backup.zip
|
||||
|
||||
# 2. In neuen Pod kopieren und importieren
|
||||
POD=$(kubectl get pods -n pihole -l app=pihole -o jsonpath='{.items[0].metadata.name}')
|
||||
kubectl cp /tmp/pihole-backup.zip pihole/$POD:/tmp/pihole-backup.zip
|
||||
|
||||
SID=$(kubectl exec -n pihole $POD -- curl -s -X POST \
|
||||
http://localhost/api/auth \
|
||||
-H "Content-Type: application/json" \
|
||||
-d '{"password":"<neues-passwort>"}' | grep -o '"sid":"[^"]*"' | cut -d'"' -f4)
|
||||
|
||||
kubectl exec -n pihole $POD -- curl -s -X POST \
|
||||
http://localhost/api/teleporter \
|
||||
-H "sid: $SID" \
|
||||
-F "file=@/tmp/pihole-backup.zip"
|
||||
```
|
||||
|
||||
**Importierte Objekte:** pihole.toml, gravity.db (adlists, domainlists, clients, groups), dhcp.leases, hosts
|
||||
|
||||
### Ingress-URL korrigiert
|
||||
|
||||
`nip.io`-Hostname enthält die eingebettete IP — `pihole.192.168.11.181.nip.io` löst auf die DNS-IP (Port 53) auf, nicht auf Traefik.
|
||||
|
||||
**Fix:** `ingress.yaml` Host auf Traefik-IP geändert:
|
||||
```
|
||||
pihole.192.168.11.180.nip.io → Traefik (192.168.11.180) → pihole-web Service
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## DNS-Loop-Fix: Pod nutzt direkt 1.1.1.1 (2026-03-19)
|
||||
|
||||
### Problem
|
||||
|
||||
Pi-hole Pod nutzte CoreDNS (10.43.0.10) als Upstream → DNS-Loop, da CoreDNS an die DNS-Konfiguration der Nodes weiterleitet und diese Anfragen (über den Router) wieder bei Pi-hole landen können.
|
||||
|
||||
### Fix: dnsPolicy auf None
|
||||
|
||||
```bash
|
||||
kubectl patch deployment pihole -n pihole --patch '
|
||||
spec:
|
||||
template:
|
||||
spec:
|
||||
dnsPolicy: "None"
|
||||
dnsConfig:
|
||||
nameservers:
|
||||
- 1.1.1.1
|
||||
- 8.8.8.8
|
||||
searches: []
|
||||
'
|
||||
```
|
||||
|
||||
**Hinweis:** Bei RWO-Volumes (Longhorn) kann der neue Pod beim Rollout auf einem anderen Node landen → `Multi-Attach error`. Lösung: alten Pod manuell löschen, danach ggf. stuck Pod ebenfalls löschen damit Scheduler neu plant.
|
||||
|
||||
```bash
|
||||
kubectl delete pod <alter-pod> -n pihole
|
||||
kubectl delete pod <stuck-pod> -n pihole # falls ContainerCreating auf falschem Node
|
||||
```
|
||||
|
||||
**Verifikation:**
|
||||
```bash
|
||||
kubectl exec -n pihole <POD> -- cat /etc/resolv.conf
|
||||
# nameserver 1.1.1.1
|
||||
# nameserver 8.8.8.8
|
||||
```
|
||||
|
||||
**Persistenz:** In `deployment.yaml` gespeichert + in etcd.
|
||||
|
||||
---
|
||||
|
||||
## Fix: Echte Client-IPs in Pi-hole (2026-03-19)
|
||||
|
||||
### Problem
|
||||
|
||||
Alle DNS-Queries in Pi-hole zeigten als Client `10.42.0.0` (Kubernetes Pod-Netzwerk) statt der echten Geräte-IPs (192.168.11.x).
|
||||
|
||||
### Ursache
|
||||
|
||||
kube-proxy führt SNAT (Source NAT) durch wenn Traffic über einen LoadBalancer-Service läuft — die Original-Source-IP wird durch die Pod-Netzwerk-IP ersetzt.
|
||||
|
||||
Zusätzlich: Der **Omada DNS-Proxy** leitet alle Client-Anfragen über sich selbst weiter → Pi-hole sieht nur die Router-IP als Client. Auch wenn der DNS-Proxy aktiv bleibt, muss kube-proxy die echte Router-IP durchreichen.
|
||||
|
||||
### Fix: externalTrafficPolicy: Local
|
||||
|
||||
```bash
|
||||
kubectl patch svc pihole-dns-tcp -n pihole -p '{"spec":{"externalTrafficPolicy":"Local"}}'
|
||||
kubectl patch svc pihole-dns-udp -n pihole -p '{"spec":{"externalTrafficPolicy":"Local"}}'
|
||||
```
|
||||
|
||||
In `~/homelab/k8s/pihole/services.yaml` für beide LoadBalancer-Services ergänzt:
|
||||
|
||||
```yaml
|
||||
spec:
|
||||
type: LoadBalancer
|
||||
externalTrafficPolicy: Local
|
||||
```
|
||||
|
||||
**Verifikation:**
|
||||
```bash
|
||||
# Queries aus Unraid (192.168.11.124) testen
|
||||
dig @192.168.11.181 google.com +short
|
||||
# Pi-hole Query Log zeigt danach: 192.168.11.124 → google.com ✓
|
||||
```
|
||||
|
||||
**Hinweis:** `externalTrafficPolicy: Local` bedeutet, dass Traffic nur an Nodes weitergeleitet wird, auf denen der Pod läuft. Ist der Pod auf einem anderen Node, gibt es keinen Fallback — dies ist bei Pi-hole gewünscht (kein NAT, echte IPs).
|
||||
|
||||
### Omada DNS-Proxy Konfiguration
|
||||
|
||||
- DNS-Proxy in Omada leitet Anfragen weiter an `192.168.11.181` (Pi-hole)
|
||||
- Clients erhalten die Router-IP als DNS → alle Queries gehen über den Proxy
|
||||
- Pi-hole sieht die Router-IP als Client (nicht einzelne Geräte) — **akzeptabler Kompromiss**
|
||||
- Secondary DNS (8.8.8.8) wurde entfernt damit Pi-hole Blocking nicht umgangen wird
|
||||
|
||||
### Finaler Status
|
||||
|
||||
| Komponente | Status |
|
||||
|---|---|
|
||||
| Pi-hole DNS `192.168.11.181:53` | ✓ Erreichbar |
|
||||
| Web-UI `https://pihole.192.168.11.180.nip.io/admin` | ✓ Erreichbar (HTTPS, self-signed) |
|
||||
| Blocking aktiv | ✓ |
|
||||
| Echte Client-IPs sichtbar | ✓ (nach externalTrafficPolicy: Local) |
|
||||
| Queries heute | ~40.000 |
|
||||
165
docs/10-gitea.md
Normal file
165
docs/10-gitea.md
Normal file
@@ -0,0 +1,165 @@
|
||||
# 10 · Gitea Installation
|
||||
|
||||
**Datum:** 2026-03-20
|
||||
|
||||
---
|
||||
|
||||
## Was wurde installiert
|
||||
|
||||
- **Gitea** (gitea/gitea:latest) — Self-hosted Git-Service
|
||||
- **PostgreSQL 16** (postgres:16-alpine) — Datenbank für Gitea
|
||||
- Beide Komponenten auf `rnk-wrk01` (nodeSelector)
|
||||
|
||||
---
|
||||
|
||||
## Manifest-Dateien
|
||||
|
||||
```
|
||||
~/homelab/k8s/gitea/
|
||||
kustomization.yaml
|
||||
namespace.yaml
|
||||
secret.yaml # DB- + Admin-Passwort (nicht ins Git!)
|
||||
pvc.yaml # gitea-data (10Gi) + gitea-postgres (2Gi) via Longhorn
|
||||
postgres.yaml # Deployment + ClusterIP Service
|
||||
deployment.yaml # Gitea Deployment mit initContainer
|
||||
service.yaml # gitea-web (ClusterIP) + gitea-ssh (LoadBalancer)
|
||||
ingress.yaml # Traefik HTTP Ingress
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Ausgeführte Befehle
|
||||
|
||||
```bash
|
||||
mkdir -p ~/homelab/k8s/gitea
|
||||
# ... alle YAML-Dateien erstellt ...
|
||||
|
||||
kubectl apply -k ~/homelab/k8s/gitea/
|
||||
|
||||
kubectl wait deployment/gitea-postgres -n gitea --for=condition=Available --timeout=120s
|
||||
kubectl wait deployment/gitea -n gitea --for=condition=Available --timeout=180s
|
||||
|
||||
# Admin-User anlegen (muss als git-User ausgeführt werden, nicht root)
|
||||
kubectl exec -n gitea deployment/gitea -- su git -c "gitea admin user create \
|
||||
--username admin \
|
||||
--password '<passwort>' \
|
||||
--email admin@homelab.local \
|
||||
--admin"
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Troubleshooting
|
||||
|
||||
### Problem: Postgres CrashLoopBackOff
|
||||
|
||||
**Fehler:**
|
||||
```
|
||||
initdb: error: directory "/var/lib/postgresql/data" exists but is not empty
|
||||
initdb: detail: It contains a lost+found directory, perhaps due to it being a mount point.
|
||||
```
|
||||
|
||||
**Ursache:** Longhorn-Volume enthält `lost+found` im Root — Postgres kann kein initdb durchführen wenn das Verzeichnis nicht leer ist.
|
||||
|
||||
**Fix:** `PGDATA` auf Unterverzeichnis setzen:
|
||||
```yaml
|
||||
env:
|
||||
- name: PGDATA
|
||||
value: /var/lib/postgresql/data/pgdata
|
||||
```
|
||||
|
||||
In `postgres.yaml` dauerhaft eingetragen.
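
Kurzer Check, dass initdb jetzt in das Unterverzeichnis schreibt (Skizze):

```bash
# erwartet u.a.: lost+found  pgdata
kubectl exec -n gitea deployment/gitea-postgres -- ls /var/lib/postgresql/data
```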
|
||||
|
||||
### Problem: Gitea läuft nicht als root
|
||||
|
||||
**Fehler:**
|
||||
```
|
||||
Gitea is not supposed to be run as root.
|
||||
```
|
||||
|
||||
**Fix:** `su git -c "gitea ..."` statt direktem Aufruf:
|
||||
```bash
|
||||
kubectl exec -n gitea deployment/gitea -- su git -c "gitea admin user create ..."
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## IP-Adressen & Zugang
|
||||
|
||||
| Service | Adresse | Protokoll |
|
||||
|---|---|---|
|
||||
| Web-UI | http://gitea.192.168.11.180.nip.io | HTTP via Traefik |
|
||||
| SSH | 192.168.11.182:22 | TCP (MetalLB) |
|
||||
| Postgres | ClusterIP intern | TCP 5432 |
|
||||
|
||||
**MetalLB IP:** `192.168.11.182` (SSH)
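
Beispiel für einen Clone über die SSH-IP (Platzhalter `<user>/<repo>` durch ein vorhandenes Repository ersetzen):

```bash
# SSH-Port 22 ist Standard, daher genügt die scp-artige Syntax
git clone git@192.168.11.182:<user>/<repo>.git
```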
|
||||
|
||||
---
|
||||
|
||||
## Konfiguration
|
||||
|
||||
| Parameter | Wert |
|
||||
|---|---|
|
||||
| `GITEA__server__DOMAIN` | gitea.192.168.11.180.nip.io |
|
||||
| `GITEA__server__ROOT_URL` | http://gitea.192.168.11.180.nip.io |
|
||||
| `GITEA__security__INSTALL_LOCK` | true (kein Setup-Wizard) |
|
||||
| `TZ` | Europe/Vienna |
|
||||
| Datenbank | PostgreSQL 16 |
|
||||
| Node | rnk-wrk01 (beide Pods) |
|
||||
|
||||
---
|
||||
|
||||
## Ergebnis
|
||||
|
||||
```
|
||||
NAME READY STATUS RESTARTS
|
||||
pod/gitea-5dddddf8bd-hg8nt 1/1 Running 0
|
||||
pod/gitea-postgres-75895d77ff-h6h55 1/1 Running 0
|
||||
|
||||
NAME TYPE CLUSTER-IP EXTERNAL-IP
|
||||
service/gitea-postgres ClusterIP 10.43.146.54 <none>
|
||||
service/gitea-ssh LoadBalancer 10.43.27.136 192.168.11.182
|
||||
service/gitea-web ClusterIP 10.43.166.44 <none>
|
||||
|
||||
NAME CLASS HOSTS ADDRESS
|
||||
ingress/gitea-web traefik gitea.192.168.11.180.nip.io 192.168.11.180
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Admin-Passwort gesetzt (2026-03-20)
|
||||
|
||||
Admin-Passwort nach Installation auf einheitliches Homelab-Passwort gesetzt.
|
||||
|
||||
**Problem:** Nach `change-password` via CLI erzwingt Gitea eine Passwortänderung im Browser (`must_change_password=true`).
|
||||
|
||||
**Fix:**
|
||||
```bash
|
||||
kubectl exec -n gitea deployment/gitea -- su git -c \
|
||||
"gitea admin user change-password --username admin \
|
||||
--password 'bmw520AUDI' --must-change-password=false"
|
||||
```
|
||||
|
||||
k8s Secret und `secret.yaml` ebenfalls aktualisiert:
|
||||
```bash
|
||||
kubectl patch secret gitea-secret -n gitea \
|
||||
--type='json' \
|
||||
-p='[{"op":"replace","path":"/data/admin-password","value":"<base64>"}]'
|
||||
```
|
||||
|
||||
**Verifikation:**
|
||||
```bash
|
||||
curl -s -o /dev/null -w "%{http_code}" \
|
||||
http://gitea.192.168.11.180.nip.io/api/v1/user \
|
||||
-u admin:bmw520AUDI
|
||||
# → 200 ✓
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Nächste Schritte
|
||||
|
||||
- SSH-Key für lokale Entwicklung hinterlegen
|
||||
- Repositories anlegen / migrieren (siehe API-Skizze unten)
|
||||
- Ggf. Gitea Actions aktivieren (CI/CD)
|
||||
- Backup-Strategie für Longhorn-Volumes festlegen
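
Repositories lassen sich auch per REST-API anlegen. Skizze (Repo-Name frei gewählt, Passwort einsetzen):

```bash
# Legt ein privates Repository für den Admin-User an
curl -s -X POST http://gitea.192.168.11.180.nip.io/api/v1/user/repos \
  -u admin:<passwort> \
  -H "Content-Type: application/json" \
  -d '{"name":"homelab-manifests","private":true}'
```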
|
||||
69
k8s/gitea/deployment.yaml
Normal file
69
k8s/gitea/deployment.yaml
Normal file
@@ -0,0 +1,69 @@
|
||||
apiVersion: apps/v1
|
||||
kind: Deployment
|
||||
metadata:
|
||||
name: gitea
|
||||
namespace: gitea
|
||||
spec:
|
||||
replicas: 1
|
||||
selector:
|
||||
matchLabels:
|
||||
app: gitea
|
||||
template:
|
||||
metadata:
|
||||
labels:
|
||||
app: gitea
|
||||
spec:
|
||||
nodeSelector:
|
||||
kubernetes.io/hostname: rnk-wrk01
|
||||
initContainers:
|
||||
- name: wait-for-postgres
|
||||
image: busybox
|
||||
command: ['sh', '-c', 'until nc -z gitea-postgres 5432; do echo waiting; sleep 2; done']
|
||||
containers:
|
||||
- name: gitea
|
||||
image: gitea/gitea:latest
|
||||
ports:
|
||||
- containerPort: 3000
|
||||
name: web
|
||||
- containerPort: 22
|
||||
name: ssh
|
||||
env:
|
||||
- name: GITEA__database__DB_TYPE
|
||||
value: postgres
|
||||
- name: GITEA__database__HOST
|
||||
value: gitea-postgres:5432
|
||||
- name: GITEA__database__NAME
|
||||
value: gitea
|
||||
- name: GITEA__database__USER
|
||||
value: gitea
|
||||
- name: GITEA__database__PASSWD
|
||||
valueFrom:
|
||||
secretKeyRef:
|
||||
name: gitea-secret
|
||||
key: postgres-password
|
||||
- name: GITEA__server__DOMAIN
|
||||
value: gitea.192.168.11.180.nip.io
|
||||
- name: GITEA__server__ROOT_URL
|
||||
value: http://gitea.192.168.11.180.nip.io
|
||||
- name: GITEA__server__SSH_DOMAIN
|
||||
value: "192.168.11.182"  # SSH-Clone-URLs sollen auf die MetalLB-IP zeigen, nicht auf Traefik (192.168.11.180)
|
||||
- name: GITEA__server__SSH_PORT
|
||||
value: "22"
|
||||
- name: GITEA__security__INSTALL_LOCK
|
||||
value: "true"
|
||||
- name: TZ
|
||||
value: "Europe/Vienna"
|
||||
volumeMounts:
|
||||
- name: data
|
||||
mountPath: /data
|
||||
resources:
|
||||
requests:
|
||||
memory: "128Mi"
|
||||
cpu: "100m"
|
||||
limits:
|
||||
memory: "512Mi"
|
||||
cpu: "500m"
|
||||
volumes:
|
||||
- name: data
|
||||
persistentVolumeClaim:
|
||||
claimName: gitea-data
|
||||
20
k8s/gitea/ingress.yaml
Normal file
20
k8s/gitea/ingress.yaml
Normal file
@@ -0,0 +1,20 @@
|
||||
apiVersion: networking.k8s.io/v1
|
||||
kind: Ingress
|
||||
metadata:
|
||||
name: gitea-web
|
||||
namespace: gitea
|
||||
annotations:
|
||||
traefik.ingress.kubernetes.io/router.entrypoints: web
|
||||
spec:
|
||||
ingressClassName: traefik
|
||||
rules:
|
||||
- host: gitea.192.168.11.180.nip.io
|
||||
http:
|
||||
paths:
|
||||
- path: /
|
||||
pathType: Prefix
|
||||
backend:
|
||||
service:
|
||||
name: gitea-web
|
||||
port:
|
||||
number: 3000
|
||||
10
k8s/gitea/kustomization.yaml
Normal file
10
k8s/gitea/kustomization.yaml
Normal file
@@ -0,0 +1,10 @@
|
||||
apiVersion: kustomize.config.k8s.io/v1beta1
|
||||
kind: Kustomization
|
||||
resources:
|
||||
- namespace.yaml
|
||||
- secret.yaml
|
||||
- pvc.yaml
|
||||
- postgres.yaml
|
||||
- deployment.yaml
|
||||
- service.yaml
|
||||
- ingress.yaml
|
||||
4
k8s/gitea/namespace.yaml
Normal file
4
k8s/gitea/namespace.yaml
Normal file
@@ -0,0 +1,4 @@
|
||||
apiVersion: v1
|
||||
kind: Namespace
|
||||
metadata:
|
||||
name: gitea
|
||||
60
k8s/gitea/postgres.yaml
Normal file
60
k8s/gitea/postgres.yaml
Normal file
@@ -0,0 +1,60 @@
|
||||
apiVersion: apps/v1
|
||||
kind: Deployment
|
||||
metadata:
|
||||
name: gitea-postgres
|
||||
namespace: gitea
|
||||
spec:
|
||||
replicas: 1
|
||||
selector:
|
||||
matchLabels:
|
||||
app: gitea-postgres
|
||||
template:
|
||||
metadata:
|
||||
labels:
|
||||
app: gitea-postgres
|
||||
spec:
|
||||
nodeSelector:
|
||||
kubernetes.io/hostname: rnk-wrk01
|
||||
containers:
|
||||
- name: postgres
|
||||
image: postgres:16-alpine
|
||||
env:
|
||||
- name: POSTGRES_DB
|
||||
value: gitea
|
||||
- name: POSTGRES_USER
|
||||
value: gitea
|
||||
- name: POSTGRES_PASSWORD
|
||||
valueFrom:
|
||||
secretKeyRef:
|
||||
name: gitea-secret
|
||||
key: postgres-password
|
||||
- name: PGDATA
|
||||
value: /var/lib/postgresql/data/pgdata
|
||||
ports:
|
||||
- containerPort: 5432
|
||||
volumeMounts:
|
||||
- name: data
|
||||
mountPath: /var/lib/postgresql/data
|
||||
resources:
|
||||
requests:
|
||||
memory: "256Mi"
|
||||
cpu: "100m"
|
||||
limits:
|
||||
memory: "512Mi"
|
||||
cpu: "500m"
|
||||
volumes:
|
||||
- name: data
|
||||
persistentVolumeClaim:
|
||||
claimName: gitea-postgres
|
||||
---
|
||||
apiVersion: v1
|
||||
kind: Service
|
||||
metadata:
|
||||
name: gitea-postgres
|
||||
namespace: gitea
|
||||
spec:
|
||||
selector:
|
||||
app: gitea-postgres
|
||||
ports:
|
||||
- port: 5432
|
||||
targetPort: 5432
|
||||
25
k8s/gitea/pvc.yaml
Normal file
25
k8s/gitea/pvc.yaml
Normal file
@@ -0,0 +1,25 @@
|
||||
apiVersion: v1
|
||||
kind: PersistentVolumeClaim
|
||||
metadata:
|
||||
name: gitea-data
|
||||
namespace: gitea
|
||||
spec:
|
||||
accessModes:
|
||||
- ReadWriteOnce
|
||||
storageClassName: longhorn
|
||||
resources:
|
||||
requests:
|
||||
storage: 10Gi
|
||||
---
|
||||
apiVersion: v1
|
||||
kind: PersistentVolumeClaim
|
||||
metadata:
|
||||
name: gitea-postgres
|
||||
namespace: gitea
|
||||
spec:
|
||||
accessModes:
|
||||
- ReadWriteOnce
|
||||
storageClassName: longhorn
|
||||
resources:
|
||||
requests:
|
||||
storage: 2Gi
|
||||
29
k8s/gitea/service.yaml
Normal file
29
k8s/gitea/service.yaml
Normal file
@@ -0,0 +1,29 @@
|
||||
apiVersion: v1
|
||||
kind: Service
|
||||
metadata:
|
||||
name: gitea-web
|
||||
namespace: gitea
|
||||
spec:
|
||||
type: ClusterIP
|
||||
selector:
|
||||
app: gitea
|
||||
ports:
|
||||
- name: web
|
||||
port: 3000
|
||||
targetPort: 3000
|
||||
---
|
||||
apiVersion: v1
|
||||
kind: Service
|
||||
metadata:
|
||||
name: gitea-ssh
|
||||
namespace: gitea
|
||||
annotations:
|
||||
metallb.universe.tf/loadBalancerIPs: 192.168.11.182
|
||||
spec:
|
||||
type: LoadBalancer
|
||||
selector:
|
||||
app: gitea
|
||||
ports:
|
||||
- name: ssh
|
||||
port: 22
|
||||
targetPort: 22
|
||||
18
k8s/metallb/metallb-config.yaml
Normal file
18
k8s/metallb/metallb-config.yaml
Normal file
@@ -0,0 +1,18 @@
|
||||
---
|
||||
apiVersion: metallb.io/v1beta1
|
||||
kind: IPAddressPool
|
||||
metadata:
|
||||
name: homelab-pool
|
||||
namespace: metallb-system
|
||||
spec:
|
||||
addresses:
|
||||
- 192.168.11.180-192.168.11.199
|
||||
---
|
||||
apiVersion: metallb.io/v1beta1
|
||||
kind: L2Advertisement
|
||||
metadata:
|
||||
name: homelab-l2
|
||||
namespace: metallb-system
|
||||
spec:
|
||||
ipAddressPools:
|
||||
- homelab-pool
|
||||
85
k8s/omada-mcp/deployment.yaml
Normal file
85
k8s/omada-mcp/deployment.yaml
Normal file
@@ -0,0 +1,85 @@
|
||||
---
|
||||
apiVersion: apps/v1
|
||||
kind: Deployment
|
||||
metadata:
|
||||
name: omada-mcp
|
||||
namespace: omada-mcp
|
||||
labels:
|
||||
app.kubernetes.io/name: omada-mcp
|
||||
app.kubernetes.io/version: latest
|
||||
spec:
|
||||
replicas: 1
|
||||
selector:
|
||||
matchLabels:
|
||||
app.kubernetes.io/name: omada-mcp
|
||||
template:
|
||||
metadata:
|
||||
labels:
|
||||
app.kubernetes.io/name: omada-mcp
|
||||
spec:
|
||||
containers:
|
||||
- name: omada-mcp
|
||||
image: jmtvms/tplink-omada-mcp:latest
|
||||
imagePullPolicy: Always
|
||||
ports:
|
||||
- name: http
|
||||
containerPort: 3000
|
||||
protocol: TCP
|
||||
env:
|
||||
- name: MCP_SERVER_USE_HTTP
|
||||
valueFrom:
|
||||
secretKeyRef:
|
||||
name: omada-mcp-credentials
|
||||
key: MCP_SERVER_USE_HTTP
|
||||
- name: MCP_HTTP_BIND_ADDR
|
||||
valueFrom:
|
||||
secretKeyRef:
|
||||
name: omada-mcp-credentials
|
||||
key: MCP_HTTP_BIND_ADDR
|
||||
- name: OMADA_BASE_URL
|
||||
valueFrom:
|
||||
secretKeyRef:
|
||||
name: omada-mcp-credentials
|
||||
key: OMADA_BASE_URL
|
||||
- name: OMADA_CLIENT_ID
|
||||
valueFrom:
|
||||
secretKeyRef:
|
||||
name: omada-mcp-credentials
|
||||
key: OMADA_CLIENT_ID
|
||||
- name: OMADA_CLIENT_SECRET
|
||||
valueFrom:
|
||||
secretKeyRef:
|
||||
name: omada-mcp-credentials
|
||||
key: OMADA_CLIENT_SECRET
|
||||
- name: OMADA_OMADAC_ID
|
||||
valueFrom:
|
||||
secretKeyRef:
|
||||
name: omada-mcp-credentials
|
||||
key: OMADA_OMADAC_ID
|
||||
- name: OMADA_STRICT_SSL
|
||||
valueFrom:
|
||||
secretKeyRef:
|
||||
name: omada-mcp-credentials
|
||||
key: OMADA_STRICT_SSL
|
||||
resources:
|
||||
requests:
|
||||
cpu: 50m
|
||||
memory: 64Mi
|
||||
limits:
|
||||
cpu: 200m
|
||||
memory: 256Mi
|
||||
livenessProbe:
|
||||
httpGet:
|
||||
path: /healthz
|
||||
port: 3000
|
||||
initialDelaySeconds: 15
|
||||
periodSeconds: 30
|
||||
failureThreshold: 3
|
||||
readinessProbe:
|
||||
httpGet:
|
||||
path: /healthz
|
||||
port: 3000
|
||||
initialDelaySeconds: 10
|
||||
periodSeconds: 15
|
||||
failureThreshold: 3
|
||||
restartPolicy: Always
|
||||
11
k8s/omada-mcp/kustomization.yaml
Normal file
11
k8s/omada-mcp/kustomization.yaml
Normal file
@@ -0,0 +1,11 @@
|
||||
---
|
||||
apiVersion: kustomize.config.k8s.io/v1beta1
|
||||
kind: Kustomization
|
||||
|
||||
namespace: omada-mcp
|
||||
|
||||
resources:
|
||||
- namespace.yaml
|
||||
- secret.yaml
|
||||
- deployment.yaml
|
||||
- service.yaml
|
||||
7
k8s/omada-mcp/namespace.yaml
Normal file
7
k8s/omada-mcp/namespace.yaml
Normal file
@@ -0,0 +1,7 @@
|
||||
---
|
||||
apiVersion: v1
|
||||
kind: Namespace
|
||||
metadata:
|
||||
name: omada-mcp
|
||||
labels:
|
||||
app.kubernetes.io/name: omada-mcp
|
||||
17
k8s/omada-mcp/service.yaml
Normal file
17
k8s/omada-mcp/service.yaml
Normal file
@@ -0,0 +1,17 @@
|
||||
---
|
||||
apiVersion: v1
|
||||
kind: Service
|
||||
metadata:
|
||||
name: omada-mcp
|
||||
namespace: omada-mcp
|
||||
labels:
|
||||
app.kubernetes.io/name: omada-mcp
|
||||
spec:
|
||||
selector:
|
||||
app.kubernetes.io/name: omada-mcp
|
||||
ports:
|
||||
- name: http
|
||||
protocol: TCP
|
||||
port: 3000
|
||||
targetPort: 3000
    nodePort: 31777  # fest vergeben, wird in Doku und .mcp.json referenziert
|
||||
type: NodePort
|
||||
58
k8s/pihole/deployment.yaml
Normal file
58
k8s/pihole/deployment.yaml
Normal file
@@ -0,0 +1,58 @@
|
||||
apiVersion: apps/v1
|
||||
kind: Deployment
|
||||
metadata:
|
||||
name: pihole
|
||||
namespace: pihole
|
||||
labels:
|
||||
app: pihole
|
||||
spec:
|
||||
replicas: 1
|
||||
selector:
|
||||
matchLabels:
|
||||
app: pihole
|
||||
template:
|
||||
metadata:
|
||||
labels:
|
||||
app: pihole
|
||||
spec:
|
||||
containers:
|
||||
- name: pihole
|
||||
image: pihole/pihole:latest
|
||||
env:
|
||||
- name: TZ
|
||||
value: "Europe/Berlin"
|
||||
- name: WEBPASSWORD
|
||||
valueFrom:
|
||||
secretKeyRef:
|
||||
name: pihole-secret
|
||||
key: password
|
||||
- name: PIHOLE_DNS_
|
||||
value: "1.1.1.1;1.0.0.1"
|
||||
- name: DNSMASQ_LISTENING
|
||||
value: "all"
|
||||
ports:
|
||||
- containerPort: 80
|
||||
name: web
|
||||
protocol: TCP
|
||||
- containerPort: 53
|
||||
name: dns-tcp
|
||||
protocol: TCP
|
||||
- containerPort: 53
|
||||
name: dns-udp
|
||||
protocol: UDP
|
||||
volumeMounts:
|
||||
- name: pihole-data
|
||||
mountPath: /etc/pihole
|
||||
- name: dnsmasq-data
|
||||
mountPath: /etc/dnsmasq.d
|
||||
securityContext:
|
||||
capabilities:
|
||||
add:
|
||||
- NET_ADMIN
|
||||
volumes:
|
||||
- name: pihole-data
|
||||
persistentVolumeClaim:
|
||||
claimName: pihole-data
|
||||
- name: dnsmasq-data
|
||||
persistentVolumeClaim:
|
||||
claimName: pihole-dnsmasq
|
||||
23
k8s/pihole/ingress.yaml
Normal file
23
k8s/pihole/ingress.yaml
Normal file
@@ -0,0 +1,23 @@
|
||||
apiVersion: networking.k8s.io/v1
|
||||
kind: Ingress
|
||||
metadata:
|
||||
name: pihole-web
|
||||
namespace: pihole
|
||||
annotations:
|
||||
traefik.ingress.kubernetes.io/router.entrypoints: websecure
|
||||
traefik.ingress.kubernetes.io/router.tls: "true"
|
||||
spec:
|
||||
tls:
|
||||
- hosts:
|
||||
- pihole.192.168.11.180.nip.io
|
||||
rules:
|
||||
- host: pihole.192.168.11.180.nip.io
|
||||
http:
|
||||
paths:
|
||||
- path: /
|
||||
pathType: Prefix
|
||||
backend:
|
||||
service:
|
||||
name: pihole-web
|
||||
port:
|
||||
number: 80
|
||||
11
k8s/pihole/kustomization.yaml
Normal file
11
k8s/pihole/kustomization.yaml
Normal file
@@ -0,0 +1,11 @@
|
||||
apiVersion: kustomize.config.k8s.io/v1beta1
|
||||
kind: Kustomization
|
||||
|
||||
namespace: pihole
|
||||
|
||||
resources:
|
||||
- namespace.yaml
|
||||
- pvc.yaml
|
||||
- deployment.yaml
|
||||
- services.yaml
|
||||
- ingress.yaml
|
||||
4
k8s/pihole/namespace.yaml
Normal file
4
k8s/pihole/namespace.yaml
Normal file
@@ -0,0 +1,4 @@
|
||||
apiVersion: v1
|
||||
kind: Namespace
|
||||
metadata:
|
||||
name: pihole
|
||||
26
k8s/pihole/pvc.yaml
Normal file
26
k8s/pihole/pvc.yaml
Normal file
@@ -0,0 +1,26 @@
|
||||
---
|
||||
apiVersion: v1
|
||||
kind: PersistentVolumeClaim
|
||||
metadata:
|
||||
name: pihole-data
|
||||
namespace: pihole
|
||||
spec:
|
||||
accessModes:
|
||||
- ReadWriteOnce
|
||||
storageClassName: longhorn
|
||||
resources:
|
||||
requests:
|
||||
storage: 1Gi
|
||||
---
|
||||
apiVersion: v1
|
||||
kind: PersistentVolumeClaim
|
||||
metadata:
|
||||
name: pihole-dnsmasq
|
||||
namespace: pihole
|
||||
spec:
|
||||
accessModes:
|
||||
- ReadWriteOnce
|
||||
storageClassName: longhorn
|
||||
resources:
|
||||
requests:
|
||||
storage: 128Mi
|
||||
56
k8s/pihole/services.yaml
Normal file
56
k8s/pihole/services.yaml
Normal file
@@ -0,0 +1,56 @@
|
||||
---
|
||||
# DNS TCP Service (LoadBalancer with fixed IP)
|
||||
apiVersion: v1
|
||||
kind: Service
|
||||
metadata:
|
||||
name: pihole-dns-tcp
|
||||
namespace: pihole
|
||||
annotations:
|
||||
metallb.universe.tf/loadBalancerIPs: 192.168.11.181
|
||||
metallb.universe.tf/allow-shared-ip: pihole-dns
|
||||
spec:
|
||||
type: LoadBalancer
|
||||
externalTrafficPolicy: Local
|
||||
selector:
|
||||
app: pihole
|
||||
ports:
|
||||
- name: dns-tcp
|
||||
port: 53
|
||||
targetPort: 53
|
||||
protocol: TCP
|
||||
---
|
||||
# DNS UDP Service (LoadBalancer with fixed IP)
|
||||
apiVersion: v1
|
||||
kind: Service
|
||||
metadata:
|
||||
name: pihole-dns-udp
|
||||
namespace: pihole
|
||||
annotations:
|
||||
metallb.universe.tf/loadBalancerIPs: 192.168.11.181
|
||||
metallb.universe.tf/allow-shared-ip: pihole-dns
|
||||
spec:
|
||||
type: LoadBalancer
|
||||
externalTrafficPolicy: Local
|
||||
selector:
|
||||
app: pihole
|
||||
ports:
|
||||
- name: dns-udp
|
||||
port: 53
|
||||
targetPort: 53
|
||||
protocol: UDP
|
||||
---
|
||||
# Web UI Service (ClusterIP, exposed via Ingress)
|
||||
apiVersion: v1
|
||||
kind: Service
|
||||
metadata:
|
||||
name: pihole-web
|
||||
namespace: pihole
|
||||
spec:
|
||||
type: ClusterIP
|
||||
selector:
|
||||
app: pihole
|
||||
ports:
|
||||
- name: web
|
||||
port: 80
|
||||
targetPort: 80
|
||||
protocol: TCP
|
||||