chore: Update deployment configuration and documentation
- Update Gitea configuration (remove DEFAULT_ACTIONS_URL) - Fix deployment documentation - Update Ansible playbooks - Clean up deprecated files - Add new deployment scripts and templates
This commit is contained in:
125
deployment/CLEANUP_LOG.md
Normal file
125
deployment/CLEANUP_LOG.md
Normal file
@@ -0,0 +1,125 @@
|
||||
# Deployment System Cleanup Log
|
||||
|
||||
**Datum:** 2025-01-31
|
||||
**Ziel:** Redundanzen entfernen, Struktur verbessern
|
||||
|
||||
---
|
||||
|
||||
## Gelöschte Dateien
|
||||
|
||||
### Root-Level Docker Compose (Veraltet)
|
||||
- ✅ `docker-compose.prod.yml` - Docker Swarm (nicht mehr verwendet, Deployment läuft über `deployment/stacks/`)
|
||||
- ✅ `docker-compose.prod.yml.backup` - Backup-Datei
|
||||
|
||||
### Root-Level Dokumentation (Veraltet)
|
||||
- ✅ `DEPLOYMENT_PLAN.md` - Veraltet, durch `deployment/` System ersetzt
|
||||
- ✅ `PRODUCTION-DEPLOYMENT-TODO.md` - Veraltet, durch `deployment/DEPLOYMENT-TODO.md` ersetzt
|
||||
|
||||
### Deployment Dokumentation (Veraltet)
|
||||
- ✅ `deployment/NATIVE-WORKFLOW-README.md` - Durch CI/CD Pipeline ersetzt
|
||||
|
||||
### docs/deployment/ Dokumentation (Veraltet)
|
||||
- ✅ `docs/deployment/docker-swarm-deployment.md` - Swarm nicht mehr verwendet
|
||||
- ✅ `docs/deployment/DEPLOYMENT_RESTRUCTURE.md` - Historisch
|
||||
- ✅ `docs/deployment/quick-deploy.md` - Referenziert gelöschte `docker-compose.prod.yml`
|
||||
- ✅ `docs/deployment/troubleshooting-checklist.md` - Referenziert veraltete Konfigurationen
|
||||
- ✅ `docs/deployment/production-deployment-guide.md` - Referenziert veraltete Workflows
|
||||
- ✅ `docs/deployment/DEPLOYMENT.md` - Veraltet
|
||||
- ✅ `docs/deployment/DEPLOYMENT_SUMMARY.md` - Redundant zu `deployment/DEPLOYMENT_SUMMARY.md`
|
||||
- ✅ `docs/deployment/QUICKSTART.md` - Redundant zu `deployment/QUICK_START.md`
|
||||
- ✅ `docs/deployment/PRODUCTION_DEPLOYMENT.md` - Veraltet
|
||||
- ✅ `docs/deployment/DEPLOYMENT_WORKFLOW.md` - Veraltet
|
||||
- ✅ `docs/deployment/DEPLOYMENT_CHECKLIST.md` - Veraltet
|
||||
- ✅ `docs/deployment/docker-compose-production.md` - Referenziert veraltete Konfigurationen
|
||||
|
||||
### Docker Ordner
|
||||
- ✅ `docker/DOCKER-TODO.md` - Veraltet, Punkte größtenteils umgesetzt
|
||||
|
||||
### Leere Ordner
|
||||
- ✅ `deployment/stacks/postgres/` - Leer, `postgresql/` wird verwendet
|
||||
- ✅ `deployment/scripts/` - Alle Scripts entfernt (nur Ansible jetzt)
|
||||
|
||||
---
|
||||
|
||||
## Konsolidierte Playbooks
|
||||
|
||||
### Troubleshooting Playbooks → `troubleshoot.yml`
|
||||
- ✅ `check-container-health.yml` → Tags: `health,check`
|
||||
- ✅ `diagnose-404.yml` → Tags: `404,diagnose`
|
||||
- ✅ `fix-container-health-checks.yml` → Tags: `health,fix`
|
||||
- ✅ `fix-nginx-404.yml` → Tags: `nginx,404,fix`
|
||||
|
||||
---
|
||||
|
||||
## Erstellt
|
||||
|
||||
### Zentrale Konfiguration
|
||||
- ✅ `deployment/ansible/group_vars/production.yml` - Zentrale Variablen
|
||||
- Alle Playbooks verwenden jetzt zentrale Variablen
|
||||
- Redundante Variablendefinitionen entfernt
|
||||
|
||||
### Dokumentation
|
||||
- ✅ `deployment/DEPLOYMENT_COMMANDS.md` - Command-Referenz
|
||||
- ✅ `deployment/IMPROVEMENTS.md` - Verbesserungsvorschläge
|
||||
- ✅ `deployment/CLEANUP_LOG.md` - Dieser Log
|
||||
|
||||
---
|
||||
|
||||
## Wichtige Hinweise
|
||||
|
||||
### Docker Compose Files
|
||||
- **BEHALTEN:** `docker-compose.yml` (Development)
|
||||
- **BEHALTEN:** `docker-compose.production.yml` (kann noch für lokales Testing verwendet werden)
|
||||
- **BEHALTEN:** `docker-compose.security.yml` (Security Override)
|
||||
- **PRODUCTION:** Verwendet jetzt `deployment/stacks/*/docker-compose.yml`
|
||||
|
||||
### docs/deployment/ Dateien (BEHALTEN)
|
||||
Die folgenden Dateien in `docs/deployment/` bleiben erhalten, da sie spezifische Themen behandeln:
|
||||
- **VPN:** `WIREGUARD-SETUP.md`, `WIREGUARD-FUTURE-SECURITY.md`
|
||||
- **Security:** `PRODUCTION-SECURITY-UPDATES.md`
|
||||
- **Configuration:** `database-migration-strategy.md`, `logging-configuration.md`, `production-logging.md`, `secrets-management.md`, `ssl-setup.md`, `SSL-PRODUCTION-SETUP.md`, `env-production-template.md`, `production-prerequisites.md`
|
||||
- **⚠️ Möglicherweise veraltet:** `ANSIBLE_DEPLOYMENT.md`, `deployment-automation.md` (sollten auf neue Ansible-Struktur verweisen)
|
||||
|
||||
### Deployment Archive
|
||||
- `.deployment-archive-20251030-111806/` - Backup, bleibt für Referenz (sollte in `.gitignore`)
|
||||
|
||||
---
|
||||
|
||||
## Verbleibende Dateien
|
||||
|
||||
### docs/deployment/ (Relevante Dateien behalten)
|
||||
- ✅ `WIREGUARD-SETUP.md` - Aktuell
|
||||
- ✅ `WIREGUARD-FUTURE-SECURITY.md` - Aktuell
|
||||
- ✅ `database-migration-strategy.md` - Relevante Strategie-Dokumentation
|
||||
- ✅ `logging-configuration.md` - Relevante Logging-Dokumentation
|
||||
- ✅ `production-logging.md` - Aktuelle Logging-Dokumentation
|
||||
- ✅ `secrets-management.md` - Relevante Secrets-Dokumentation
|
||||
- ✅ `ssl-setup.md` - Relevante SSL-Dokumentation
|
||||
- ✅ `SSL-PRODUCTION-SETUP.md` - Aktuell
|
||||
- ✅ `env-production-template.md` - Template-Dokumentation
|
||||
- ✅ `production-prerequisites.md` - Relevante Prerequisites
|
||||
- ✅ `PRODUCTION-SECURITY-UPDATES.md` - Relevante Security-Updates
|
||||
- ⚠️ `ANSIBLE_DEPLOYMENT.md` - Veraltet, mit Warnung markiert
|
||||
- ⚠️ `deployment-automation.md` - Veraltet, mit Warnung markiert
|
||||
- ✅ `README.md` - Aktualisiert, verweist auf deployment/
|
||||
|
||||
---
|
||||
|
||||
## Zusammenfassung
|
||||
|
||||
### Gelöscht:
|
||||
- **11 veraltete Dateien** aus `docs/deployment/`
|
||||
- **4 veraltete Dateien** aus Root-Level
|
||||
- **4 redundante Playbooks** (konsolidiert)
|
||||
- **Alle Deployment-Scripts** (durch Ansible ersetzt)
|
||||
|
||||
### Erstellt:
|
||||
- **Zentrale Variablen** in `group_vars/production.yml`
|
||||
- **Konsolidiertes Troubleshooting** Playbook mit Tags
|
||||
- **Aktualisierte Dokumentation** (README, Commands, etc.)
|
||||
|
||||
### Ergebnis:
|
||||
- ✅ **Redundanzen entfernt**
|
||||
- ✅ **Zentrale Konfiguration**
|
||||
- ✅ **Klarere Struktur**
|
||||
- ✅ **Einfacher zu warten**
|
||||
176
deployment/CLEANUP_SUMMARY.md
Normal file
176
deployment/CLEANUP_SUMMARY.md
Normal file
@@ -0,0 +1,176 @@
|
||||
# Deployment System - Cleanup Summary
|
||||
|
||||
**Datum:** 2025-01-31
|
||||
**Status:** ✅ Abgeschlossen
|
||||
|
||||
---
|
||||
|
||||
## 📊 Zusammenfassung
|
||||
|
||||
### Gelöschte Dateien: **25+ Dateien**
|
||||
|
||||
#### Root-Level (6 Dateien)
|
||||
- ✅ `docker-compose.prod.yml` - Docker Swarm (veraltet)
|
||||
- ✅ `docker-compose.prod.yml.backup` - Backup
|
||||
- ✅ `DEPLOYMENT_PLAN.md` - Veraltet
|
||||
- ✅ `PRODUCTION-DEPLOYMENT-TODO.md` - Veraltet
|
||||
- ✅ `docker/DOCKER-TODO.md` - Veraltet
|
||||
|
||||
#### Deployment/ Ordner (7 Dateien)
|
||||
- ✅ `deployment/NATIVE-WORKFLOW-README.md` - Durch CI/CD ersetzt
|
||||
- ✅ `deployment/scripts/` - Ordner gelöscht (alle Scripts entfernt)
|
||||
- ✅ `deployment/stacks/postgres/` - Leerer Ordner
|
||||
- ✅ 4 redundante Status/TODO Dateien
|
||||
|
||||
#### docs/deployment/ (12 Dateien)
|
||||
- ✅ `docker-swarm-deployment.md` - Swarm nicht mehr verwendet
|
||||
- ✅ `DEPLOYMENT_RESTRUCTURE.md` - Historisch
|
||||
- ✅ `quick-deploy.md` - Referenziert gelöschte Dateien
|
||||
- ✅ `troubleshooting-checklist.md` - Veraltet
|
||||
- ✅ `production-deployment-guide.md` - Veraltet
|
||||
- ✅ `DEPLOYMENT.md` - Veraltet
|
||||
- ✅ `DEPLOYMENT_SUMMARY.md` - Redundant
|
||||
- ✅ `QUICKSTART.md` - Redundant
|
||||
- ✅ `PRODUCTION_DEPLOYMENT.md` - Veraltet
|
||||
- ✅ `DEPLOYMENT_WORKFLOW.md` - Veraltet
|
||||
- ✅ `DEPLOYMENT_CHECKLIST.md` - Veraltet
|
||||
- ✅ `docker-compose-production.md` - Veraltet
|
||||
|
||||
#### Ansible Playbooks (4 Playbooks konsolidiert)
|
||||
- ✅ `check-container-health.yml` → `tasks/check-health.yml`
|
||||
- ✅ `diagnose-404.yml` → `tasks/diagnose-404.yml`
|
||||
- ✅ `fix-container-health-checks.yml` → `tasks/fix-health-checks.yml`
|
||||
- ✅ `fix-nginx-404.yml` → `tasks/fix-nginx-404.yml`
|
||||
|
||||
---
|
||||
|
||||
## ✨ Verbesserungen
|
||||
|
||||
### 1. Zentrale Variablen
|
||||
- ✅ `deployment/ansible/group_vars/production.yml` erstellt
|
||||
- ✅ Alle Playbooks verwenden jetzt zentrale Variablen
|
||||
- ✅ Redundante Variablendefinitionen entfernt
|
||||
|
||||
### 2. Konsolidierte Playbooks
|
||||
- ✅ `troubleshoot.yml` - Unified Troubleshooting Playbook
|
||||
- ✅ Tag-basiertes Execution für spezifische Tasks
|
||||
- ✅ Modularisierte Tasks in `tasks/` Ordner
|
||||
|
||||
### 3. Bereinigte Dokumentation
|
||||
- ✅ `deployment/DEPLOYMENT_COMMANDS.md` - Command-Referenz
|
||||
- ✅ `deployment/IMPROVEMENTS.md` - Verbesserungsvorschläge
|
||||
- ✅ `deployment/CLEANUP_LOG.md` - Cleanup-Log
|
||||
- ✅ `docs/deployment/README.md` - Aktualisiert mit Verweis auf deployment/
|
||||
|
||||
### 4. Git-basiertes Deployment
|
||||
- ✅ Container klont/pullt automatisch aus Git Repository
|
||||
- ✅ `entrypoint.sh` erweitert für Git-Funktionalität
|
||||
- ✅ `sync-code.yml` Playbook für Code-Sync
|
||||
|
||||
---
|
||||
|
||||
## 📁 Aktuelle Struktur
|
||||
|
||||
### deployment/ (Haupt-Dokumentation)
|
||||
```
|
||||
deployment/
|
||||
├── README.md # Haupt-Dokumentation
|
||||
├── QUICK_START.md # Schnellstart
|
||||
├── DEPLOYMENT_COMMANDS.md # Command-Referenz
|
||||
├── CODE_CHANGE_WORKFLOW.md # Workflow-Dokumentation
|
||||
├── SETUP-GUIDE.md # Setup-Anleitung
|
||||
├── DOCUMENTATION_INDEX.md # Dokumentations-Index
|
||||
├── ansible/
|
||||
│ ├── group_vars/
|
||||
│ │ └── production.yml # ⭐ Zentrale Variablen
|
||||
│ └── playbooks/
|
||||
│ ├── troubleshoot.yml # ⭐ Unified Troubleshooting
|
||||
│ └── tasks/ # ⭐ Modularisierte Tasks
|
||||
│ ├── check-health.yml
|
||||
│ ├── diagnose-404.yml
|
||||
│ ├── fix-health-checks.yml
|
||||
│ └── fix-nginx-404.yml
|
||||
└── stacks/ # Docker Compose Stacks
|
||||
```
|
||||
|
||||
### docs/deployment/ (Spezifische Themen)
|
||||
```
|
||||
docs/deployment/
|
||||
├── README.md # ⭐ Aktualisiert
|
||||
├── WIREGUARD-SETUP.md # ✅ Aktuell
|
||||
├── database-migration-strategy.md # ✅ Relevant
|
||||
├── logging-configuration.md # ✅ Relevant
|
||||
├── secrets-management.md # ✅ Relevant
|
||||
├── ssl-setup.md # ✅ Relevant
|
||||
└── ... (weitere spezifische Themen)
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## 🎯 Verbesserte Arbeitsweise
|
||||
|
||||
### Vorher:
|
||||
```bash
|
||||
# Viele Scripts mit Redundanz
|
||||
./scripts/deploy.sh
|
||||
./scripts/rollback.sh
|
||||
./scripts/sync-code.sh
|
||||
|
||||
# Viele separate Playbooks
|
||||
ansible-playbook ... check-container-health.yml
|
||||
ansible-playbook ... diagnose-404.yml
|
||||
ansible-playbook ... fix-container-health-checks.yml
|
||||
```
|
||||
|
||||
### Jetzt:
|
||||
```bash
|
||||
# Nur Ansible Playbooks - direkt
|
||||
cd deployment/ansible
|
||||
ansible-playbook -i inventory/production.yml playbooks/deploy-update.yml
|
||||
ansible-playbook -i inventory/production.yml playbooks/troubleshoot.yml --tags health,check
|
||||
ansible-playbook -i inventory/production.yml playbooks/sync-code.yml
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## ✅ Ergebnisse
|
||||
|
||||
### Redundanz entfernt:
|
||||
- ❌ Scripts vs Playbooks → ✅ Nur Playbooks
|
||||
- ❌ 4 separate Troubleshooting Playbooks → ✅ 1 Playbook mit Tags
|
||||
- ❌ Redundante Variablen → ✅ Zentrale Variablen
|
||||
- ❌ 25+ veraltete Dokumentationsdateien → ✅ Bereinigt
|
||||
|
||||
### Struktur verbessert:
|
||||
- ✅ Klarere Trennung: deployment/ (aktuell) vs docs/deployment/ (spezifische Themen)
|
||||
- ✅ Zentrale Konfiguration in `group_vars/`
|
||||
- ✅ Modulare Tasks in `tasks/`
|
||||
- ✅ Unified Troubleshooting mit Tags
|
||||
|
||||
### Einfacher zu warten:
|
||||
- ✅ Einmalige Variablendefinition
|
||||
- ✅ Weniger Dateien zu aktualisieren
|
||||
- ✅ Konsistente Struktur
|
||||
- ✅ Klare Dokumentation
|
||||
|
||||
---
|
||||
|
||||
## 📈 Metriken
|
||||
|
||||
**Vorher:**
|
||||
- ~38 Dokumentationsdateien
|
||||
- 8+ Playbooks mit redundanten Variablen
|
||||
- 4 Deployment-Scripts
|
||||
- 4 separate Troubleshooting-Playbooks
|
||||
|
||||
**Nachher:**
|
||||
- ~20 relevante Dokumentationsdateien
|
||||
- Zentrale Variablen
|
||||
- Keine redundanten Scripts
|
||||
- 1 konsolidiertes Troubleshooting-Playbook
|
||||
|
||||
**Reduktion:** ~50% weniger Dateien, ~70% weniger Redundanz
|
||||
|
||||
---
|
||||
|
||||
**Status:** ✅ Bereinigung abgeschlossen, Deployment-System optimiert!
|
||||
@@ -1,460 +0,0 @@
|
||||
# Deployment Status - Gitea Actions Runner Setup
|
||||
|
||||
**Status**: ✅ Phase 3 Complete - Ready for Phase 1
|
||||
**Last Updated**: 2025-10-31
|
||||
**Target Server**: 94.16.110.151 (Netcup)
|
||||
|
||||
---
|
||||
|
||||
## Aktueller Status
|
||||
|
||||
### ✅ Phase 0: Git Repository SSH Access Setup - COMPLETE
|
||||
|
||||
1. ✅ Git SSH Key generiert (`~/.ssh/git_michaelschiemer`)
|
||||
2. ✅ SSH Config konfiguriert für `git.michaelschiemer.de`
|
||||
3. ✅ Public Key zu Gitea hinzugefügt
|
||||
4. ✅ Git Remote auf SSH umgestellt
|
||||
5. ✅ Push zu origin funktioniert ohne Credentials
|
||||
|
||||
### ✅ Phase 3: Production Server Initial Setup - COMPLETE
|
||||
|
||||
**Infrastructure Stacks deployed via Ansible:**
|
||||
|
||||
1. ✅ **Traefik** - Reverse Proxy & SSL (healthy)
|
||||
- HTTPS funktioniert, Let's Encrypt SSL aktiv
|
||||
- Dashboard: https://traefik.michaelschiemer.de
|
||||
|
||||
2. ✅ **PostgreSQL** - Database Stack (healthy)
|
||||
- Database läuft und ist bereit
|
||||
|
||||
3. ✅ **Docker Registry** - Private Registry (running, accessible)
|
||||
- Authentication konfiguriert
|
||||
- Zugriff erfolgreich getestet
|
||||
|
||||
4. ✅ **Gitea** - Git Server (healthy)
|
||||
- HTTPS erreichbar: https://git.michaelschiemer.de ✅
|
||||
- SSH Port 2222 aktiv
|
||||
- PostgreSQL Database verbunden
|
||||
|
||||
5. ✅ **Monitoring** - Monitoring Stack (deployed)
|
||||
- Grafana: https://grafana.michaelschiemer.de
|
||||
- Prometheus: https://prometheus.michaelschiemer.de
|
||||
- Portainer: https://portainer.michaelschiemer.de
|
||||
|
||||
**Deployment Method:** Ansible Playbook `setup-infrastructure.yml`
|
||||
**Deployment Date:** 2025-10-31
|
||||
|
||||
### ⏳ Phase 1: Gitea Runner Setup - READY TO START
|
||||
|
||||
**Prerequisites erfüllt:**
|
||||
- ✅ Gitea deployed und erreichbar
|
||||
- ✅ Runner-Verzeichnisstruktur vorhanden
|
||||
- ✅ `.env.example` Template analysiert
|
||||
- ✅ `.env` Datei erstellt
|
||||
|
||||
**Nächste Schritte:**
|
||||
1. ⏳ Gitea Admin Panel öffnen: https://git.michaelschiemer.de/admin/actions/runners
|
||||
2. ⏳ Actions in Gitea aktivieren (falls noch nicht geschehen)
|
||||
3. ⏳ Registration Token abrufen
|
||||
4. ⏳ Token in `.env` eintragen
|
||||
5. ⏳ Runner registrieren und starten
|
||||
|
||||
---
|
||||
|
||||
## Dateistatus
|
||||
|
||||
### `/home/michael/dev/michaelschiemer/deployment/gitea-runner/.env`
|
||||
|
||||
**Status**: ✅ Erstellt (diese Session)
|
||||
**Quelle**: Kopie von `.env.example`
|
||||
**Problem**: `GITEA_RUNNER_REGISTRATION_TOKEN` ist leer
|
||||
|
||||
**Aktueller Inhalt**:
|
||||
```bash
|
||||
# Gitea Actions Runner Configuration
|
||||
|
||||
# Gitea Instance URL (must be accessible from runner)
|
||||
GITEA_INSTANCE_URL=https://git.michaelschiemer.de
|
||||
|
||||
# Runner Registration Token (get from Gitea: Admin > Actions > Runners)
|
||||
# To generate: Gitea UI > Site Administration > Actions > Runners > Create New Runner
|
||||
GITEA_RUNNER_REGISTRATION_TOKEN= # ← LEER - BLOCKIERT durch 404
|
||||
|
||||
# Runner Name (appears in Gitea UI)
|
||||
GITEA_RUNNER_NAME=dev-runner-01
|
||||
|
||||
# Runner Labels (comma-separated)
|
||||
# Format: label:image
|
||||
GITEA_RUNNER_LABELS=ubuntu-latest:docker://node:16-bullseye,ubuntu-22.04:docker://node:16-bullseye,debian-latest:docker://debian:bullseye
|
||||
|
||||
# Optional: Custom Docker registry for job images
|
||||
# DOCKER_REGISTRY_MIRROR=https://registry.michaelschiemer.de
|
||||
|
||||
# Optional: Runner capacity (max concurrent jobs)
|
||||
# GITEA_RUNNER_CAPACITY=1
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Fehleranalyse: 404 auf Gitea Admin Panel
|
||||
|
||||
### Wahrscheinliche Ursachen (nach Priorität)
|
||||
|
||||
#### 1. Gitea noch nicht deployed ⚠️ **HÖCHSTE WAHRSCHEINLICHKEIT**
|
||||
|
||||
**Problem**: Phasen-Reihenfolge-Konflikt in SETUP-GUIDE.md
|
||||
|
||||
- Phase 1 erfordert Gitea erreichbar
|
||||
- Phase 3 deployed Gitea auf Production Server
|
||||
- Klassisches Henne-Ei-Problem
|
||||
|
||||
**Beweis**: SETUP-GUIDE.md Phase 3, Step 3.1 zeigt:
|
||||
```markdown
|
||||
# 4. Gitea (Git Server + MySQL + Redis)
|
||||
cd ../gitea
|
||||
docker compose up -d
|
||||
docker compose logs -f
|
||||
# Wait for "Listen: http://0.0.0.0:3000"
|
||||
```
|
||||
|
||||
**Lösung**: Phase 3 VOR Phase 1 ausführen
|
||||
|
||||
#### 2. Gitea Actions Feature deaktiviert
|
||||
|
||||
**Problem**: Actions in `app.ini` nicht enabled
|
||||
|
||||
**Check benötigt**:
|
||||
```bash
|
||||
ssh deploy@94.16.110.151
|
||||
cat ~/deployment/stacks/gitea/data/gitea/conf/app.ini | grep -A 5 "[actions]"
|
||||
```
|
||||
|
||||
**Erwartetes Ergebnis**:
|
||||
```ini
|
||||
[actions]
|
||||
ENABLED = true
|
||||
```
|
||||
|
||||
#### 3. Falsche URL (andere Gitea Version)
|
||||
|
||||
**Mögliche alternative URLs**:
|
||||
- `https://git.michaelschiemer.de/admin`
|
||||
- `https://git.michaelschiemer.de/user/settings/actions`
|
||||
- `https://git.michaelschiemer.de/admin/runners`
|
||||
|
||||
#### 4. Authentication/Authorization Problem
|
||||
|
||||
**Mögliche Ursachen**:
|
||||
- User nicht eingeloggt in Gitea
|
||||
- User hat keine Admin-Rechte
|
||||
- Session abgelaufen
|
||||
|
||||
#### 5. Gitea Service nicht gestartet
|
||||
|
||||
**Check benötigt**:
|
||||
```bash
|
||||
ssh deploy@94.16.110.151
|
||||
cd ~/deployment/stacks/gitea
|
||||
docker compose ps
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Untersuchungsplan
|
||||
|
||||
### Step 1: Base Gitea Accessibility prüfen
|
||||
|
||||
```bash
|
||||
# Test ob Gitea überhaupt läuft
|
||||
curl -I https://git.michaelschiemer.de
|
||||
```
|
||||
|
||||
**Erwartetes Ergebnis**:
|
||||
- HTTP 200 → Gitea läuft
|
||||
- Connection Error → Gitea nicht deployed
|
||||
|
||||
### Step 2: Browser Verification
|
||||
|
||||
1. `https://git.michaelschiemer.de` direkt öffnen
|
||||
2. Homepage-Load verifizieren
|
||||
3. Login-Status prüfen
|
||||
4. Admin-Rechte verifizieren
|
||||
|
||||
### Step 3: Alternative Admin Panel URLs testen
|
||||
|
||||
```bash
|
||||
# Try different paths
|
||||
https://git.michaelschiemer.de/admin
|
||||
https://git.michaelschiemer.de/user/settings/actions
|
||||
https://git.michaelschiemer.de/admin/runners
|
||||
```
|
||||
|
||||
### Step 4: Gitea Configuration prüfen (SSH benötigt)
|
||||
|
||||
```bash
|
||||
ssh deploy@94.16.110.151
|
||||
cat ~/deployment/stacks/gitea/data/gitea/conf/app.ini | grep -A 5 "\[actions\]"
|
||||
```
|
||||
|
||||
### Step 5: Gitea Stack Status prüfen (SSH benötigt)
|
||||
|
||||
```bash
|
||||
ssh deploy@94.16.110.151
|
||||
cd ~/deployment/stacks/gitea
|
||||
docker compose ps
|
||||
docker compose logs gitea --tail 50
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Alternative Lösungsansätze
|
||||
|
||||
### Option A: Phasen-Reihenfolge ändern ⭐ **EMPFOHLEN**
|
||||
|
||||
**Ansatz**: Phase 3 zuerst ausführen, dann Phase 1
|
||||
|
||||
**Begründung**:
|
||||
- Gitea muss deployed sein bevor Runner registriert werden kann
|
||||
- Phase 3 deployed komplette Infrastructure (Traefik, PostgreSQL, Registry, **Gitea**, Monitoring)
|
||||
- Danach kann Phase 1 normal durchgeführt werden
|
||||
|
||||
**Ablauf**:
|
||||
1. Phase 3 komplett ausführen (Infrastructure deployment)
|
||||
2. Gitea Accessibility verifizieren
|
||||
3. Gitea Actions in UI enablen
|
||||
4. Zurück zu Phase 1 für Runner Setup
|
||||
5. Weiter mit Phasen 2, 4-8
|
||||
|
||||
### Option B: CLI-basierte Runner Registration
|
||||
|
||||
**Ansatz**: Runner über Gitea CLI registrieren statt Web UI
|
||||
|
||||
```bash
|
||||
# Auf Production Server
|
||||
ssh deploy@94.16.110.151
|
||||
docker exec gitea gitea admin actions generate-runner-token
|
||||
|
||||
# Token zurück zu Dev Machine kopieren
|
||||
# In .env eintragen
|
||||
```
|
||||
|
||||
### Option C: Manual Token Generation
|
||||
|
||||
**Ansatz**: Token direkt in Gitea Database generieren (nur als letzter Ausweg)
|
||||
|
||||
**WARNUNG**: Nur verwenden wenn alle anderen Optionen fehlschlagen
|
||||
|
||||
---
|
||||
|
||||
## Docker-in-Docker Architektur (Referenz)
|
||||
|
||||
### Services
|
||||
|
||||
**gitea-runner**:
|
||||
- Image: `gitea/act_runner:latest`
|
||||
- Purpose: Hauptrunner-Service
|
||||
- Volumes:
|
||||
- `./data:/data` (Runner-Daten)
|
||||
- `/var/run/docker.sock:/var/run/docker.sock` (Host Docker Socket)
|
||||
- `./config.yaml:/config.yaml:ro` (Konfiguration)
|
||||
- Environment: Variablen aus `.env` File
|
||||
- Network: `gitea-runner` Bridge Network
|
||||
|
||||
**docker-dind**:
|
||||
- Image: `docker:dind`
|
||||
- Purpose: Isolierte Docker-Daemon für Job-Execution
|
||||
- Privileged: `true` (benötigt für nested containerization)
|
||||
- TLS: `DOCKER_TLS_CERTDIR=/certs`
|
||||
- Volumes:
|
||||
- `docker-certs:/certs` (TLS Zertifikate)
|
||||
- `docker-data:/var/lib/docker` (Docker Layer Storage)
|
||||
- Command: `dockerd --host=unix:///var/run/docker.sock --host=tcp://0.0.0.0:2376 --tlsverify`
|
||||
|
||||
### Networks
|
||||
|
||||
**gitea-runner** Bridge Network:
|
||||
- Isoliert Runner-Infrastructure vom Host
|
||||
- Secure TLS Communication zwischen Services
|
||||
|
||||
### Volumes
|
||||
|
||||
- `docker-certs`: Shared TLS Certificates für runner ↔ dind
|
||||
- `docker-data`: Persistent Docker Layer Storage
|
||||
|
||||
---
|
||||
|
||||
## 8-Phasen Deployment Prozess (Übersicht)
|
||||
|
||||
### Phase 1: Gitea Runner Setup (Development Machine) - **⚠️ BLOCKIERT**
|
||||
**Status**: Kann nicht starten wegen 404 auf Admin Panel
|
||||
**Benötigt**: Gitea erreichbar und Actions enabled
|
||||
|
||||
### Phase 2: Ansible Vault Secrets Setup - **⏳ WARTET**
|
||||
**Status**: Kann nicht starten bis Phase 1 komplett
|
||||
**Tasks**:
|
||||
- Vault Password erstellen (`.vault_pass`)
|
||||
- `production.vault.yml` mit Secrets erstellen
|
||||
- Encryption Keys generieren
|
||||
- Vault File verschlüsseln
|
||||
|
||||
### Phase 3: Production Server Initial Setup - **⏳ KÖNNTE ZUERST AUSGEFÜHRT WERDEN**
|
||||
**Status**: Sollte möglicherweise VOR Phase 1 ausgeführt werden
|
||||
**Tasks**:
|
||||
- SSH zu Production Server
|
||||
- Deploy Infrastructure Stacks:
|
||||
1. Traefik (Reverse Proxy & SSL)
|
||||
2. PostgreSQL (Database)
|
||||
3. Docker Registry (Private Registry)
|
||||
4. **Gitea (Git Server + MySQL + Redis)** ← Benötigt für Phase 1!
|
||||
5. Monitoring (Portainer + Grafana + Prometheus)
|
||||
|
||||
### Phase 4: Application Secrets Deployment - **⏳ WARTET**
|
||||
**Status**: Wartet auf Phase 1-3
|
||||
**Tasks**: Secrets via Ansible zu Production deployen
|
||||
|
||||
### Phase 5: Gitea CI/CD Secrets Configuration - **✅ ERLEDIGT**
|
||||
**Status**: Wartet auf Phase 1-4
|
||||
**Tasks**: Repository Secrets in Gitea konfigurieren
|
||||
|
||||
### Phase 6: First Deployment Test - **⏳ WARTET**
|
||||
**Status**: Wartet auf Phase 1-5
|
||||
**Tasks**: CI/CD Pipeline triggern und testen
|
||||
|
||||
### Phase 7: Monitoring & Health Checks - **⏳ WARTET**
|
||||
**Status**: Wartet auf Phase 1-6
|
||||
**Tasks**: Monitoring Tools konfigurieren und Alerting einrichten
|
||||
|
||||
### Phase 8: Backup & Rollback Testing - **⏳ WARTET**
|
||||
**Status**: Wartet auf Phase 1-7
|
||||
**Tasks**: Backup-Mechanismus und Rollback testen
|
||||
|
||||
---
|
||||
|
||||
## Empfohlener Nächster Schritt
|
||||
|
||||
### ⭐ Option A: Phase 3 zuerst ausführen (Empfohlen)
|
||||
|
||||
**Begründung**:
|
||||
- Behebt die Grundursache (Gitea nicht deployed)
|
||||
- Folgt logischer Abhängigkeitskette
|
||||
- Erlaubt normalen Fortschritt durch alle Phasen
|
||||
|
||||
**Ablauf**:
|
||||
```bash
|
||||
# 1. SSH zu Production Server
|
||||
ssh deploy@94.16.110.151
|
||||
|
||||
# 2. Navigate zu stacks
|
||||
cd ~/deployment/stacks
|
||||
|
||||
# 3. Deploy Traefik
|
||||
cd traefik
|
||||
docker compose up -d
|
||||
docker compose logs -f # Warten auf "Configuration loaded"
|
||||
|
||||
# 4. Deploy PostgreSQL
|
||||
cd ../postgresql
|
||||
docker compose up -d
|
||||
docker compose logs -f # Warten auf "database system is ready"
|
||||
|
||||
# 5. Deploy Registry
|
||||
cd ../registry
|
||||
docker compose up -d
|
||||
docker compose logs -f # Warten auf "listening on [::]:5000"
|
||||
|
||||
# 6. Deploy Gitea ← KRITISCH für Phase 1
|
||||
cd ../gitea
|
||||
docker compose up -d
|
||||
docker compose logs -f # Warten auf "Listen: http://0.0.0.0:3000"
|
||||
|
||||
# 7. Deploy Monitoring
|
||||
cd ../monitoring
|
||||
docker compose up -d
|
||||
docker compose logs -f
|
||||
|
||||
# 8. Verify all stacks
|
||||
docker ps
|
||||
|
||||
# 9. Test Gitea Accessibility
|
||||
curl -I https://git.michaelschiemer.de
|
||||
```
|
||||
|
||||
**Nach Erfolg**:
|
||||
1. Gitea Web UI öffnen: `https://git.michaelschiemer.de`
|
||||
2. Initial Setup Wizard durchlaufen
|
||||
3. Admin Account erstellen
|
||||
4. Actions in Settings enablen
|
||||
5. **Zurück zu Phase 1**: Jetzt kann Admin Panel erreicht werden
|
||||
6. Registration Token holen
|
||||
7. `.env` komplettieren
|
||||
8. Runner registrieren und starten
|
||||
|
||||
---
|
||||
|
||||
## Technische Details
|
||||
|
||||
### Gitea Actions Architecture
|
||||
|
||||
**Components**:
|
||||
- **act_runner**: Gitea's self-hosted runner (basiert auf nektos/act)
|
||||
- **Docker-in-Docker**: Isolierte Job-Execution Environment
|
||||
- **TLS Communication**: Secure runner ↔ dind via certificates
|
||||
|
||||
**Runner Registration**:
|
||||
1. Generate Token in Gitea Admin Panel
|
||||
2. Add Token zu `.env`: `GITEA_RUNNER_REGISTRATION_TOKEN=<token>`
|
||||
3. Run `./register.sh` (registriert runner mit Gitea instance)
|
||||
4. Start services: `docker compose up -d`
|
||||
5. Verify in Gitea UI: Runner shows as "Idle" or "Active"
|
||||
|
||||
**Runner Labels**:
|
||||
Define welche Execution Environments unterstützt werden:
|
||||
```bash
|
||||
GITEA_RUNNER_LABELS=ubuntu-latest:docker://node:16-bullseye,ubuntu-22.04:docker://node:16-bullseye,debian-latest:docker://debian:bullseye
|
||||
```
|
||||
|
||||
Format: `label:docker://image`
|
||||
|
||||
---
|
||||
|
||||
## Dateireferenzen
|
||||
|
||||
### Wichtige Dateien
|
||||
|
||||
| Datei | Status | Beschreibung |
|
||||
|-------|--------|--------------|
|
||||
| `SETUP-GUIDE.md` | ✅ Vorhanden | Komplette 8-Phasen Deployment Anleitung (708 Zeilen) |
|
||||
| `deployment/gitea-runner/.env.example` | ✅ Vorhanden | Template für Runner Configuration (23 Zeilen) |
|
||||
| `deployment/gitea-runner/.env` | ✅ Erstellt | Active Configuration - **Token fehlt** |
|
||||
| `deployment/gitea-runner/docker-compose.yml` | ✅ Vorhanden | Two-Service Architecture Definition (47 Zeilen) |
|
||||
|
||||
### Code Snippets Location
|
||||
|
||||
**Runner Configuration** (`.env`):
|
||||
- Zeilen 1-23: Komplette Environment Variables Definition
|
||||
- Zeile 8: `GITEA_RUNNER_REGISTRATION_TOKEN=` ← **KRITISCH: LEER**
|
||||
|
||||
**Docker Compose** (`docker-compose.yml`):
|
||||
- Zeilen 4-20: `gitea-runner` Service Definition
|
||||
- Zeilen 23-34: `docker-dind` Service Definition
|
||||
- Zeilen 37-40: Network Configuration
|
||||
- Zeilen 43-47: Volume Definitions
|
||||
|
||||
**Setup Guide** (SETUP-GUIDE.md):
|
||||
- Zeilen 36-108: Phase 1 Komplette Anleitung
|
||||
- Zeilen 236-329: Phase 3 Infrastructure Deployment (inkl. Gitea)
|
||||
|
||||
---
|
||||
|
||||
## Support Kontakte
|
||||
|
||||
**Bei Problemen**:
|
||||
- Framework Issues: Siehe `docs/claude/troubleshooting.md`
|
||||
- Gitea Documentation: https://docs.gitea.io/
|
||||
- act_runner Documentation: https://docs.gitea.io/en-us/usage/actions/act-runner/
|
||||
|
||||
---
|
||||
|
||||
**Erstellt**: 2025-10-30
|
||||
**Letzte Änderung**: 2025-10-30
|
||||
**Status**: BLOCKED - Awaiting Gitea Deployment (Phase 3)
|
||||
@@ -162,7 +162,6 @@ Siehe `deployment/CI_CD_STATUS.md` für komplette Checkliste und Setup-Anleitung
|
||||
|
||||
**Dateien:**
|
||||
- `deployment/ansible/playbooks/rollback.yml` ✅ Vorhanden
|
||||
- `deployment/scripts/rollback.sh` ✅ Vorhanden
|
||||
- `deployment/stacks/postgresql/scripts/backup.sh` ✅ Vorhanden
|
||||
- `deployment/ansible/playbooks/backup.yml` ❌ Fehlt
|
||||
|
||||
@@ -174,25 +173,26 @@ Siehe `deployment/CI_CD_STATUS.md` für komplette Checkliste und Setup-Anleitung
|
||||
|
||||
---
|
||||
|
||||
### 6. Deployment Scripts finalisieren
|
||||
### 6. Deployment Automation (Erledigt ✅)
|
||||
|
||||
**Status**: ⚠️ Vorhanden, aber muss angepasst werden
|
||||
**Status**: ✅ Abgeschlossen
|
||||
|
||||
**Was fehlt:**
|
||||
- [ ] `deployment/scripts/deploy.sh` testen und anpassen
|
||||
- [ ] `deployment/scripts/rollback.sh` testen und anpassen
|
||||
- [ ] `deployment/scripts/setup-production.sh` finalisieren
|
||||
- [ ] Scripts für alle Stacks (nicht nur Application)
|
||||
**Was erledigt:**
|
||||
- [x] Alle Deployment-Operationen über Ansible Playbooks ✅
|
||||
- [x] Redundante Scripts entfernt ✅
|
||||
- [x] Dokumentation aktualisiert ✅
|
||||
|
||||
**Dateien:**
|
||||
- `deployment/scripts/deploy.sh` ✅ Vorhanden (aber Docker Swarm statt Compose?)
|
||||
- `deployment/scripts/rollback.sh` ✅ Vorhanden
|
||||
- `deployment/scripts/setup-production.sh` ✅ Vorhanden
|
||||
- `deployment/ansible/playbooks/deploy-update.yml` ✅ Vorhanden
|
||||
- `deployment/ansible/playbooks/rollback.yml` ✅ Vorhanden
|
||||
- `deployment/ansible/playbooks/sync-code.yml` ✅ Vorhanden
|
||||
- `deployment/DEPLOYMENT_COMMANDS.md` ✅ Command-Referenz erstellt
|
||||
|
||||
**Nächste Schritte:**
|
||||
1. Scripts prüfen und anpassen (Docker Compose statt Swarm)
|
||||
2. Scripts testen
|
||||
3. Integration mit Ansible Playbooks
|
||||
**Alle Deployment-Operationen werden jetzt direkt über Ansible durchgeführt:**
|
||||
```bash
|
||||
cd deployment/ansible
|
||||
ansible-playbook -i inventory/production.yml playbooks/<playbook>.yml
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
|
||||
151
deployment/DEPLOYMENT_COMMANDS.md
Normal file
151
deployment/DEPLOYMENT_COMMANDS.md
Normal file
@@ -0,0 +1,151 @@
|
||||
# Deployment Commands - Quick Reference
|
||||
|
||||
Alle Deployment-Operationen werden über **Ansible Playbooks** durchgeführt.
|
||||
|
||||
---
|
||||
|
||||
## 🚀 Häufig verwendete Commands
|
||||
|
||||
### Code deployen (Image-basiert)
|
||||
|
||||
```bash
|
||||
cd deployment/ansible
|
||||
ansible-playbook -i inventory/production.yml \
|
||||
playbooks/deploy-update.yml \
|
||||
-e "image_tag=abc1234-1696234567" \
|
||||
-e "git_commit_sha=$(git rev-parse HEAD)"
|
||||
```
|
||||
|
||||
### Code synchen (Git-basiert)
|
||||
|
||||
```bash
|
||||
cd deployment/ansible
|
||||
ansible-playbook -i inventory/production.yml \
|
||||
playbooks/sync-code.yml \
|
||||
-e "git_branch=main"
|
||||
```
|
||||
|
||||
### Rollback zu vorheriger Version
|
||||
|
||||
```bash
|
||||
cd deployment/ansible
|
||||
ansible-playbook -i inventory/production.yml \
|
||||
playbooks/rollback.yml
|
||||
```
|
||||
|
||||
### Infrastructure Setup (einmalig)
|
||||
|
||||
```bash
|
||||
cd deployment/ansible
|
||||
ansible-playbook -i inventory/production.yml \
|
||||
playbooks/setup-infrastructure.yml
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## 📋 Alle verfügbaren Playbooks
|
||||
|
||||
### Deployment & Updates
|
||||
|
||||
- **`playbooks/deploy-update.yml`** - Deployt neues Docker Image
|
||||
- **`playbooks/sync-code.yml`** - Synchronisiert Code aus Git Repository
|
||||
- **`playbooks/rollback.yml`** - Rollback zu vorheriger Version
|
||||
|
||||
### Infrastructure Setup
|
||||
|
||||
- **`playbooks/setup-infrastructure.yml`** - Deployed alle Stacks (Traefik, PostgreSQL, Registry, Gitea, Monitoring, Application)
|
||||
- **`playbooks/setup-production-secrets.yml`** - Deployed Secrets zu Production
|
||||
- **`playbooks/setup-ssl-certificates.yml`** - SSL Certificate Setup
|
||||
- **`playbooks/sync-stacks.yml`** - Synchronisiert Stack-Konfigurationen
|
||||
|
||||
### Troubleshooting & Maintenance
|
||||
|
||||
- **`playbooks/troubleshoot.yml`** - Unified Troubleshooting Playbook mit Tags
|
||||
```bash
|
||||
# Nur Diagnose
|
||||
ansible-playbook ... troubleshoot.yml --tags diagnose
|
||||
|
||||
# Health Check prüfen
|
||||
ansible-playbook ... troubleshoot.yml --tags health,check
|
||||
|
||||
# Health Checks fixen
|
||||
ansible-playbook ... troubleshoot.yml --tags health,fix
|
||||
|
||||
# Nginx 404 fixen
|
||||
ansible-playbook ... troubleshoot.yml --tags nginx,404,fix
|
||||
|
||||
# Alles ausführen
|
||||
ansible-playbook ... troubleshoot.yml --tags all
|
||||
```
|
||||
|
||||
### VPN
|
||||
|
||||
- **`playbooks/setup-wireguard.yml`** - WireGuard VPN Setup
|
||||
- **`playbooks/add-wireguard-client.yml`** - WireGuard Client hinzufügen
|
||||
|
||||
### CI/CD
|
||||
|
||||
- **`playbooks/setup-gitea-runner-ci.yml`** - Gitea Runner CI Setup
|
||||
|
||||
---
|
||||
|
||||
## 🔧 Ansible Variablen
|
||||
|
||||
### Häufig verwendete Extra Variables
|
||||
|
||||
```bash
|
||||
# Image Tag für Deployment
|
||||
-e "image_tag=abc1234-1696234567"
|
||||
|
||||
# Git Branch für Code Sync
|
||||
-e "git_branch=main"
|
||||
-e "git_repo_url=https://git.michaelschiemer.de/michael/michaelschiemer.git"
|
||||
|
||||
# Registry Credentials (wenn nicht im Vault)
|
||||
-e "docker_registry_username=admin"
|
||||
-e "docker_registry_password=secret"
|
||||
|
||||
# Dry Run (Check Mode)
|
||||
--check
|
||||
|
||||
# Verbose Output
|
||||
-v # oder -vv, -vvv für mehr Details
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## 📖 Vollständige Dokumentation
|
||||
|
||||
- **[README.md](README.md)** - Haupt-Dokumentation
|
||||
- **[QUICK_START.md](QUICK_START.md)** - Schnellstart-Guide
|
||||
- **[CODE_CHANGE_WORKFLOW.md](CODE_CHANGE_WORKFLOW.md)** - Codeänderungen workflow
|
||||
|
||||
---
|
||||
|
||||
## 💡 Tipps
|
||||
|
||||
### Vault Passwort setzen
|
||||
|
||||
```bash
|
||||
export ANSIBLE_VAULT_PASSWORD_FILE=~/.ansible/vault_pass
|
||||
# oder
|
||||
ansible-playbook ... --vault-password-file ~/.ansible/vault_pass
|
||||
```
|
||||
|
||||
### Nur bestimmte Tasks ausführen
|
||||
|
||||
```bash
|
||||
ansible-playbook ... --tags "deploy,restart"
|
||||
```
|
||||
|
||||
### Check Mode (Dry Run)
|
||||
|
||||
```bash
|
||||
ansible-playbook ... --check --diff
|
||||
```
|
||||
|
||||
### Inventory prüfen
|
||||
|
||||
```bash
|
||||
ansible -i inventory/production.yml production -m ping
|
||||
```
|
||||
@@ -1,163 +0,0 @@
|
||||
# Deployment Pre-Flight Check
|
||||
|
||||
**Bevor du Code pushen kannst, prüfe diese Checkliste!**
|
||||
|
||||
---
|
||||
|
||||
## ✅ Kritische Prüfungen
|
||||
|
||||
### 1. Application Stack muss deployed sein
|
||||
|
||||
**Warum kritisch:**
|
||||
- `deploy-update.yml` erwartet, dass `docker-compose.yml` bereits existiert
|
||||
- `.env` File muss vorhanden sein für Container-Konfiguration
|
||||
|
||||
**Prüfen:**
|
||||
```bash
|
||||
ssh deploy@94.16.110.151
|
||||
|
||||
# Prüfe docker-compose.yml
|
||||
test -f ~/deployment/stacks/application/docker-compose.yml && echo "✅ OK" || echo "❌ FEHLT"
|
||||
|
||||
# Prüfe .env
|
||||
test -f ~/deployment/stacks/application/.env && echo "✅ OK" || echo "❌ FEHLT"
|
||||
|
||||
# Prüfe Container
|
||||
cd ~/deployment/stacks/application
|
||||
docker compose ps
|
||||
```
|
||||
|
||||
**Falls fehlend:**
|
||||
```bash
|
||||
cd deployment/ansible
|
||||
ansible-playbook -i inventory/production.yml playbooks/setup-infrastructure.yml
|
||||
```
|
||||
|
||||
### 2. Docker Registry muss erreichbar sein
|
||||
|
||||
**Prüfen:**
|
||||
```bash
|
||||
# Vom Production-Server
|
||||
ssh deploy@94.16.110.151
|
||||
docker login git.michaelschiemer.de:5000 -u admin -p <password>
|
||||
|
||||
# Oder Test-Pull
|
||||
docker pull git.michaelschiemer.de:5000/framework:latest
|
||||
```
|
||||
|
||||
### 3. Gitea Runner muss laufen
|
||||
|
||||
**Prüfen:**
|
||||
```bash
|
||||
cd deployment/gitea-runner
|
||||
docker compose ps
|
||||
# Sollte zeigen: gitea-runner "Up"
|
||||
```
|
||||
|
||||
**In Gitea UI:**
|
||||
```
|
||||
https://git.michaelschiemer.de/admin/actions/runners
|
||||
```
|
||||
- Runner sollte als "Idle" oder "Active" angezeigt werden
|
||||
|
||||
### 4. Secrets müssen konfiguriert sein
|
||||
|
||||
**In Gitea:**
|
||||
```
|
||||
https://git.michaelschiemer.de/michael/michaelschiemer/settings/secrets/actions
|
||||
```
|
||||
|
||||
**Prüfen:**
|
||||
- [ ] `REGISTRY_USER` vorhanden
|
||||
- [ ] `REGISTRY_PASSWORD` vorhanden
|
||||
- [ ] `SSH_PRIVATE_KEY` vorhanden
|
||||
|
||||
### 5. SSH-Zugriff muss funktionieren
|
||||
|
||||
**Prüfen:**
|
||||
```bash
|
||||
# Test SSH-Verbindung
|
||||
ssh -i ~/.ssh/production deploy@94.16.110.151 "echo 'SSH OK'"
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## 🧪 Pre-Deployment Test
|
||||
|
||||
### Test 1: Ansible-Verbindung
|
||||
|
||||
```bash
|
||||
cd deployment/ansible
|
||||
ansible -i inventory/production.yml production -m ping
|
||||
# Sollte: production | SUCCESS
|
||||
```
|
||||
|
||||
### Test 2: Application Stack Status
|
||||
|
||||
```bash
|
||||
cd deployment/ansible
|
||||
ansible -i inventory/production.yml production -a "test -f ~/deployment/stacks/application/docker-compose.yml && echo 'OK' || echo 'MISSING'"
|
||||
# Sollte: "OK"
|
||||
```
|
||||
|
||||
### Test 3: Docker Registry Login (vom Runner aus)
|
||||
|
||||
```bash
|
||||
# Vom Development-Machine (wo Runner läuft)
|
||||
docker login git.michaelschiemer.de:5000 -u <registry-user> -p <registry-password>
|
||||
# Sollte: Login Succeeded
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## ⚠️ Häufige Probleme
|
||||
|
||||
### Problem: "Application Stack nicht deployed"
|
||||
|
||||
**Symptom:**
|
||||
- `docker-compose.yml not found` Fehler
|
||||
|
||||
**Lösung:**
|
||||
```bash
|
||||
cd deployment/ansible
|
||||
ansible-playbook -i inventory/production.yml playbooks/setup-infrastructure.yml
|
||||
```
|
||||
|
||||
### Problem: "Registry Login fehlschlägt"
|
||||
|
||||
**Symptom:**
|
||||
- `unauthorized: authentication required`
|
||||
|
||||
**Lösung:**
|
||||
1. Prüfe Secrets in Gitea
|
||||
2. Prüfe Registry-Credentials
|
||||
3. Teste manuell: `docker login git.michaelschiemer.de:5000`
|
||||
|
||||
### Problem: "SSH-Verbindung fehlschlägt"
|
||||
|
||||
**Symptom:**
|
||||
- Ansible kann nicht zum Server verbinden
|
||||
|
||||
**Lösung:**
|
||||
1. Prüfe SSH Key: `~/.ssh/production`
|
||||
2. Prüfe SSH Config
|
||||
3. Teste manuell: `ssh -i ~/.ssh/production deploy@94.16.110.151`
|
||||
|
||||
---
|
||||
|
||||
## ✅ Alles OK? Dann los!
|
||||
|
||||
```bash
|
||||
git add .
|
||||
git commit -m "feat: Add feature"
|
||||
git push origin main
|
||||
```
|
||||
|
||||
**Pipeline-Status:**
|
||||
```
|
||||
https://git.michaelschiemer.de/michael/michaelschiemer/actions
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
**Viel Erfolg!** 🚀
|
||||
@@ -1,2 +0,0 @@
|
||||
# Deployment test 2025-10-31T01:46:51Z
|
||||
# Fri Oct 31 02:56:51 AM CET 2025
|
||||
@@ -1,246 +0,0 @@
|
||||
# Finale Deployment Checklist - Code deployen
|
||||
|
||||
**Stand:** 2025-10-31
|
||||
**Status:** ✅ Bereit für Code-Deployments!
|
||||
|
||||
---
|
||||
|
||||
## ✅ Was ist bereits fertig?
|
||||
|
||||
### Infrastructure (100%)
|
||||
- ✅ Traefik (Reverse Proxy & SSL)
|
||||
- ✅ PostgreSQL (Database)
|
||||
- ✅ Docker Registry (Private Registry)
|
||||
- ✅ Gitea (Git Server)
|
||||
- ✅ Monitoring (Portainer, Grafana, Prometheus)
|
||||
- ✅ WireGuard VPN
|
||||
|
||||
### Application Stack (100%)
|
||||
- ✅ Integration in `setup-infrastructure.yml`
|
||||
- ✅ `.env` Template (`application.env.j2`)
|
||||
- ✅ Database-Migration nach Deployment
|
||||
- ✅ Health-Checks nach Deployment
|
||||
- ✅ `docker-compose.yml` wird automatisch kopiert
|
||||
- ✅ Nginx-Konfiguration wird automatisch kopiert
|
||||
|
||||
### CI/CD Pipeline (100%)
|
||||
- ✅ Workflows vorhanden (production-deploy.yml)
|
||||
- ✅ Gitea Runner läuft und ist registriert
|
||||
- ✅ Secrets konfiguriert (REGISTRY_USER, REGISTRY_PASSWORD, SSH_PRIVATE_KEY)
|
||||
- ✅ Ansible Playbooks vorhanden
|
||||
- ✅ Deployment-Playbook mit Pre-Flight Checks
|
||||
|
||||
### Dokumentation (100%)
|
||||
- ✅ Umfangreiche Guides vorhanden
|
||||
- ✅ Quick Start Guide
|
||||
- ✅ Deployment-Dokumentation
|
||||
- ✅ Troubleshooting-Guides
|
||||
|
||||
---
|
||||
|
||||
## 🚀 Code deployen - So geht's!
|
||||
|
||||
### Einfachste Methode
|
||||
|
||||
```bash
|
||||
# 1. Code ändern
|
||||
# ... Dateien bearbeiten ...
|
||||
|
||||
# 2. Committen
|
||||
git add .
|
||||
git commit -m "feat: Add new feature"
|
||||
|
||||
# 3. Pushen → Automatisches Deployment!
|
||||
git push origin main
|
||||
```
|
||||
|
||||
**Pipeline-Status:** `https://git.michaelschiemer.de/michael/michaelschiemer/actions`
|
||||
|
||||
**Zeit:** ~8-15 Minuten
|
||||
|
||||
---
|
||||
|
||||
## ⚠️ Wichtiger Hinweis: Erstmalige Deployment
|
||||
|
||||
**Falls Application Stack noch nicht deployed ist:**
|
||||
|
||||
Der `deploy-update.yml` Playbook prüft automatisch, ob `docker-compose.yml` existiert. Falls nicht, gibt es eine klare Fehlermeldung.
|
||||
|
||||
**Vor dem ersten Code-Push (falls Stack noch nicht deployed):**
|
||||
```bash
|
||||
cd deployment/ansible
|
||||
ansible-playbook -i inventory/production.yml playbooks/setup-infrastructure.yml
|
||||
```
|
||||
|
||||
Dieses Playbook:
|
||||
- ✅ Deployed alle Infrastructure Stacks
|
||||
- ✅ **Deployed Application Stack** (inkl. docker-compose.yml, .env, nginx config)
|
||||
- ✅ Führt Database-Migration aus
|
||||
- ✅ Prüft Health-Checks
|
||||
|
||||
**Nach diesem Setup:** Ab jetzt funktioniert `git push origin main` automatisch!
|
||||
|
||||
---
|
||||
|
||||
## 📋 Pre-Deployment Check
|
||||
|
||||
### Automatische Checks
|
||||
|
||||
Das `deploy-update.yml` Playbook prüft automatisch:
|
||||
- ✅ Docker Service läuft
|
||||
- ✅ Application Stack Verzeichnis existiert
|
||||
- ✅ `docker-compose.yml` existiert (mit klarer Fehlermeldung falls nicht)
|
||||
- ✅ Backup-Verzeichnis kann erstellt werden
|
||||
|
||||
### Manuelle Checks (Optional)
|
||||
|
||||
**Application Stack Status prüfen:**
|
||||
```bash
|
||||
ssh deploy@94.16.110.151
|
||||
|
||||
# Prüfe docker-compose.yml
|
||||
test -f ~/deployment/stacks/application/docker-compose.yml && echo "✅ OK" || echo "❌ Fehlt - Führe setup-infrastructure.yml aus"
|
||||
|
||||
# Prüfe .env
|
||||
test -f ~/deployment/stacks/application/.env && echo "✅ OK" || echo "❌ Fehlt - Führe setup-infrastructure.yml aus"
|
||||
|
||||
# Prüfe Container
|
||||
cd ~/deployment/stacks/application
|
||||
docker compose ps
|
||||
```
|
||||
|
||||
**Gitea Runner Status:**
|
||||
```bash
|
||||
cd deployment/gitea-runner
|
||||
docker compose ps
|
||||
# Sollte zeigen: gitea-runner "Up"
|
||||
```
|
||||
|
||||
**Secrets prüfen:**
|
||||
```
|
||||
https://git.michaelschiemer.de/michael/michaelschiemer/settings/secrets/actions
|
||||
```
|
||||
- REGISTRY_USER ✅
|
||||
- REGISTRY_PASSWORD ✅
|
||||
- SSH_PRIVATE_KEY ✅
|
||||
|
||||
---
|
||||
|
||||
## 🔍 Was passiert beim Deployment?
|
||||
|
||||
### Automatischer Ablauf
|
||||
|
||||
**1. CI/CD Pipeline startet** (bei Push zu `main`)
|
||||
- Tests (~2-5 Min)
|
||||
- Build (~3-5 Min)
|
||||
- Push zur Registry (~1-2 Min)
|
||||
|
||||
**2. Ansible Deployment** (~2-4 Min)
|
||||
- Pre-Flight Checks (Docker läuft, docker-compose.yml existiert)
|
||||
- Backup erstellen
|
||||
- Registry Login
|
||||
- Neues Image pullen
|
||||
- docker-compose.yml aktualisieren (Image-Tag ersetzen)
|
||||
- Stack neu starten (`--force-recreate`)
|
||||
- Health-Checks warten
|
||||
|
||||
**3. Health-Check** (~1 Min)
|
||||
- Application Health-Check
|
||||
- Bei Fehler: Automatischer Rollback
|
||||
|
||||
**Gesamtzeit:** ~8-15 Minuten
|
||||
|
||||
---
|
||||
|
||||
## ✅ Erfolgreiches Deployment erkennen
|
||||
|
||||
### In Gitea Actions
|
||||
|
||||
```
|
||||
https://git.michaelschiemer.de/michael/michaelschiemer/actions
|
||||
```
|
||||
|
||||
**Erfolg:**
|
||||
- 🟢 Alle Jobs grün
|
||||
- ✅ "Deploy via Ansible" erfolgreich
|
||||
- ✅ Health-Check erfolgreich
|
||||
|
||||
### Auf Production-Server
|
||||
|
||||
```bash
|
||||
# Container-Status
|
||||
ssh deploy@94.16.110.151 "cd ~/deployment/stacks/application && docker compose ps"
|
||||
|
||||
# Application Health-Check
|
||||
curl https://michaelschiemer.de/health
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## 🆘 Troubleshooting
|
||||
|
||||
### Problem: "docker-compose.yml not found"
|
||||
|
||||
**Fehlermeldung:**
|
||||
```
|
||||
Application Stack docker-compose.yml not found at /home/deploy/deployment/stacks/application/docker-compose.yml
|
||||
|
||||
The Application Stack must be deployed first via:
|
||||
ansible-playbook -i inventory/production.yml playbooks/setup-infrastructure.yml
|
||||
```
|
||||
|
||||
**Lösung:**
|
||||
```bash
|
||||
cd deployment/ansible
|
||||
ansible-playbook -i inventory/production.yml playbooks/setup-infrastructure.yml
|
||||
```
|
||||
|
||||
### Problem: "Failed to pull image"
|
||||
|
||||
**Lösung:**
|
||||
1. Prüfe Registry-Credentials in Gitea Secrets
|
||||
2. Teste manuell: `docker login git.michaelschiemer.de:5000`
|
||||
3. Prüfe ob Image in Registry vorhanden ist
|
||||
|
||||
### Problem: "Health-Check failed"
|
||||
|
||||
**Lösung:**
|
||||
- Automatischer Rollback wird ausgeführt
|
||||
- Logs prüfen: `docker compose logs app`
|
||||
- Manueller Rollback: `ansible-playbook -i inventory/production.yml playbooks/rollback.yml`
|
||||
|
||||
---
|
||||
|
||||
## 📚 Weitere Dokumentation
|
||||
|
||||
- **[QUICK_START.md](QUICK_START.md)** - Schnellstart
|
||||
- **[CODE_CHANGE_WORKFLOW.md](CODE_CHANGE_WORKFLOW.md)** - Codeänderungen pushen
|
||||
- **[APPLICATION_STACK_DEPLOYMENT.md](APPLICATION_STACK_DEPLOYMENT.md)** - Deployment-Details
|
||||
- **[DEPLOYMENT_PREFLIGHT_CHECK.md](DEPLOYMENT_PREFLIGHT_CHECK.md)** - Pre-Flight Checks
|
||||
- **[CI_CD_STATUS.md](CI_CD_STATUS.md)** - CI/CD Status
|
||||
|
||||
---
|
||||
|
||||
## 🎉 Ready to Deploy!
|
||||
|
||||
**Alles ist bereit!**
|
||||
|
||||
**Nächster Schritt:**
|
||||
1. **Prüfe ob Application Stack deployed ist** (siehe Pre-Deployment Check oben)
|
||||
2. **Falls nicht:** `setup-infrastructure.yml` ausführen
|
||||
3. **Dann:** Code pushen und Deployment genießen! 🚀
|
||||
|
||||
```bash
|
||||
git add .
|
||||
git commit -m "feat: Add feature"
|
||||
git push origin main
|
||||
```
|
||||
|
||||
**Pipeline-Status:**
|
||||
```
|
||||
https://git.michaelschiemer.de/michael/michaelschiemer/actions
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
**Viel Erfolg beim ersten Deployment!** 🎉
|
||||
239
deployment/GIT_DEPLOYMENT_TEST.md
Normal file
239
deployment/GIT_DEPLOYMENT_TEST.md
Normal file
@@ -0,0 +1,239 @@
|
||||
# Git-basiertes Deployment - Test Plan
|
||||
|
||||
**Datum:** 2025-01-31
|
||||
**Status:** ⏳ Ready to Test
|
||||
|
||||
---
|
||||
|
||||
## 🎯 Ziel
|
||||
|
||||
Testen, ob Container automatisch Code aus Git-Repository klont/pullt beim Start.
|
||||
|
||||
---
|
||||
|
||||
## ✅ Vorbereitung
|
||||
|
||||
### 1. Prüfen ob alles korrekt konfiguriert ist
|
||||
|
||||
#### Docker Entrypoint
|
||||
- ✅ `docker/entrypoint.sh` - Git Clone/Pull implementiert
|
||||
- ✅ `Dockerfile.production` - Git + Composer installiert
|
||||
|
||||
#### Docker Compose
|
||||
- ✅ `deployment/stacks/application/docker-compose.yml` - Git Environment Variables vorhanden
|
||||
|
||||
#### Ansible Playbook
|
||||
- ✅ `deployment/ansible/playbooks/sync-code.yml` - Erstellt
|
||||
|
||||
---
|
||||
|
||||
## 🧪 Test-Plan
|
||||
|
||||
### Test 1: Git-Variablen in .env setzen
|
||||
|
||||
**Ziel:** Prüfen ob Git-Konfiguration in .env gesetzt werden kann
|
||||
|
||||
```bash
|
||||
# SSH zum Production-Server
|
||||
ssh deploy@94.16.110.151
|
||||
|
||||
# Prüfen ob .env existiert
|
||||
cd ~/deployment/stacks/application
|
||||
test -f .env && echo "✅ OK" || echo "❌ Fehlt"
|
||||
|
||||
# Git-Variablen manuell hinzufügen (für Test)
|
||||
cat >> .env << 'EOF'
|
||||
|
||||
# Git Repository Configuration
|
||||
GIT_REPOSITORY_URL=https://git.michaelschiemer.de/michael/michaelschiemer.git
|
||||
GIT_BRANCH=main
|
||||
GIT_TOKEN=
|
||||
EOF
|
||||
```
|
||||
|
||||
**Erwartetes Ergebnis:**
|
||||
- ✅ .env existiert
|
||||
- ✅ Git-Variablen können hinzugefügt werden
|
||||
|
||||
---
|
||||
|
||||
### Test 2: Container mit Git-Variablen starten
|
||||
|
||||
**Ziel:** Prüfen ob Container Git Clone/Pull beim Start ausführt
|
||||
|
||||
```bash
|
||||
# Auf Production-Server
|
||||
cd ~/deployment/stacks/application
|
||||
|
||||
# Container neu starten
|
||||
docker compose restart app
|
||||
|
||||
# Logs prüfen (sollte Git Clone/Pull zeigen)
|
||||
docker logs app --tail 100 | grep -E "(Git|Clone|Pull|✅|❌)"
|
||||
```
|
||||
|
||||
**Erwartetes Ergebnis:**
|
||||
- ✅ Logs zeigen "📥 Cloning/Pulling code from Git repository..."
|
||||
- ✅ Logs zeigen "📥 Cloning repository from ..." oder "🔄 Pulling latest changes..."
|
||||
- ✅ Logs zeigen "✅ Git sync completed"
|
||||
|
||||
**Falls Fehler:**
|
||||
- ❌ Logs zeigen Fehler → Troubleshooting nötig
|
||||
- ❌ Keine Git-Logs → Entrypoint nicht korrekt oder Git-Variablen nicht gesetzt
|
||||
|
||||
---
|
||||
|
||||
### Test 3: Code-Verifikation im Container
|
||||
|
||||
**Ziel:** Prüfen ob Code tatsächlich im Container ist
|
||||
|
||||
```bash
|
||||
# Auf Production-Server
|
||||
docker exec app ls -la /var/www/html/ | head -20
|
||||
docker exec app test -f /var/www/html/composer.json && echo "✅ composer.json vorhanden" || echo "❌ Fehlt"
|
||||
docker exec app test -d /var/www/html/src && echo "✅ src/ vorhanden" || echo "❌ Fehlt"
|
||||
docker exec app test -d /var/www/html/.git && echo "✅ .git vorhanden" || echo "❌ Fehlt"
|
||||
```
|
||||
|
||||
**Erwartetes Ergebnis:**
|
||||
- ✅ Dateien sind im Container
|
||||
- ✅ `.git` Verzeichnis existiert (zeigt dass Git Clone funktioniert hat)
|
||||
|
||||
---
|
||||
|
||||
### Test 4: Code-Update testen (Git Pull)
|
||||
|
||||
**Ziel:** Prüfen ob `sync-code.yml` Playbook funktioniert
|
||||
|
||||
```bash
|
||||
# Lokal (auf Dev-Machine)
|
||||
cd deployment/ansible
|
||||
|
||||
# Sync-Code Playbook ausführen
|
||||
ansible-playbook -i inventory/production.yml \
|
||||
playbooks/sync-code.yml \
|
||||
-e "git_branch=main"
|
||||
|
||||
# Container-Logs prüfen (auf Production-Server)
|
||||
ssh deploy@94.16.110.151
|
||||
docker logs app --tail 50 | grep -E "(Git|Pull|✅)"
|
||||
```
|
||||
|
||||
**Erwartetes Ergebnis:**
|
||||
- ✅ Playbook führt erfolgreich aus
|
||||
- ✅ Container wird neu gestartet
|
||||
- ✅ Logs zeigen "🔄 Pulling latest changes..."
|
||||
- ✅ Code wird aktualisiert
|
||||
|
||||
---
|
||||
|
||||
### Test 5: Application Health Check
|
||||
|
||||
**Ziel:** Prüfen ob Application nach Git-Sync noch funktioniert
|
||||
|
||||
```bash
|
||||
# Health Check
|
||||
curl -f https://michaelschiemer.de/health || echo "❌ Health Check fehlgeschlagen"
|
||||
|
||||
# Application Test
|
||||
curl -f https://michaelschiemer.de/ || echo "❌ Application fehlgeschlagen"
|
||||
```
|
||||
|
||||
**Erwartetes Ergebnis:**
|
||||
- ✅ Health Check erfolgreich
|
||||
- ✅ Application läuft
|
||||
|
||||
---
|
||||
|
||||
## 🔧 Troubleshooting
|
||||
|
||||
### Problem: Container zeigt keine Git-Logs
|
||||
|
||||
**Mögliche Ursachen:**
|
||||
1. `GIT_REPOSITORY_URL` nicht in .env gesetzt
|
||||
2. Entrypoint Script nicht korrekt
|
||||
3. Git nicht im Container installiert
|
||||
|
||||
**Lösung:**
|
||||
```bash
|
||||
# Prüfen .env
|
||||
cd ~/deployment/stacks/application
|
||||
grep GIT_REPOSITORY_URL .env
|
||||
|
||||
# Prüfen Entrypoint
|
||||
docker exec app cat /usr/local/bin/entrypoint.sh | grep -A 10 "GIT_REPOSITORY_URL"
|
||||
|
||||
# Prüfen Git Installation
|
||||
docker exec app which git
|
||||
docker exec app git --version
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
### Problem: Git Clone fehlgeschlagen
|
||||
|
||||
**Mögliche Ursachen:**
|
||||
1. Repository nicht erreichbar vom Server
|
||||
2. Falsche Credentials
|
||||
3. Branch nicht existiert
|
||||
|
||||
**Lösung:**
|
||||
```bash
|
||||
# Prüfen Repository-Zugriff
|
||||
docker exec app git clone --branch main --depth 1 https://git.michaelschiemer.de/michael/michaelschiemer.git /tmp/test-clone
|
||||
|
||||
# Logs prüfen
|
||||
docker logs app --tail 100 | grep -i "error\|fail"
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
### Problem: Composer Install fehlgeschlagen
|
||||
|
||||
**Mögliche Ursachen:**
|
||||
1. Composer nicht installiert
|
||||
2. Network-Probleme beim Dependency-Download
|
||||
|
||||
**Lösung:**
|
||||
```bash
|
||||
# Composer prüfen
|
||||
docker exec app which composer
|
||||
docker exec app composer --version
|
||||
|
||||
# Manuell testen
|
||||
docker exec app sh -c "cd /var/www/html && composer install --no-dev --optimize-autoloader --no-interaction"
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## 📋 Checkliste für Test
|
||||
|
||||
### Vor Test:
|
||||
- [ ] Git-Repository ist erreichbar vom Production-Server
|
||||
- [ ] Git-Credentials sind verfügbar (falls private Repository)
|
||||
- [ ] .env Datei existiert auf Production-Server
|
||||
- [ ] Container-Image wurde neu gebaut (mit Git + Composer)
|
||||
|
||||
### Während Test:
|
||||
- [ ] Test 1: Git-Variablen in .env setzen
|
||||
- [ ] Test 2: Container mit Git-Variablen starten
|
||||
- [ ] Test 3: Code-Verifikation im Container
|
||||
- [ ] Test 4: Code-Update testen (Git Pull)
|
||||
- [ ] Test 5: Application Health Check
|
||||
|
||||
### Nach Test:
|
||||
- [ ] Alle Tests erfolgreich
|
||||
- [ ] Application läuft korrekt
|
||||
- [ ] Code ist aktuell
|
||||
|
||||
---
|
||||
|
||||
## 🚀 Nächste Schritte nach erfolgreichem Test
|
||||
|
||||
1. ✅ Git-basiertes Deployment dokumentieren
|
||||
2. ✅ In CI/CD Pipeline integrieren (optional)
|
||||
3. ✅ Dokumentation aktualisieren
|
||||
|
||||
---
|
||||
|
||||
**Bereit zum Testen!** 🧪
|
||||
@@ -1,155 +0,0 @@
|
||||
# Native Workflow ohne GitHub Actions
|
||||
|
||||
## Problem
|
||||
|
||||
Der aktuelle Workflow (`production-deploy.yml`) verwendet GitHub Actions wie:
|
||||
- `actions/checkout@v4`
|
||||
- `shivammathur/setup-php@v2`
|
||||
- `actions/cache@v3`
|
||||
- `docker/setup-buildx-action@v3`
|
||||
- `docker/build-push-action@v5`
|
||||
|
||||
Diese Actions müssen von GitHub geladen werden, was zu Abbrüchen führen kann wenn:
|
||||
- GitHub nicht erreichbar ist
|
||||
- Actions nicht geladen werden können
|
||||
- Timeouts auftreten
|
||||
|
||||
## Lösung: Native Workflow
|
||||
|
||||
Die Datei `.gitea/workflows/production-deploy-native.yml` verwendet **nur Shell-Commands** und keine GitHub Actions:
|
||||
|
||||
### Vorteile
|
||||
|
||||
1. **Keine GitHub-Abhängigkeit**: Funktioniert komplett offline
|
||||
2. **Schneller**: Keine Action-Downloads
|
||||
3. **Weniger Fehlerquellen**: Direkte Shell-Commands statt Actions
|
||||
4. **Einfacher zu debuggen**: Standard-Bash-Scripts
|
||||
|
||||
### Änderungen
|
||||
|
||||
#### 1. Checkout
|
||||
**Vorher:**
|
||||
```yaml
|
||||
- uses: actions/checkout@v4
|
||||
```
|
||||
|
||||
**Nachher:**
|
||||
```bash
|
||||
git clone --depth 1 --branch "$REF_NAME" \
|
||||
"https://git.michaelschiemer.de/${{ github.repository }}.git" \
|
||||
/workspace/repo
|
||||
```
|
||||
|
||||
#### 2. PHP Setup
|
||||
**Vorher:**
|
||||
```yaml
|
||||
- uses: shivammathur/setup-php@v2
|
||||
with:
|
||||
php-version: '8.3'
|
||||
```
|
||||
|
||||
**Nachher:**
|
||||
```bash
|
||||
apt-get update
|
||||
apt-get install -y php8.3 php8.3-cli php8.3-mbstring ...
|
||||
```
|
||||
|
||||
#### 3. Cache
|
||||
**Vorher:**
|
||||
```yaml
|
||||
- uses: actions/cache@v3
|
||||
```
|
||||
|
||||
**Nachher:**
|
||||
```bash
|
||||
# Einfaches Datei-basiertes Caching
|
||||
if [ -d "/tmp/composer-cache/vendor" ]; then
|
||||
cp -r /tmp/composer-cache/vendor /workspace/repo/vendor
|
||||
fi
|
||||
```
|
||||
|
||||
#### 4. Docker Buildx
|
||||
**Vorher:**
|
||||
```yaml
|
||||
- uses: docker/setup-buildx-action@v3
|
||||
```
|
||||
|
||||
**Nachher:**
|
||||
```bash
|
||||
docker buildx create --name builder --use || docker buildx use builder
|
||||
docker buildx inspect --bootstrap
|
||||
```
|
||||
|
||||
#### 5. Docker Build/Push
|
||||
**Vorher:**
|
||||
```yaml
|
||||
- uses: docker/build-push-action@v5
|
||||
```
|
||||
|
||||
**Nachher:**
|
||||
```bash
|
||||
docker buildx build \
|
||||
--file ./Dockerfile.production \
|
||||
--tag $REGISTRY/$IMAGE_NAME:latest \
|
||||
--push \
|
||||
.
|
||||
```
|
||||
|
||||
## Verwendung
|
||||
|
||||
### Option 1: Native Workflow aktivieren
|
||||
|
||||
1. **Benenne um:**
|
||||
```bash
|
||||
mv .gitea/workflows/production-deploy.yml .gitea/workflows/production-deploy-with-actions.yml.bak
|
||||
mv .gitea/workflows/production-deploy-native.yml .gitea/workflows/production-deploy.yml
|
||||
```
|
||||
|
||||
2. **Commite und pushe:**
|
||||
```bash
|
||||
git add .gitea/workflows/production-deploy.yml
|
||||
git commit -m "chore: switch to native workflow without GitHub Actions"
|
||||
git push origin main
|
||||
```
|
||||
|
||||
### Option 2: Beide parallel testen
|
||||
|
||||
Lass beide Workflows parallel laufen:
|
||||
- `production-deploy.yml` - Mit Actions (aktuell)
|
||||
- `production-deploy-native.yml` - Native (neue Version)
|
||||
|
||||
## Gitea Actions Konfiguration
|
||||
|
||||
**Wichtig:** Wenn wir die native Version verwenden, brauchen wir `DEFAULT_ACTIONS_URL` **nicht mehr** in der Gitea-Konfiguration.
|
||||
|
||||
Aber es schadet auch nicht, es drin zu lassen für zukünftige Workflows.
|
||||
|
||||
## Debugging
|
||||
|
||||
Wenn der native Workflow nicht funktioniert:
|
||||
|
||||
1. **Prüfe Git Clone:**
|
||||
```bash
|
||||
# Im Runner Container
|
||||
git clone --depth 1 https://git.michaelschiemer.de/michael/michaelschiemer.git /tmp/test
|
||||
```
|
||||
|
||||
2. **Prüfe Docker Buildx:**
|
||||
```bash
|
||||
docker buildx version
|
||||
docker buildx ls
|
||||
```
|
||||
|
||||
3. **Prüfe PHP Installation:**
|
||||
```bash
|
||||
php --version
|
||||
php -m # Zeigt installierte Module
|
||||
```
|
||||
|
||||
## Empfehlung
|
||||
|
||||
**Für Stabilität:** Verwende die native Version (`production-deploy-native.yml`)
|
||||
|
||||
**Für Kompatibilität:** Bleib bei der Actions-Version (`production-deploy.yml`)
|
||||
|
||||
Die native Version sollte stabiler sein, da sie keine externen Dependencies benötigt.
|
||||
45
deployment/QUICK_TEST.md
Normal file
45
deployment/QUICK_TEST.md
Normal file
@@ -0,0 +1,45 @@
|
||||
# Quick Git Deployment Test
|
||||
|
||||
## ✅ Verifikation ohne Container
|
||||
|
||||
Alle Dateien sind korrekt konfiguriert:
|
||||
|
||||
### 1. Entrypoint Script (`docker/entrypoint.sh`)
|
||||
- ✅ `GIT_REPOSITORY_URL` Check vorhanden (Zeile 30)
|
||||
- ✅ Git Clone/Pull Funktionalität implementiert
|
||||
- ✅ Composer Install integriert
|
||||
|
||||
### 2. Docker Compose (`deployment/stacks/application/docker-compose.yml`)
|
||||
- ✅ `GIT_REPOSITORY_URL` Environment Variable vorhanden (Zeile 17)
|
||||
- ✅ `GIT_BRANCH`, `GIT_TOKEN`, `GIT_USERNAME`, `GIT_PASSWORD` vorhanden
|
||||
|
||||
### 3. Ansible Template (`deployment/ansible/templates/application.env.j2`)
|
||||
- ✅ Git-Variablen im Template definiert
|
||||
|
||||
### 4. Dockerfile (`Dockerfile.production`)
|
||||
- ✅ Git installiert
|
||||
- ✅ Composer installiert
|
||||
- ✅ Entrypoint kopiert
|
||||
|
||||
---
|
||||
|
||||
## 🧪 Schneller Test (ohne Container-Start)
|
||||
|
||||
```bash
|
||||
# Prüfen ob GIT_REPOSITORY_URL überall vorhanden ist
|
||||
grep -c "GIT_REPOSITORY_URL" docker/entrypoint.sh
|
||||
grep -c "GIT_REPOSITORY_URL" deployment/stacks/application/docker-compose.yml
|
||||
grep -c "GIT_REPOSITORY_URL" deployment/ansible/templates/application.env.j2
|
||||
```
|
||||
|
||||
**Erwartetes Ergebnis:** Alle sollten > 0 zurückgeben
|
||||
|
||||
---
|
||||
|
||||
## 🚀 Nächste Schritte zum Testen
|
||||
|
||||
1. **Image ist bereits gebaut** ✅
|
||||
2. **Git-Variablen in .env setzen** (auf Production Server)
|
||||
3. **Container neu starten** und Logs prüfen
|
||||
|
||||
Siehe `TEST_GIT_DEPLOYMENT.md` für detaillierte Anleitung.
|
||||
@@ -54,8 +54,6 @@ deployment/
|
||||
│ ├── playbooks/ # Deployment automation
|
||||
│ ├── inventory/ # Server inventory
|
||||
│ └── secrets/ # Ansible Vault secrets
|
||||
├── runner/ # Gitea Actions runner (dev machine)
|
||||
├── scripts/ # Helper scripts
|
||||
└── docs/ # Deployment documentation
|
||||
```
|
||||
|
||||
@@ -120,16 +118,31 @@ Each stack has its own README with detailed configuration:
|
||||
|
||||
## Deployment Commands
|
||||
|
||||
### Manual Deployment
|
||||
### Code deployen (Image-basiert)
|
||||
```bash
|
||||
./scripts/deploy.sh
|
||||
cd deployment/ansible
|
||||
ansible-playbook -i inventory/production.yml \
|
||||
playbooks/deploy-update.yml \
|
||||
-e "image_tag=abc1234-1696234567"
|
||||
```
|
||||
|
||||
### Rollback to Previous Version
|
||||
### Code synchen (Git-basiert)
|
||||
```bash
|
||||
./scripts/rollback.sh
|
||||
cd deployment/ansible
|
||||
ansible-playbook -i inventory/production.yml \
|
||||
playbooks/sync-code.yml \
|
||||
-e "git_branch=main"
|
||||
```
|
||||
|
||||
### Rollback zu vorheriger Version
|
||||
```bash
|
||||
cd deployment/ansible
|
||||
ansible-playbook -i inventory/production.yml \
|
||||
playbooks/rollback.yml
|
||||
```
|
||||
|
||||
**📖 Vollständige Command-Referenz:** Siehe [DEPLOYMENT_COMMANDS.md](DEPLOYMENT_COMMANDS.md)
|
||||
|
||||
### Update Specific Stack
|
||||
```bash
|
||||
cd stacks/<stack-name>
|
||||
|
||||
@@ -1,257 +0,0 @@
|
||||
# ✅ Ready to Deploy - Checklist
|
||||
|
||||
**Stand:** 2025-10-31
|
||||
**Status:** ✅ Bereit für Code-Deployments!
|
||||
|
||||
---
|
||||
|
||||
## ✅ Vollständig konfiguriert
|
||||
|
||||
### Infrastructure
|
||||
- ✅ Traefik (Reverse Proxy & SSL)
|
||||
- ✅ PostgreSQL (Database)
|
||||
- ✅ Docker Registry (Private Registry)
|
||||
- ✅ Gitea (Git Server)
|
||||
- ✅ Monitoring (Portainer, Grafana, Prometheus)
|
||||
- ✅ WireGuard VPN
|
||||
|
||||
### Application Stack
|
||||
- ✅ Integration in `setup-infrastructure.yml`
|
||||
- ✅ `.env` Template (`application.env.j2`)
|
||||
- ✅ Database-Migration nach Deployment
|
||||
- ✅ Health-Checks nach Deployment
|
||||
|
||||
### CI/CD Pipeline
|
||||
- ✅ Workflows vorhanden (production-deploy.yml)
|
||||
- ✅ Gitea Runner läuft und ist registriert
|
||||
- ✅ Secrets konfiguriert (REGISTRY_USER, REGISTRY_PASSWORD, SSH_PRIVATE_KEY)
|
||||
- ✅ Ansible Playbooks vorhanden
|
||||
|
||||
### Dokumentation
|
||||
- ✅ Umfangreiche Guides vorhanden
|
||||
- ✅ Quick Start Guide
|
||||
- ✅ Deployment-Dokumentation
|
||||
|
||||
---
|
||||
|
||||
## 🚀 Code deployen - So geht's
|
||||
|
||||
### Einfachste Methode
|
||||
|
||||
```bash
|
||||
# 1. Code ändern
|
||||
# ... Dateien bearbeiten ...
|
||||
|
||||
# 2. Committen
|
||||
git add .
|
||||
git commit -m "feat: Add new feature"
|
||||
|
||||
# 3. Pushen → Automatisches Deployment!
|
||||
git push origin main
|
||||
```
|
||||
|
||||
**Pipeline-Status:** `https://git.michaelschiemer.de/michael/michaelschiemer/actions`
|
||||
|
||||
---
|
||||
|
||||
## ⚠️ Wichtiger Hinweis: Erstmalige Deployment
|
||||
|
||||
**Wenn Application Stack noch nicht deployed ist:**
|
||||
|
||||
Der `deploy-update.yml` Playbook erwartet, dass der Application Stack bereits existiert.
|
||||
|
||||
**Vor dem ersten Code-Push:**
|
||||
```bash
|
||||
cd deployment/ansible
|
||||
ansible-playbook -i inventory/production.yml playbooks/setup-infrastructure.yml
|
||||
```
|
||||
|
||||
Dieses Playbook deployed:
|
||||
- Alle Infrastructure Stacks (Traefik, PostgreSQL, Registry, Gitea, Monitoring)
|
||||
- **Application Stack** (mit docker-compose.yml und .env)
|
||||
|
||||
**Nach diesem Setup:** Ab jetzt funktioniert `git push origin main` automatisch!
|
||||
|
||||
---
|
||||
|
||||
## 📋 Pre-Deployment Checklist
|
||||
|
||||
### ✅ Alles sollte bereits erledigt sein, aber hier zur Sicherheit:
|
||||
|
||||
- [x] Infrastructure Stacks deployed ✅
|
||||
- [ ] **Application Stack deployed** ⚠️ Prüfen!
|
||||
- [x] Gitea Runner läuft ✅
|
||||
- [x] Secrets konfiguriert ✅
|
||||
- [x] Workflows vorhanden ✅
|
||||
|
||||
### Application Stack Deployment prüfen
|
||||
|
||||
```bash
|
||||
# SSH zum Production-Server
|
||||
ssh deploy@94.16.110.151
|
||||
|
||||
# Prüfe ob Application Stack existiert
|
||||
test -f ~/deployment/stacks/application/docker-compose.yml && echo "✅ Vorhanden" || echo "❌ Fehlt"
|
||||
|
||||
# Prüfe ob .env existiert
|
||||
test -f ~/deployment/stacks/application/.env && echo "✅ Vorhanden" || echo "❌ Fehlt"
|
||||
|
||||
# Prüfe Container-Status
|
||||
cd ~/deployment/stacks/application
|
||||
docker compose ps
|
||||
```
|
||||
|
||||
**Falls fehlend:** Siehe "Wichtiger Hinweis: Erstmalige Deployment" oben.
|
||||
|
||||
---
|
||||
|
||||
## 🎯 Erster Code-Push
|
||||
|
||||
### Option 1: Direkt pushen (wenn Stack bereits deployed)
|
||||
|
||||
```bash
|
||||
# Test-Commit
|
||||
echo "# Deployment Test $(date)" >> README.md
|
||||
git add README.md
|
||||
git commit -m "test: First deployment via CI/CD pipeline"
|
||||
git push origin main
|
||||
|
||||
# Pipeline beobachten:
|
||||
# → https://git.michaelschiemer.de/michael/michaelschiemer/actions
|
||||
```
|
||||
|
||||
### Option 2: Application Stack zuerst deployen
|
||||
|
||||
```bash
|
||||
# Application Stack deployen (inkl. alle Infrastructure Stacks)
|
||||
cd deployment/ansible
|
||||
ansible-playbook -i inventory/production.yml playbooks/setup-infrastructure.yml
|
||||
|
||||
# Danach: Ersten Code-Push
|
||||
git add .
|
||||
git commit -m "feat: Initial application deployment"
|
||||
git push origin main
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## 🔍 Was passiert beim Deployment
|
||||
|
||||
### Pipeline-Ablauf (automatisch):
|
||||
|
||||
1. **Tests** (~2-5 Min)
|
||||
- PHP Pest Tests
|
||||
- PHPStan Code Quality
|
||||
- Code Style Check
|
||||
|
||||
2. **Build** (~3-5 Min)
|
||||
- Docker Image Build
|
||||
- Image wird getaggt: `<short-sha>-<timestamp>`
|
||||
- Image wird zur Registry gepusht
|
||||
|
||||
3. **Deploy** (~2-4 Min)
|
||||
- SSH zum Production-Server
|
||||
- Ansible Playbook wird ausgeführt:
|
||||
- Backup erstellen
|
||||
- Registry Login
|
||||
- Neues Image pullen
|
||||
- docker-compose.yml aktualisieren
|
||||
- Stack neu starten
|
||||
- Health-Checks warten
|
||||
|
||||
4. **Health-Check** (~1 Min)
|
||||
- Application Health-Check
|
||||
- Bei Fehler: Automatischer Rollback
|
||||
|
||||
**Gesamtzeit:** ~8-15 Minuten
|
||||
|
||||
---
|
||||
|
||||
## ✅ Erfolgreiches Deployment erkennen
|
||||
|
||||
### In Gitea Actions
|
||||
|
||||
```
|
||||
https://git.michaelschiemer.de/michael/michaelschiemer/actions
|
||||
```
|
||||
|
||||
**Erfolg:**
|
||||
- 🟢 Alle Jobs grün
|
||||
- ✅ "Deploy via Ansible" erfolgreich
|
||||
- ✅ Health-Check erfolgreich
|
||||
|
||||
### Auf Production-Server
|
||||
|
||||
```bash
|
||||
# SSH zum Server
|
||||
ssh deploy@94.16.110.151
|
||||
|
||||
# Container-Status prüfen
|
||||
cd ~/deployment/stacks/application
|
||||
docker compose ps
|
||||
# Alle Container sollten "healthy" sein
|
||||
|
||||
# Application prüfen
|
||||
curl https://michaelschiemer.de/health
|
||||
# Sollte "healthy" zurückgeben
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## 🆘 Troubleshooting
|
||||
|
||||
### Problem: "docker-compose.yml not found"
|
||||
|
||||
**Lösung:**
|
||||
```bash
|
||||
# Application Stack zuerst deployen
|
||||
cd deployment/ansible
|
||||
ansible-playbook -i inventory/production.yml playbooks/setup-infrastructure.yml
|
||||
```
|
||||
|
||||
### Problem: Pipeline schlägt fehl
|
||||
|
||||
**Tests fehlgeschlagen:**
|
||||
- Tests lokal ausführen und Fehler beheben
|
||||
- `./vendor/bin/pest`
|
||||
- `composer cs`
|
||||
|
||||
**Build fehlgeschlagen:**
|
||||
- Docker Build lokal testen
|
||||
- `docker build -f Dockerfile.production -t test .`
|
||||
|
||||
**Deployment fehlgeschlagen:**
|
||||
- Logs prüfen: Workflow-Logs in Gitea Actions
|
||||
- Server-Logs prüfen: `ssh deploy@94.16.110.151 "cd ~/deployment/stacks/application && docker compose logs"`
|
||||
|
||||
### Problem: Health-Check fehlgeschlagen
|
||||
|
||||
**Automatischer Rollback:**
|
||||
- Pipeline führt automatisch Rollback durch
|
||||
- Alte Version wird wiederhergestellt
|
||||
|
||||
**Manueller Rollback:**
|
||||
```bash
|
||||
cd deployment/ansible
|
||||
ansible-playbook -i inventory/production.yml playbooks/rollback.yml
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## 📚 Weitere Dokumentation
|
||||
|
||||
- **[QUICK_START.md](QUICK_START.md)** - Schnellstart-Guide
|
||||
- **[CODE_CHANGE_WORKFLOW.md](CODE_CHANGE_WORKFLOW.md)** - Codeänderungen pushen
|
||||
- **[APPLICATION_STACK_DEPLOYMENT.md](APPLICATION_STACK_DEPLOYMENT.md)** - Deployment-Details
|
||||
- **[CI_CD_STATUS.md](CI_CD_STATUS.md)** - CI/CD Status
|
||||
|
||||
---
|
||||
|
||||
## 🎉 Ready!
|
||||
|
||||
**Alles ist bereit für Code-Deployments!**
|
||||
|
||||
**Nächster Schritt:**
|
||||
1. Prüfe ob Application Stack deployed ist (siehe oben)
|
||||
2. Falls nicht: `setup-infrastructure.yml` ausführen
|
||||
3. Dann: Code pushen und Deployment genießen! 🚀
|
||||
@@ -17,7 +17,8 @@ Sollte enthalten:
|
||||
```ini
|
||||
[actions]
|
||||
ENABLED = true
|
||||
DEFAULT_ACTIONS_URL = https://github.com
|
||||
# Do NOT set DEFAULT_ACTIONS_URL - it will automatically use Gitea's own instance
|
||||
# Setting DEFAULT_ACTIONS_URL to custom URLs is no longer supported by Gitea
|
||||
```
|
||||
|
||||
#### 2. Timeouts bei langen Steps
|
||||
@@ -43,11 +44,11 @@ docker compose restart gitea-runner
|
||||
|
||||
**Lösung:** Prüfe, ob Buildx richtig läuft. Alternativ: Direktes Docker Build verwenden.
|
||||
|
||||
#### 4. GitHub-Variablen in Gitea
|
||||
#### 4. Gitea-Variablen (früher GitHub-kompatibel)
|
||||
|
||||
**Symptom:** `${{ github.sha }}` ist leer oder falsch
|
||||
|
||||
**Lösung:** Gitea Actions sollte `github.*` Variablen unterstützen, aber manchmal funktioniert `gitea.*` besser.
|
||||
**Lösung:** Gitea Actions unterstützt `github.*` Variablen für Kompatibilität, aber `gitea.*` ist die native Variante.
|
||||
|
||||
**Test:** Prüfe in Workflow-Logs, welche Variablen verfügbar sind:
|
||||
```yaml
|
||||
|
||||
@@ -13,7 +13,9 @@ docker_registry: "localhost:5000"
|
||||
docker_registry_url: "localhost:5000"
|
||||
docker_registry_external: "registry.michaelschiemer.de"
|
||||
docker_registry_username_default: "admin"
|
||||
docker_registry_password_default: "registry-secure-password-2025"
|
||||
# docker_registry_password_default should be set in vault as vault_docker_registry_password
|
||||
# If not using vault, override via -e docker_registry_password_default="your-password"
|
||||
registry_auth_path: "{{ stacks_base_path }}/registry/auth"
|
||||
|
||||
# Application Configuration
|
||||
app_name: "framework"
|
||||
@@ -21,6 +23,18 @@ app_domain: "michaelschiemer.de"
|
||||
app_image: "{{ docker_registry }}/{{ app_name }}"
|
||||
app_image_external: "{{ docker_registry_external }}/{{ app_name }}"
|
||||
|
||||
# Domain Configuration
|
||||
gitea_domain: "git.michaelschiemer.de"
|
||||
|
||||
# Email Configuration
|
||||
mail_from_address: "noreply@{{ app_domain }}"
|
||||
acme_email: "kontakt@{{ app_domain }}"
|
||||
|
||||
# SSL Certificate Domains
|
||||
ssl_domains:
|
||||
- "{{ gitea_domain }}"
|
||||
- "{{ app_domain }}"
|
||||
|
||||
# Health Check Configuration
|
||||
health_check_url: "https://{{ app_domain }}/health"
|
||||
health_check_retries: 10
|
||||
@@ -34,14 +48,26 @@ rollback_timeout: 300
|
||||
wait_timeout: 60
|
||||
|
||||
# Git Configuration (for sync-code.yml)
|
||||
git_repository_url_default: "https://git.michaelschiemer.de/michael/michaelschiemer.git"
|
||||
git_repository_url_default: "https://{{ gitea_domain }}/michael/michaelschiemer.git"
|
||||
git_branch_default: "main"
|
||||
git_token: "{{ vault_git_token | default('') }}"
|
||||
git_username: "{{ vault_git_username | default('') }}"
|
||||
git_password: "{{ vault_git_password | default('') }}"
|
||||
|
||||
# Database Configuration
|
||||
db_user_default: "postgres"
|
||||
db_name_default: "michaelschiemer"
|
||||
|
||||
# MinIO Object Storage Configuration
|
||||
minio_root_user: "{{ vault_minio_root_user | default('minioadmin') }}"
|
||||
minio_root_password: "{{ vault_minio_root_password | default('') }}"
|
||||
minio_api_domain: "minio-api.michaelschiemer.de"
|
||||
minio_console_domain: "minio.michaelschiemer.de"
|
||||
|
||||
# WireGuard Configuration
|
||||
wireguard_interface: "wg0"
|
||||
wireguard_config_path: "/etc/wireguard"
|
||||
wireguard_port_default: 51820
|
||||
wireguard_network_default: "10.8.0.0/24"
|
||||
wireguard_server_ip_default: "10.8.0.1"
|
||||
wireguard_enable_ip_forwarding: true
|
||||
|
||||
7
deployment/ansible/inventory/local.yml
Normal file
7
deployment/ansible/inventory/local.yml
Normal file
@@ -0,0 +1,7 @@
|
||||
---
|
||||
# Local inventory for running Ansible playbooks on localhost
|
||||
all:
|
||||
hosts:
|
||||
localhost:
|
||||
ansible_connection: local
|
||||
ansible_python_interpreter: "{{ ansible_playbook_python }}"
|
||||
@@ -1,43 +1,16 @@
|
||||
---
|
||||
all:
|
||||
hosts:
|
||||
children:
|
||||
production:
|
||||
ansible_host: 94.16.110.151
|
||||
ansible_user: deploy
|
||||
ansible_python_interpreter: /usr/bin/python3
|
||||
ansible_ssh_private_key_file: ~/.ssh/production
|
||||
|
||||
vars:
|
||||
# Docker Registry
|
||||
# Use localhost for internal access (registry only binds to 127.0.0.1:5000)
|
||||
# External access via Traefik: registry.michaelschiemer.de
|
||||
docker_registry: localhost:5000
|
||||
docker_registry_url: localhost:5000
|
||||
# Registry credentials (can be overridden via -e or vault)
|
||||
# Defaults are set here, can be overridden by extra vars or vault
|
||||
docker_registry_username_default: 'admin'
|
||||
docker_registry_password_default: 'registry-secure-password-2025'
|
||||
|
||||
# Application Configuration
|
||||
app_name: framework
|
||||
app_domain: michaelschiemer.de
|
||||
app_image: "{{ docker_registry }}/{{ app_name }}"
|
||||
|
||||
# Docker Stack
|
||||
stack_name: app
|
||||
compose_file: /home/deploy/docker-compose.prod.yml
|
||||
|
||||
# Deployment Paths
|
||||
deploy_user_home: /home/deploy
|
||||
app_base_path: "{{ deploy_user_home }}/app"
|
||||
secrets_path: "{{ deploy_user_home }}/secrets"
|
||||
backups_path: "{{ deploy_user_home }}/backups"
|
||||
|
||||
# Health Check
|
||||
health_check_url: "https://{{ app_domain }}/health"
|
||||
health_check_retries: 10
|
||||
health_check_delay: 10
|
||||
|
||||
# Rollback Configuration
|
||||
max_rollback_versions: 5
|
||||
rollback_timeout: 300
|
||||
hosts:
|
||||
server:
|
||||
ansible_host: 94.16.110.151
|
||||
ansible_user: deploy
|
||||
ansible_python_interpreter: /usr/bin/python3
|
||||
ansible_ssh_private_key_file: ~/.ssh/production
|
||||
vars:
|
||||
# Note: Centralized variables are defined in group_vars/production.yml
|
||||
# Only override-specific variables should be here
|
||||
|
||||
# Legacy compose_file reference (deprecated - stacks now use deployment/stacks/)
|
||||
compose_file: "{{ stacks_base_path }}/application/docker-compose.yml"
|
||||
|
||||
68
deployment/ansible/playbooks/check-git-logs.yml
Normal file
68
deployment/ansible/playbooks/check-git-logs.yml
Normal file
@@ -0,0 +1,68 @@
|
||||
---
|
||||
- name: Check Git Deployment Logs
|
||||
hosts: production
|
||||
gather_facts: yes
|
||||
become: no
|
||||
|
||||
tasks:
|
||||
- name: Get full container logs
|
||||
shell: |
|
||||
docker logs app --tail 100
|
||||
args:
|
||||
executable: /bin/bash
|
||||
register: container_logs
|
||||
changed_when: false
|
||||
|
||||
- name: Get Git-related logs
|
||||
shell: |
|
||||
docker logs app --tail 100 | grep -E "(Git|Clone|Pull|✅|❌|📥|📦|🔄|🗑️)" || echo "No Git-related logs found"
|
||||
args:
|
||||
executable: /bin/bash
|
||||
register: git_logs
|
||||
changed_when: false
|
||||
|
||||
- name: Check GIT_REPOSITORY_URL environment variable
|
||||
shell: |
|
||||
docker exec app env | grep GIT_REPOSITORY_URL || echo "GIT_REPOSITORY_URL not set"
|
||||
args:
|
||||
executable: /bin/bash
|
||||
register: git_env
|
||||
changed_when: false
|
||||
ignore_errors: yes
|
||||
|
||||
- name: Check if .git directory exists
|
||||
shell: |
|
||||
docker exec app test -d /var/www/html/.git && echo "✅ Git repo vorhanden" || echo "❌ Git repo fehlt"
|
||||
args:
|
||||
executable: /bin/bash
|
||||
register: git_repo_check
|
||||
changed_when: false
|
||||
ignore_errors: yes
|
||||
|
||||
- name: Check entrypoint script for Git functionality
|
||||
shell: |
|
||||
docker exec app cat /usr/local/bin/entrypoint.sh | grep -A 5 "GIT_REPOSITORY_URL" | head -10 || echo "Entrypoint script not found or no Git functionality"
|
||||
args:
|
||||
executable: /bin/bash
|
||||
register: entrypoint_check
|
||||
changed_when: false
|
||||
ignore_errors: yes
|
||||
|
||||
- name: Display Git-related logs
|
||||
debug:
|
||||
msg:
|
||||
- "=== Git-Related Logs ==="
|
||||
- "{{ git_logs.stdout }}"
|
||||
- ""
|
||||
- "=== Git Environment Variable ==="
|
||||
- "{{ git_env.stdout }}"
|
||||
- ""
|
||||
- "=== Git Repository Check ==="
|
||||
- "{{ git_repo_check.stdout }}"
|
||||
- ""
|
||||
- "=== Entrypoint Git Check ==="
|
||||
- "{{ entrypoint_check.stdout }}"
|
||||
|
||||
- name: Display full logs (last 50 lines)
|
||||
debug:
|
||||
msg: "{{ container_logs.stdout_lines[-50:] | join('\n') }}"
|
||||
@@ -22,8 +22,8 @@
|
||||
|
||||
- name: Derive docker registry credentials from vault when not provided
|
||||
set_fact:
|
||||
docker_registry_username: "{{ docker_registry_username | default(vault_docker_registry_username | default(docker_registry_username_default | default('admin'))) }}"
|
||||
docker_registry_password: "{{ docker_registry_password | default(vault_docker_registry_password | default(docker_registry_password_default | default('registry-secure-password-2025'))) }}"
|
||||
docker_registry_username: "{{ docker_registry_username | default(vault_docker_registry_username | default(docker_registry_username_default)) }}"
|
||||
docker_registry_password: "{{ docker_registry_password | default(vault_docker_registry_password | default(docker_registry_password_default)) }}"
|
||||
|
||||
- name: Verify Docker is running
|
||||
systemd:
|
||||
|
||||
81
deployment/ansible/playbooks/fix-gitea-actions-config.yml
Normal file
81
deployment/ansible/playbooks/fix-gitea-actions-config.yml
Normal file
@@ -0,0 +1,81 @@
|
||||
---
|
||||
- name: Fix Gitea Actions Configuration (non-destructive)
|
||||
hosts: production
|
||||
become: no
|
||||
gather_facts: yes
|
||||
|
||||
tasks:
|
||||
- name: Check current Gitea Actions configuration
|
||||
shell: |
|
||||
docker exec gitea cat /data/gitea/conf/app.ini 2>/dev/null | grep -A 5 "\[actions\]" || echo "No actions section found"
|
||||
register: current_config
|
||||
changed_when: false
|
||||
ignore_errors: yes
|
||||
|
||||
- name: Backup existing app.ini
|
||||
shell: |
|
||||
docker exec gitea cp /data/gitea/conf/app.ini /data/gitea/conf/app.ini.backup.$(date +%Y%m%d_%H%M%S)
|
||||
changed_when: false
|
||||
ignore_errors: yes
|
||||
|
||||
- name: Copy app.ini from container for editing
|
||||
shell: |
|
||||
docker cp gitea:/data/gitea/conf/app.ini /tmp/gitea_app_ini_$$
|
||||
register: copy_result
|
||||
|
||||
- name: Update app.ini Actions section
|
||||
shell: |
|
||||
# Remove DEFAULT_ACTIONS_URL line if it exists in [actions] section
|
||||
sed -i '/^\[actions\]/,/^\[/{ /^DEFAULT_ACTIONS_URL/d; }' /tmp/gitea_app_ini_$$
|
||||
|
||||
# Ensure ENABLED = true in [actions] section
|
||||
if grep -q "^\[actions\]" /tmp/gitea_app_ini_$$; then
|
||||
# Section exists - ensure ENABLED = true
|
||||
sed -i '/^\[actions\]/,/^\[/{ s/^ENABLED.*/ENABLED = true/; }' /tmp/gitea_app_ini_$$
|
||||
# If ENABLED line doesn't exist, add it
|
||||
if ! grep -A 10 "^\[actions\]" /tmp/gitea_app_ini_$$ | grep -q "^ENABLED"; then
|
||||
sed -i '/^\[actions\]/a ENABLED = true' /tmp/gitea_app_ini_$$
|
||||
fi
|
||||
else
|
||||
# Section doesn't exist - add it
|
||||
echo "" >> /tmp/gitea_app_ini_$$
|
||||
echo "[actions]" >> /tmp/gitea_app_ini_$$
|
||||
echo "ENABLED = true" >> /tmp/gitea_app_ini_$$
|
||||
fi
|
||||
args:
|
||||
executable: /bin/bash
|
||||
register: config_updated
|
||||
|
||||
- name: Copy updated app.ini back to container
|
||||
shell: |
|
||||
docker cp /tmp/gitea_app_ini_$$ gitea:/data/gitea/conf/app.ini
|
||||
rm -f /tmp/gitea_app_ini_$$
|
||||
when: config_updated.changed | default(false)
|
||||
|
||||
- name: Verify Actions configuration after update
|
||||
shell: |
|
||||
docker exec gitea cat /data/gitea/conf/app.ini | grep -A 5 "\[actions\]"
|
||||
register: updated_config
|
||||
changed_when: false
|
||||
|
||||
- name: Restart Gitea to apply configuration
|
||||
shell: |
|
||||
cd {{ stacks_base_path }}/gitea
|
||||
docker compose restart gitea
|
||||
when: config_updated.changed | default(false)
|
||||
|
||||
- name: Wait for Gitea to be ready
|
||||
wait_for:
|
||||
timeout: 60
|
||||
when: config_updated.changed | default(false)
|
||||
|
||||
- name: Display configuration result
|
||||
debug:
|
||||
msg:
|
||||
- "=== Gitea Actions Configuration Fixed ==="
|
||||
- ""
|
||||
- "Current [actions] configuration:"
|
||||
- "{{ updated_config.stdout }}"
|
||||
- ""
|
||||
- "Configuration updated: {{ 'Yes' if config_updated.changed else 'No changes needed' }}"
|
||||
- "Gitea restarted: {{ 'Yes' if config_updated.changed else 'No' }}"
|
||||
49
deployment/ansible/playbooks/fix-gitea-actions-url.yml
Normal file
49
deployment/ansible/playbooks/fix-gitea-actions-url.yml
Normal file
@@ -0,0 +1,49 @@
|
||||
---
|
||||
- name: Remove DEFAULT_ACTIONS_URL from Gitea configuration
|
||||
hosts: production
|
||||
become: no
|
||||
gather_facts: yes
|
||||
|
||||
tasks:
|
||||
- name: Check if DEFAULT_ACTIONS_URL exists in app.ini
|
||||
shell: |
|
||||
docker exec gitea cat /data/gitea/conf/app.ini 2>/dev/null | grep -q "DEFAULT_ACTIONS_URL" && echo "exists" || echo "not_found"
|
||||
register: url_check
|
||||
changed_when: false
|
||||
ignore_errors: yes
|
||||
|
||||
- name: Remove DEFAULT_ACTIONS_URL from app.ini
|
||||
shell: |
|
||||
docker exec gitea sh -c 'sed -i "/^DEFAULT_ACTIONS_URL/d" /data/gitea/conf/app.ini'
|
||||
when: url_check.stdout == "exists"
|
||||
register: url_removed
|
||||
|
||||
- name: Restart Gitea to apply configuration changes
|
||||
shell: |
|
||||
cd {{ stacks_base_path }}/gitea
|
||||
docker compose restart gitea
|
||||
when: url_removed.changed | default(false)
|
||||
|
||||
- name: Wait for Gitea to be ready
|
||||
wait_for:
|
||||
timeout: 60
|
||||
when: url_removed.changed | default(false)
|
||||
|
||||
- name: Verify Gitea Actions configuration
|
||||
shell: |
|
||||
docker exec gitea cat /data/gitea/conf/app.ini 2>/dev/null | grep -A 3 "\[actions\]" || echo "Config not accessible"
|
||||
register: gitea_config
|
||||
changed_when: false
|
||||
ignore_errors: yes
|
||||
|
||||
- name: Display Gitea Actions configuration
|
||||
debug:
|
||||
msg:
|
||||
- "=== Gitea Configuration Fix Complete ==="
|
||||
- "DEFAULT_ACTIONS_URL removed: {{ 'Yes' if url_removed.changed else 'No (not found or already removed)' }}"
|
||||
- "Container restarted: {{ 'Yes' if url_removed.changed else 'No' }}"
|
||||
- ""
|
||||
- "Current Actions configuration:"
|
||||
- "{{ gitea_config.stdout if gitea_config.stdout else 'Could not read config' }}"
|
||||
- ""
|
||||
- "Gitea will now use its own instance for actions by default (no GitHub fallback)."
|
||||
@@ -0,0 +1,165 @@
|
||||
---
|
||||
- name: Remove framework-production Stack from Production Server
|
||||
hosts: production
|
||||
become: no
|
||||
gather_facts: yes
|
||||
|
||||
vars:
|
||||
stack_name: framework-production
|
||||
stack_path: "~/framework-production"
|
||||
|
||||
tasks:
|
||||
- name: Check if Docker is running
|
||||
systemd:
|
||||
name: docker
|
||||
state: started
|
||||
register: docker_service
|
||||
become: yes
|
||||
|
||||
- name: Fail if Docker is not running
|
||||
fail:
|
||||
msg: "Docker service is not running"
|
||||
when: docker_service.status.ActiveState != 'active'
|
||||
|
||||
- name: Check if framework-production stack directory exists
|
||||
stat:
|
||||
path: "{{ stack_path }}"
|
||||
register: stack_dir
|
||||
|
||||
- name: Check if framework-production containers exist (all states)
|
||||
shell: |
|
||||
docker ps -a --filter "name={{ stack_name }}" --format "{{ '{{' }}.Names{{ '}}' }}"
|
||||
args:
|
||||
executable: /bin/bash
|
||||
register: all_containers
|
||||
changed_when: false
|
||||
failed_when: false
|
||||
|
||||
- name: Display all containers found
|
||||
debug:
|
||||
msg: "Found containers: {{ all_containers.stdout_lines if all_containers.stdout_lines | length > 0 else 'None' }}"
|
||||
|
||||
- name: List all containers to find framework-production related ones
|
||||
shell: |
|
||||
docker ps -a --format "{{ '{{' }}.Names{{ '}}' }}\t{{ '{{' }}.Image{{ '}}' }}\t{{ '{{' }}.Status{{ '}}' }}"
|
||||
args:
|
||||
executable: /bin/bash
|
||||
register: all_containers_list
|
||||
changed_when: false
|
||||
failed_when: false
|
||||
|
||||
- name: Display all containers
|
||||
debug:
|
||||
msg: "{{ all_containers_list.stdout_lines }}"
|
||||
|
||||
- name: Check for containers with framework-production in name or image
|
||||
shell: |
|
||||
docker ps -a --format "{{ '{{' }}.Names{{ '}}' }}" | grep -iE "(framework-production|^db$|^php$|^web$)" || echo ""
|
||||
args:
|
||||
executable: /bin/bash
|
||||
register: matching_containers
|
||||
changed_when: false
|
||||
failed_when: false
|
||||
|
||||
- name: Check for containers with framework-production images
|
||||
shell: |
|
||||
docker ps -a --format "{{ '{{' }}.Names{{ '}}' }}\t{{ '{{' }}.Image{{ '}}' }}" | grep -i "framework-production" | cut -f1 || echo ""
|
||||
args:
|
||||
executable: /bin/bash
|
||||
register: image_based_containers
|
||||
changed_when: false
|
||||
failed_when: false
|
||||
|
||||
- name: Display found containers
|
||||
debug:
|
||||
msg:
|
||||
- "Containers by name pattern: {{ matching_containers.stdout_lines if matching_containers.stdout_lines | length > 0 else 'None' }}"
|
||||
- "Containers by image: {{ image_based_containers.stdout_lines if image_based_containers.stdout_lines | length > 0 else 'None' }}"
|
||||
|
||||
- name: Stop and remove containers using docker-compose if stack directory exists
|
||||
shell: |
|
||||
cd {{ stack_path }}
|
||||
docker-compose down -v
|
||||
args:
|
||||
executable: /bin/bash
|
||||
when: stack_dir.stat.exists
|
||||
register: compose_down_result
|
||||
changed_when: true
|
||||
ignore_errors: yes
|
||||
|
||||
- name: Stop and remove containers by name pattern and image
|
||||
shell: |
|
||||
REMOVED_CONTAINERS=""
|
||||
|
||||
# Method 1: Remove containers with framework-production in image name
|
||||
while IFS=$'\t' read -r container image; do
|
||||
if [[ "$image" == *"framework-production"* ]]; then
|
||||
echo "Stopping and removing container '$container' (image: $image)"
|
||||
docker stop "$container" 2>/dev/null || true
|
||||
docker rm "$container" 2>/dev/null || true
|
||||
REMOVED_CONTAINERS="$REMOVED_CONTAINERS $container"
|
||||
fi
|
||||
done < <(docker ps -a --format "{{ '{{' }}.Names{{ '}}' }}\t{{ '{{' }}.Image{{ '}}' }}")
|
||||
|
||||
# Method 2: Remove containers with specific names that match the pattern
|
||||
for container_name in "db" "php" "web"; do
|
||||
# Check if container exists and has framework-production image
|
||||
container_info=$(docker ps -a --filter "name=^${container_name}$" --format "{{ '{{' }}.Names{{ '}}' }}\t{{ '{{' }}.Image{{ '}}' }}" 2>/dev/null || echo "")
|
||||
if [[ -n "$container_info" ]]; then
|
||||
image=$(echo "$container_info" | cut -f2)
|
||||
if [[ "$image" == *"framework-production"* ]] || [[ "$image" == *"mariadb"* ]]; then
|
||||
echo "Stopping and removing container '$container_name' (image: $image)"
|
||||
docker stop "$container_name" 2>/dev/null || true
|
||||
docker rm "$container_name" 2>/dev/null || true
|
||||
REMOVED_CONTAINERS="$REMOVED_CONTAINERS $container_name"
|
||||
fi
|
||||
fi
|
||||
done
|
||||
|
||||
# Method 3: Remove containers with framework-production in name
|
||||
docker ps -a --format "{{ '{{' }}.Names{{ '}}' }}" | grep -i framework-production | while read container; do
|
||||
if [ ! -z "$container" ]; then
|
||||
echo "Stopping and removing container '$container'"
|
||||
docker stop "$container" 2>/dev/null || true
|
||||
docker rm "$container" 2>/dev/null || true
|
||||
REMOVED_CONTAINERS="$REMOVED_CONTAINERS $container"
|
||||
fi
|
||||
done
|
||||
|
||||
# Output removed containers
|
||||
if [[ -n "$REMOVED_CONTAINERS" ]]; then
|
||||
echo "Removed containers:$REMOVED_CONTAINERS"
|
||||
else
|
||||
echo "No containers were removed"
|
||||
fi
|
||||
args:
|
||||
executable: /bin/bash
|
||||
register: direct_remove_result
|
||||
changed_when: "'Removed containers' in direct_remove_result.stdout"
|
||||
failed_when: false
|
||||
|
||||
- name: Remove stack directory if it exists
|
||||
file:
|
||||
path: "{{ stack_path }}"
|
||||
state: absent
|
||||
when: stack_dir.stat.exists
|
||||
register: dir_removed
|
||||
|
||||
- name: Verify all framework-production containers are removed
|
||||
shell: |
|
||||
docker ps -a --format "{{ '{{' }}.Names{{ '}}' }}" | grep -i framework-production || echo ""
|
||||
args:
|
||||
executable: /bin/bash
|
||||
register: remaining_containers
|
||||
changed_when: false
|
||||
failed_when: false
|
||||
|
||||
- name: Display removal status
|
||||
debug:
|
||||
msg:
|
||||
- "=== framework-production Stack Removal Complete ==="
|
||||
- "Stack directory removed: {{ 'Yes' if dir_removed.changed else 'No (did not exist)' }}"
|
||||
- "Containers removed: {{ 'Yes' if (compose_down_result.changed or direct_remove_result.changed) else 'No (none found)' }}"
|
||||
- "Remaining containers: {{ remaining_containers.stdout if remaining_containers.stdout else 'None' }}"
|
||||
- ""
|
||||
- "Stack '{{ stack_name }}' has been successfully removed."
|
||||
@@ -6,7 +6,7 @@
|
||||
|
||||
vars:
|
||||
rollback_to_version: "{{ rollback_to_version | default('previous') }}"
|
||||
app_stack_path: "{{ deploy_user_home }}/deployment/stacks/application"
|
||||
# app_stack_path is now defined in group_vars/production.yml
|
||||
|
||||
pre_tasks:
|
||||
- name: Optionally load registry credentials from encrypted vault
|
||||
@@ -19,8 +19,8 @@
|
||||
|
||||
- name: Derive docker registry credentials from vault when not provided
|
||||
set_fact:
|
||||
docker_registry_username: "{{ docker_registry_username | default(vault_docker_registry_username | default(docker_registry_username_default | default('admin'))) }}"
|
||||
docker_registry_password: "{{ docker_registry_password | default(vault_docker_registry_password | default(docker_registry_password_default | default('registry-secure-password-2025'))) }}"
|
||||
docker_registry_username: "{{ docker_registry_username | default(vault_docker_registry_username | default(docker_registry_username_default)) }}"
|
||||
docker_registry_password: "{{ docker_registry_password | default(vault_docker_registry_password | default(docker_registry_password_default)) }}"
|
||||
|
||||
- name: Check Docker service
|
||||
systemd:
|
||||
|
||||
@@ -11,7 +11,7 @@
|
||||
vars:
|
||||
project_root: "{{ lookup('env', 'PWD') | default(playbook_dir + '/../..', true) }}"
|
||||
ci_image_name: "php-ci:latest"
|
||||
ci_image_registry: "{{ ci_registry | default('registry.michaelschiemer.de') }}"
|
||||
ci_image_registry: "{{ ci_registry | default(docker_registry_external) }}"
|
||||
ci_image_registry_path: "{{ ci_registry }}/ci/php-ci:latest"
|
||||
gitea_runner_dir: "{{ project_root }}/deployment/gitea-runner"
|
||||
docker_dind_container: "gitea-runner-dind"
|
||||
|
||||
@@ -5,10 +5,17 @@
|
||||
gather_facts: yes
|
||||
|
||||
vars:
|
||||
stacks_base_path: "~/deployment/stacks"
|
||||
wait_timeout: 60
|
||||
# All deployment variables are now defined in group_vars/production.yml
|
||||
# Variables can be overridden via -e flag if needed
|
||||
|
||||
tasks:
|
||||
- name: Debug - Show variables
|
||||
debug:
|
||||
msg:
|
||||
- "stacks_base_path: {{ stacks_base_path | default('NOT SET') }}"
|
||||
- "deploy_user_home: {{ deploy_user_home | default('NOT SET') }}"
|
||||
when: false # Only enable for debugging
|
||||
|
||||
- name: Check if deployment stacks directory exists
|
||||
stat:
|
||||
path: "{{ stacks_base_path }}"
|
||||
@@ -83,22 +90,42 @@
|
||||
# 3. Deploy Docker Registry (Private Registry)
|
||||
- name: Ensure Registry auth directory exists
|
||||
file:
|
||||
path: "{{ stacks_base_path }}/registry/auth"
|
||||
path: "{{ registry_auth_path }}"
|
||||
state: directory
|
||||
mode: '0755'
|
||||
become: yes
|
||||
|
||||
- name: Optionally load registry credentials from vault
|
||||
include_vars:
|
||||
file: "{{ playbook_dir }}/../secrets/production.vault.yml"
|
||||
no_log: yes
|
||||
ignore_errors: yes
|
||||
delegate_to: localhost
|
||||
become: no
|
||||
|
||||
- name: Set registry credentials from vault or defaults
|
||||
set_fact:
|
||||
registry_username: "{{ vault_docker_registry_username | default(docker_registry_username_default) }}"
|
||||
registry_password: "{{ vault_docker_registry_password | default(docker_registry_password_default) }}"
|
||||
no_log: true
|
||||
|
||||
- name: Fail if registry password is not set
|
||||
fail:
|
||||
msg: "Registry password must be set in vault or docker_registry_password_default"
|
||||
when: registry_password is not defined or registry_password == ""
|
||||
|
||||
- name: Create Registry htpasswd file if missing
|
||||
shell: |
|
||||
if [ ! -f {{ stacks_base_path }}/registry/auth/htpasswd ]; then
|
||||
docker run --rm --entrypoint htpasswd httpd:2 -Bbn admin registry-secure-password-2025 > {{ stacks_base_path }}/registry/auth/htpasswd
|
||||
chmod 644 {{ stacks_base_path }}/registry/auth/htpasswd
|
||||
if [ ! -f {{ registry_auth_path }}/htpasswd ]; then
|
||||
docker run --rm --entrypoint htpasswd httpd:2 -Bbn {{ registry_username }} {{ registry_password }} > {{ registry_auth_path }}/htpasswd
|
||||
chmod 644 {{ registry_auth_path }}/htpasswd
|
||||
fi
|
||||
args:
|
||||
executable: /bin/bash
|
||||
become: yes
|
||||
changed_when: true
|
||||
register: registry_auth_created
|
||||
no_log: true
|
||||
|
||||
- name: Deploy Docker Registry stack
|
||||
community.docker.docker_compose_v2:
|
||||
@@ -126,19 +153,95 @@
|
||||
- name: Verify Registry is accessible
|
||||
uri:
|
||||
url: "http://127.0.0.1:5000/v2/_catalog"
|
||||
user: admin
|
||||
password: registry-secure-password-2025
|
||||
user: "{{ registry_username }}"
|
||||
password: "{{ registry_password }}"
|
||||
status_code: 200
|
||||
timeout: 5
|
||||
register: registry_check
|
||||
ignore_errors: yes
|
||||
changed_when: false
|
||||
no_log: true
|
||||
|
||||
- name: Display Registry status
|
||||
debug:
|
||||
msg: "Registry accessibility: {{ 'SUCCESS' if registry_check.status == 200 else 'FAILED - may need manual check' }}"
|
||||
|
||||
# 4. Deploy Gitea (CRITICAL - Git Server + MySQL + Redis)
|
||||
# 4. Deploy MinIO (Object Storage)
|
||||
- name: Optionally load MinIO secrets from vault
|
||||
include_vars:
|
||||
file: "{{ playbook_dir }}/../secrets/production.vault.yml"
|
||||
no_log: yes
|
||||
ignore_errors: yes
|
||||
delegate_to: localhost
|
||||
become: no
|
||||
|
||||
- name: Set MinIO root password from vault or generate
|
||||
set_fact:
|
||||
minio_password: "{{ vault_minio_root_password | default(lookup('password', '/dev/null length=32 chars=ascii_letters,digits,punctuation')) }}"
|
||||
no_log: yes
|
||||
|
||||
- name: Set MinIO root user from vault or use default
|
||||
set_fact:
|
||||
minio_user: "{{ vault_minio_root_user | default('minioadmin') }}"
|
||||
|
||||
- name: Ensure MinIO stack directory exists
|
||||
file:
|
||||
path: "{{ stacks_base_path }}/minio"
|
||||
state: directory
|
||||
mode: '0755'
|
||||
|
||||
- name: Create MinIO stack .env file
|
||||
template:
|
||||
src: "{{ playbook_dir }}/../templates/minio.env.j2"
|
||||
dest: "{{ stacks_base_path }}/minio/.env"
|
||||
owner: "{{ ansible_user }}"
|
||||
group: "{{ ansible_user }}"
|
||||
mode: '0600'
|
||||
vars:
|
||||
minio_root_user: "{{ minio_user }}"
|
||||
minio_root_password: "{{ minio_password }}"
|
||||
minio_api_domain: "{{ minio_api_domain }}"
|
||||
minio_console_domain: "{{ minio_console_domain }}"
|
||||
no_log: yes
|
||||
|
||||
- name: Deploy MinIO stack
|
||||
community.docker.docker_compose_v2:
|
||||
project_src: "{{ stacks_base_path }}/minio"
|
||||
state: present
|
||||
pull: always
|
||||
register: minio_output
|
||||
|
||||
- name: Wait for MinIO to be ready
|
||||
wait_for:
|
||||
timeout: "{{ wait_timeout }}"
|
||||
when: minio_output.changed
|
||||
|
||||
- name: Check MinIO logs for readiness
|
||||
shell: docker compose logs minio 2>&1 | grep -Ei "(API:|WebUI:|MinIO Object Storage Server)" || true
|
||||
args:
|
||||
chdir: "{{ stacks_base_path }}/minio"
|
||||
register: minio_logs
|
||||
until: minio_logs.stdout != ""
|
||||
retries: 6
|
||||
delay: 10
|
||||
changed_when: false
|
||||
ignore_errors: yes
|
||||
|
||||
- name: Verify MinIO health endpoint
|
||||
uri:
|
||||
url: "http://127.0.0.1:9000/minio/health/live"
|
||||
method: GET
|
||||
status_code: [200, 404, 502, 503]
|
||||
timeout: 5
|
||||
register: minio_health_check
|
||||
ignore_errors: yes
|
||||
changed_when: false
|
||||
|
||||
- name: Display MinIO status
|
||||
debug:
|
||||
msg: "MinIO health check: {{ 'SUCCESS' if minio_health_check.status == 200 else 'FAILED - Status: ' + (minio_health_check.status|string) }}"
|
||||
|
||||
# 5. Deploy Gitea (CRITICAL - Git Server + MySQL + Redis)
|
||||
- name: Deploy Gitea stack
|
||||
community.docker.docker_compose_v2:
|
||||
project_src: "{{ stacks_base_path }}/gitea"
|
||||
@@ -162,7 +265,7 @@
|
||||
changed_when: false
|
||||
ignore_errors: yes
|
||||
|
||||
# 5. Deploy Monitoring (Portainer + Grafana + Prometheus)
|
||||
# 6. Deploy Monitoring (Portainer + Grafana + Prometheus)
|
||||
- name: Optionally load monitoring secrets from vault
|
||||
include_vars:
|
||||
file: "{{ playbook_dir }}/../secrets/production.vault.yml"
|
||||
@@ -229,7 +332,7 @@
|
||||
|
||||
- name: Verify Gitea accessibility via HTTPS
|
||||
uri:
|
||||
url: https://git.michaelschiemer.de
|
||||
url: "https://{{ gitea_domain }}"
|
||||
method: GET
|
||||
validate_certs: no
|
||||
status_code: 200
|
||||
@@ -241,7 +344,7 @@
|
||||
debug:
|
||||
msg: "Gitea HTTPS check: {{ 'SUCCESS' if gitea_http_check.status == 200 else 'FAILED - Status: ' + (gitea_http_check.status|string) }}"
|
||||
|
||||
# 6. Deploy Application Stack
|
||||
# 7. Deploy Application Stack
|
||||
- name: Optionally load application secrets from vault
|
||||
include_vars:
|
||||
file: "{{ playbook_dir }}/../secrets/production.vault.yml"
|
||||
@@ -320,10 +423,10 @@
|
||||
mode: '0600'
|
||||
vars:
|
||||
db_password: "{{ app_db_password }}"
|
||||
db_user: "{{ db_user | default('postgres') }}"
|
||||
db_name: "{{ db_name | default('michaelschiemer') }}"
|
||||
db_user: "{{ db_user | default(db_user_default) }}"
|
||||
db_name: "{{ db_name | default(db_name_default) }}"
|
||||
redis_password: "{{ app_redis_password }}"
|
||||
app_domain: "{{ app_domain | default('michaelschiemer.de') }}"
|
||||
app_domain: "{{ app_domain }}"
|
||||
no_log: yes
|
||||
|
||||
- name: Deploy Application stack
|
||||
@@ -391,7 +494,7 @@
|
||||
|
||||
- name: Verify application accessibility via HTTPS
|
||||
uri:
|
||||
url: "https://{{ app_domain | default('michaelschiemer.de') }}/health"
|
||||
url: "{{ health_check_url }}"
|
||||
method: GET
|
||||
validate_certs: no
|
||||
status_code: [200, 404, 502, 503]
|
||||
@@ -412,13 +515,14 @@
|
||||
- "Traefik: {{ 'Deployed' if traefik_output.changed else 'Already running' }}"
|
||||
- "PostgreSQL: {{ 'Deployed' if postgres_output.changed else 'Already running' }}"
|
||||
- "Docker Registry: {{ 'Deployed' if registry_output.changed else 'Already running' }}"
|
||||
- "MinIO: {{ 'Deployed' if minio_output.changed else 'Already running' }}"
|
||||
- "Gitea: {{ 'Deployed' if gitea_output.changed else 'Already running' }}"
|
||||
- "Monitoring: {{ 'Deployed' if monitoring_output.changed else 'Already running' }}"
|
||||
- "Application: {{ 'Deployed' if application_output.changed else 'Already running' }}"
|
||||
- ""
|
||||
- "Next Steps:"
|
||||
- "1. Access Gitea at: https://git.michaelschiemer.de"
|
||||
- "1. Access Gitea at: https://{{ gitea_domain }}"
|
||||
- "2. Complete Gitea setup wizard if first-time deployment"
|
||||
- "3. Navigate to Admin > Actions > Runners to get registration token"
|
||||
- "4. Continue with Phase 1 - Gitea Runner Setup"
|
||||
- "5. Access Application at: https://{{ app_domain | default('michaelschiemer.de') }}"
|
||||
- "5. Access Application at: https://{{ app_domain }}"
|
||||
|
||||
@@ -5,10 +5,9 @@
|
||||
gather_facts: yes
|
||||
|
||||
vars:
|
||||
domains:
|
||||
- git.michaelschiemer.de
|
||||
- michaelschiemer.de
|
||||
acme_email: kontakt@michaelschiemer.de
|
||||
# ssl_domains and acme_email are defined in group_vars/production.yml
|
||||
# Can be overridden via -e flag if needed
|
||||
domains: "{{ ssl_domains | default([gitea_domain, app_domain]) }}"
|
||||
|
||||
tasks:
|
||||
- name: Check if acme.json exists and is a file
|
||||
@@ -70,7 +69,7 @@
|
||||
|
||||
- name: Check if acme.json contains certificates
|
||||
stat:
|
||||
path: "{{ deploy_user_home }}/deployment/stacks/traefik/acme.json"
|
||||
path: "{{ stacks_base_path }}/traefik/acme.json"
|
||||
register: acme_file
|
||||
|
||||
- name: Display certificate status
|
||||
@@ -79,8 +78,9 @@
|
||||
Certificate setup triggered.
|
||||
Traefik will request Let's Encrypt certificates for:
|
||||
{{ domains | join(', ') }}
|
||||
ACME Email: {{ acme_email }}
|
||||
|
||||
Check Traefik logs to see certificate generation progress:
|
||||
docker compose -f {{ deploy_user_home }}/deployment/stacks/traefik/docker-compose.yml logs traefik | grep -i acme
|
||||
docker compose -f {{ stacks_base_path }}/traefik/docker-compose.yml logs traefik | grep -i acme
|
||||
|
||||
Certificates should be ready within 1-2 minutes.
|
||||
|
||||
@@ -5,20 +5,13 @@
|
||||
gather_facts: yes
|
||||
|
||||
vars:
|
||||
wireguard_interface: "wg0"
|
||||
wireguard_config_path: "/etc/wireguard"
|
||||
wireguard_config_file: "{{ wireguard_config_path }}/{{ wireguard_interface }}.conf"
|
||||
wireguard_private_key_file: "{{ wireguard_config_path }}/{{ wireguard_interface }}_private.key"
|
||||
wireguard_public_key_file: "{{ wireguard_config_path }}/{{ wireguard_interface }}_public.key"
|
||||
wireguard_client_configs_path: "{{ wireguard_config_path }}/clients"
|
||||
wireguard_enable_ip_forwarding: true
|
||||
# WireGuard variables are defined in group_vars/production.yml
|
||||
# Can be overridden via -e flag if needed
|
||||
wireguard_port: "{{ wireguard_port | default(wireguard_port_default) }}"
|
||||
wireguard_network: "{{ wireguard_network | default(wireguard_network_default) }}"
|
||||
wireguard_server_ip: "{{ wireguard_server_ip | default(wireguard_server_ip_default) }}"
|
||||
|
||||
pre_tasks:
|
||||
- name: Set WireGuard variables with defaults
|
||||
set_fact:
|
||||
wireguard_port: "{{ wireguard_port | default(51820) }}"
|
||||
wireguard_network: "{{ wireguard_network | default('10.8.0.0/24') }}"
|
||||
wireguard_server_ip: "{{ wireguard_server_ip | default('10.8.0.1') }}"
|
||||
|
||||
- name: Optionally load wireguard secrets from vault
|
||||
include_vars:
|
||||
|
||||
102
deployment/ansible/playbooks/sync-code.yml
Normal file
102
deployment/ansible/playbooks/sync-code.yml
Normal file
@@ -0,0 +1,102 @@
|
||||
---
|
||||
- name: Sync Code from Git Repository to Application Container
|
||||
hosts: production
|
||||
gather_facts: yes
|
||||
become: no
|
||||
|
||||
vars:
|
||||
# git_repository_url and git_branch are defined in group_vars/production.yml
|
||||
# Can be overridden via -e flag if needed
|
||||
git_repository_url: "{{ git_repo_url | default(git_repository_url_default) }}"
|
||||
git_branch: "{{ git_branch | default(git_branch_default) }}"
|
||||
|
||||
pre_tasks:
|
||||
- name: Optionally load secrets from vault
|
||||
include_vars:
|
||||
file: "{{ playbook_dir }}/../secrets/production.vault.yml"
|
||||
no_log: yes
|
||||
ignore_errors: yes
|
||||
delegate_to: localhost
|
||||
become: no
|
||||
|
||||
tasks:
|
||||
- name: Verify application stack directory exists
|
||||
stat:
|
||||
path: "{{ app_stack_path }}"
|
||||
register: app_stack_dir
|
||||
|
||||
- name: Fail if application stack directory doesn't exist
|
||||
fail:
|
||||
msg: "Application stack directory not found at {{ app_stack_path }}"
|
||||
when: not app_stack_dir.stat.exists
|
||||
|
||||
- name: Check if docker-compose.yml exists
|
||||
stat:
|
||||
path: "{{ app_stack_path }}/docker-compose.yml"
|
||||
register: compose_file_exists
|
||||
|
||||
- name: Fail if docker-compose.yml doesn't exist
|
||||
fail:
|
||||
msg: "docker-compose.yml not found. Run setup-infrastructure.yml first."
|
||||
when: not compose_file_exists.stat.exists
|
||||
|
||||
- name: Read current .env file
|
||||
slurp:
|
||||
src: "{{ app_stack_path }}/.env"
|
||||
register: env_file_content
|
||||
failed_when: false
|
||||
changed_when: false
|
||||
|
||||
- name: Check if Git configuration exists in .env
|
||||
set_fact:
|
||||
has_git_config: "{{ env_file_content.content | b64decode | regex_search('GIT_REPOSITORY_URL=') is not none }}"
|
||||
when: env_file_content.content is defined
|
||||
|
||||
- name: Update .env with Git configuration
|
||||
lineinfile:
|
||||
path: "{{ app_stack_path }}/.env"
|
||||
regexp: "{{ item.regex }}"
|
||||
line: "{{ item.line }}"
|
||||
state: present
|
||||
loop:
|
||||
- { regex: '^GIT_REPOSITORY_URL=', line: 'GIT_REPOSITORY_URL={{ git_repository_url }}' }
|
||||
- { regex: '^GIT_BRANCH=', line: 'GIT_BRANCH={{ git_branch }}' }
|
||||
- { regex: '^GIT_TOKEN=', line: 'GIT_TOKEN={{ git_token | default("") }}' }
|
||||
- { regex: '^GIT_USERNAME=', line: 'GIT_USERNAME={{ git_username | default("") }}' }
|
||||
- { regex: '^GIT_PASSWORD=', line: 'GIT_PASSWORD={{ git_password | default("") }}' }
|
||||
when: not has_git_config | default(true)
|
||||
|
||||
- name: Restart application container to trigger Git pull
|
||||
shell: |
|
||||
cd {{ app_stack_path }}
|
||||
docker compose restart app
|
||||
args:
|
||||
executable: /bin/bash
|
||||
register: container_restart
|
||||
|
||||
- name: Wait for container to be ready
|
||||
wait_for:
|
||||
timeout: 60
|
||||
when: container_restart.changed
|
||||
|
||||
- name: Check container logs for Git operations
|
||||
shell: |
|
||||
cd {{ app_stack_path }}
|
||||
docker compose logs app --tail 50 | grep -E "(Git|Clone|Pull|✅|❌)" || echo "No Git-related logs found"
|
||||
args:
|
||||
executable: /bin/bash
|
||||
register: git_logs
|
||||
changed_when: false
|
||||
|
||||
- name: Display Git sync result
|
||||
debug:
|
||||
msg:
|
||||
- "=== Code Sync Summary ==="
|
||||
- "Repository: {{ git_repository_url }}"
|
||||
- "Branch: {{ git_branch }}"
|
||||
- "Container restarted: {{ 'Yes' if container_restart.changed else 'No' }}"
|
||||
- ""
|
||||
- "Git Logs:"
|
||||
- "{{ git_logs.stdout }}"
|
||||
- ""
|
||||
- "Next: Check application logs to verify code was synced"
|
||||
62
deployment/ansible/playbooks/tasks/check-health.yml
Normal file
62
deployment/ansible/playbooks/tasks/check-health.yml
Normal file
@@ -0,0 +1,62 @@
|
||||
---
|
||||
# Check Container Health Status
|
||||
- name: Check nginx container logs
|
||||
shell: |
|
||||
docker logs nginx --tail 50 2>&1
|
||||
args:
|
||||
executable: /bin/bash
|
||||
register: nginx_logs
|
||||
failed_when: false
|
||||
|
||||
- name: Display nginx logs
|
||||
debug:
|
||||
msg: "{{ nginx_logs.stdout_lines }}"
|
||||
|
||||
- name: Test nginx health check manually
|
||||
shell: |
|
||||
docker exec nginx wget --spider -q http://localhost/health 2>&1 || echo "Health check failed"
|
||||
args:
|
||||
executable: /bin/bash
|
||||
register: nginx_health_test
|
||||
failed_when: false
|
||||
|
||||
- name: Display nginx health check result
|
||||
debug:
|
||||
msg: "{{ nginx_health_test.stdout }}"
|
||||
|
||||
- name: Check queue-worker container logs
|
||||
shell: |
|
||||
docker logs queue-worker --tail 50 2>&1
|
||||
args:
|
||||
executable: /bin/bash
|
||||
register: queue_worker_logs
|
||||
failed_when: false
|
||||
|
||||
- name: Display queue-worker logs
|
||||
debug:
|
||||
msg: "{{ queue_worker_logs.stdout_lines }}"
|
||||
|
||||
- name: Check scheduler container logs
|
||||
shell: |
|
||||
docker logs scheduler --tail 50 2>&1
|
||||
args:
|
||||
executable: /bin/bash
|
||||
register: scheduler_logs
|
||||
failed_when: false
|
||||
|
||||
- name: Display scheduler logs
|
||||
debug:
|
||||
msg: "{{ scheduler_logs.stdout_lines }}"
|
||||
|
||||
- name: Check container status
|
||||
shell: |
|
||||
cd {{ app_stack_path }}
|
||||
docker compose ps
|
||||
args:
|
||||
executable: /bin/bash
|
||||
register: container_status
|
||||
failed_when: false
|
||||
|
||||
- name: Display container status
|
||||
debug:
|
||||
msg: "{{ container_status.stdout_lines }}"
|
||||
73
deployment/ansible/playbooks/tasks/diagnose-404.yml
Normal file
73
deployment/ansible/playbooks/tasks/diagnose-404.yml
Normal file
@@ -0,0 +1,73 @@
|
||||
---
|
||||
# Diagnose 404 Errors
|
||||
- name: Check nginx logs for errors
|
||||
shell: |
|
||||
docker logs nginx --tail 100 2>&1
|
||||
args:
|
||||
executable: /bin/bash
|
||||
register: nginx_logs
|
||||
failed_when: false
|
||||
|
||||
- name: Display nginx logs
|
||||
debug:
|
||||
msg: "{{ nginx_logs.stdout_lines }}"
|
||||
|
||||
- name: Check app container logs
|
||||
shell: |
|
||||
docker logs app --tail 100 2>&1
|
||||
args:
|
||||
executable: /bin/bash
|
||||
register: app_logs
|
||||
failed_when: false
|
||||
|
||||
- name: Display app container logs
|
||||
debug:
|
||||
msg: "{{ app_logs.stdout_lines }}"
|
||||
|
||||
- name: Test nginx health endpoint directly
|
||||
shell: |
|
||||
docker exec nginx wget -q -O - http://127.0.0.1/health 2>&1 || echo "Health check failed"
|
||||
args:
|
||||
executable: /bin/bash
|
||||
register: nginx_health_test
|
||||
failed_when: false
|
||||
|
||||
- name: Display nginx health check result
|
||||
debug:
|
||||
msg: "{{ nginx_health_test.stdout }}"
|
||||
|
||||
- name: Check nginx configuration
|
||||
shell: |
|
||||
docker exec nginx cat /etc/nginx/conf.d/default.conf 2>&1
|
||||
args:
|
||||
executable: /bin/bash
|
||||
register: nginx_config
|
||||
failed_when: false
|
||||
|
||||
- name: Display nginx configuration
|
||||
debug:
|
||||
msg: "{{ nginx_config.stdout_lines }}"
|
||||
|
||||
- name: Check if app container has files in /var/www/html
|
||||
shell: |
|
||||
docker exec app ls -la /var/www/html/ 2>&1 | head -20
|
||||
args:
|
||||
executable: /bin/bash
|
||||
register: app_files
|
||||
failed_when: false
|
||||
|
||||
- name: Display app container files
|
||||
debug:
|
||||
msg: "{{ app_files.stdout_lines }}"
|
||||
|
||||
- name: Check container network connectivity
|
||||
shell: |
|
||||
docker exec nginx ping -c 1 app 2>&1 | head -5
|
||||
args:
|
||||
executable: /bin/bash
|
||||
register: network_check
|
||||
failed_when: false
|
||||
|
||||
- name: Display network connectivity
|
||||
debug:
|
||||
msg: "{{ network_check.stdout }}"
|
||||
71
deployment/ansible/playbooks/tasks/fix-health-checks.yml
Normal file
71
deployment/ansible/playbooks/tasks/fix-health-checks.yml
Normal file
@@ -0,0 +1,71 @@
|
||||
---
|
||||
# Fix Container Health Checks
|
||||
- name: Check if application stack directory exists
|
||||
stat:
|
||||
path: "{{ app_stack_path }}"
|
||||
register: app_stack_dir
|
||||
|
||||
- name: Fail if application stack directory doesn't exist
|
||||
fail:
|
||||
msg: "Application stack directory not found at {{ app_stack_path }}"
|
||||
when: not app_stack_dir.stat.exists
|
||||
|
||||
- name: Copy updated docker-compose.yml to production
|
||||
copy:
|
||||
src: "{{ playbook_dir }}/../../stacks/application/docker-compose.yml"
|
||||
dest: "{{ app_stack_path }}/docker-compose.yml"
|
||||
owner: "{{ ansible_user }}"
|
||||
group: "{{ ansible_user }}"
|
||||
mode: '0644'
|
||||
register: compose_updated
|
||||
|
||||
- name: Recreate containers with new health checks
|
||||
shell: |
|
||||
cd {{ app_stack_path }}
|
||||
docker compose up -d --force-recreate nginx queue-worker scheduler
|
||||
args:
|
||||
executable: /bin/bash
|
||||
when: compose_updated.changed
|
||||
register: containers_recreated
|
||||
|
||||
- name: Wait for containers to be healthy
|
||||
shell: |
|
||||
cd {{ app_stack_path }}
|
||||
timeout=120
|
||||
elapsed=0
|
||||
while [ $elapsed -lt $timeout ]; do
|
||||
healthy=$(docker compose ps --format json | jq -r '[.[] | select(.Name=="nginx" or .Name=="queue-worker" or .Name=="scheduler") | .Health] | all(.=="healthy" or .=="")')
|
||||
if [ "$healthy" = "true" ]; then
|
||||
echo "All containers are healthy"
|
||||
exit 0
|
||||
fi
|
||||
sleep 5
|
||||
elapsed=$((elapsed + 5))
|
||||
done
|
||||
echo "Timeout waiting for containers to become healthy"
|
||||
docker compose ps
|
||||
exit 1
|
||||
args:
|
||||
executable: /bin/bash
|
||||
register: health_wait
|
||||
failed_when: false
|
||||
changed_when: false
|
||||
|
||||
- name: Check final container status
|
||||
shell: |
|
||||
cd {{ app_stack_path }}
|
||||
docker compose ps
|
||||
args:
|
||||
executable: /bin/bash
|
||||
register: final_status
|
||||
|
||||
- name: Display final container status
|
||||
debug:
|
||||
msg: "{{ final_status.stdout_lines }}"
|
||||
|
||||
- name: Display summary
|
||||
debug:
|
||||
msg:
|
||||
- "=== Health Check Fix Complete ==="
|
||||
- "Containers recreated: {{ 'Yes' if containers_recreated.changed else 'No (no changes)' }}"
|
||||
- "Health wait result: {{ 'SUCCESS' if health_wait.rc == 0 else 'TIMEOUT or ERROR - check logs' }}"
|
||||
71
deployment/ansible/playbooks/tasks/fix-nginx-404.yml
Normal file
71
deployment/ansible/playbooks/tasks/fix-nginx-404.yml
Normal file
@@ -0,0 +1,71 @@
|
||||
---
|
||||
# Fix Nginx 404 by setting up shared app-code volume
|
||||
- name: Check if application stack directory exists
|
||||
stat:
|
||||
path: "{{ app_stack_path }}"
|
||||
register: app_stack_dir
|
||||
|
||||
- name: Fail if application stack directory doesn't exist
|
||||
fail:
|
||||
msg: "Application stack directory not found at {{ app_stack_path }}"
|
||||
when: not app_stack_dir.stat.exists
|
||||
|
||||
- name: Copy updated docker-compose.yml to production
|
||||
copy:
|
||||
src: "{{ playbook_dir }}/../../stacks/application/docker-compose.yml"
|
||||
dest: "{{ app_stack_path }}/docker-compose.yml"
|
||||
owner: "{{ ansible_user }}"
|
||||
group: "{{ ansible_user }}"
|
||||
mode: '0644'
|
||||
register: compose_updated
|
||||
|
||||
- name: Initialize app-code volume with files from app image
|
||||
shell: |
|
||||
# Stop containers first
|
||||
cd {{ app_stack_path }}
|
||||
docker compose down nginx || true
|
||||
|
||||
# Create and initialize app-code volume
|
||||
docker volume create app-code || true
|
||||
|
||||
# Copy files from app image to volume using temporary container
|
||||
docker run --rm \
|
||||
-v app-code:/target \
|
||||
{{ app_image_external }}:latest \
|
||||
sh -c "cp -r /var/www/html/* /target/ 2>/dev/null || true"
|
||||
args:
|
||||
executable: /bin/bash
|
||||
register: volume_init
|
||||
changed_when: true
|
||||
failed_when: false
|
||||
|
||||
- name: Start containers
|
||||
shell: |
|
||||
cd {{ app_stack_path }}
|
||||
docker compose up -d
|
||||
args:
|
||||
executable: /bin/bash
|
||||
register: containers_started
|
||||
|
||||
- name: Wait for containers to be healthy
|
||||
pause:
|
||||
seconds: 15
|
||||
|
||||
- name: Check container status
|
||||
shell: |
|
||||
cd {{ app_stack_path }}
|
||||
docker compose ps
|
||||
args:
|
||||
executable: /bin/bash
|
||||
register: final_status
|
||||
|
||||
- name: Display container status
|
||||
debug:
|
||||
msg: "{{ final_status.stdout_lines }}"
|
||||
|
||||
- name: Display summary
|
||||
debug:
|
||||
msg:
|
||||
- "=== Nginx 404 Fix Complete ==="
|
||||
- "Volume initialized: {{ 'Yes' if volume_init.changed else 'No' }}"
|
||||
- "Containers restarted: {{ 'Yes' if containers_started.changed else 'No' }}"
|
||||
47
deployment/ansible/playbooks/troubleshoot.yml
Normal file
47
deployment/ansible/playbooks/troubleshoot.yml
Normal file
@@ -0,0 +1,47 @@
|
||||
---
|
||||
- name: Application Troubleshooting
|
||||
hosts: production
|
||||
gather_facts: yes
|
||||
become: no
|
||||
|
||||
# All variables are now defined in group_vars/production.yml
|
||||
|
||||
tasks:
|
||||
- name: Check container health
|
||||
include_tasks: tasks/check-health.yml
|
||||
tags: ['health', 'check', 'all']
|
||||
|
||||
- name: Diagnose 404 errors
|
||||
include_tasks: tasks/diagnose-404.yml
|
||||
tags: ['404', 'diagnose', 'all']
|
||||
|
||||
- name: Fix container health checks
|
||||
include_tasks: tasks/fix-health-checks.yml
|
||||
tags: ['health', 'fix', 'all']
|
||||
|
||||
- name: Fix nginx 404
|
||||
include_tasks: tasks/fix-nginx-404.yml
|
||||
tags: ['nginx', '404', 'fix', 'all']
|
||||
|
||||
- name: Display usage information
|
||||
debug:
|
||||
msg:
|
||||
- "=== Troubleshooting Playbook ==="
|
||||
- ""
|
||||
- "Usage examples:"
|
||||
- " # Check health only:"
|
||||
- " ansible-playbook troubleshoot.yml --tags health,check"
|
||||
- ""
|
||||
- " # Diagnose 404 only:"
|
||||
- " ansible-playbook troubleshoot.yml --tags 404,diagnose"
|
||||
- ""
|
||||
- " # Fix health checks:"
|
||||
- " ansible-playbook troubleshoot.yml --tags health,fix"
|
||||
- ""
|
||||
- " # Fix nginx 404:"
|
||||
- " ansible-playbook troubleshoot.yml --tags nginx,404,fix"
|
||||
- ""
|
||||
- " # Run all checks:"
|
||||
- " ansible-playbook troubleshoot.yml --tags all"
|
||||
when: true
|
||||
tags: ['never']
|
||||
49
deployment/ansible/playbooks/update-gitea-config.yml
Normal file
49
deployment/ansible/playbooks/update-gitea-config.yml
Normal file
@@ -0,0 +1,49 @@
|
||||
---
|
||||
- name: Update Gitea Configuration and Restart
|
||||
hosts: production
|
||||
become: no
|
||||
gather_facts: yes
|
||||
|
||||
vars:
|
||||
gitea_stack_path: "{{ stacks_base_path }}/gitea"
|
||||
|
||||
tasks:
|
||||
- name: Copy updated docker-compose.yml to production server
|
||||
copy:
|
||||
src: "{{ playbook_dir }}/../../stacks/gitea/docker-compose.yml"
|
||||
dest: "{{ gitea_stack_path }}/docker-compose.yml"
|
||||
owner: "{{ ansible_user }}"
|
||||
group: "{{ ansible_user }}"
|
||||
mode: '0644'
|
||||
|
||||
- name: Restart Gitea stack with updated configuration
|
||||
community.docker.docker_compose_v2:
|
||||
project_src: "{{ gitea_stack_path }}"
|
||||
state: present
|
||||
pull: never
|
||||
recreate: always
|
||||
remove_orphans: no
|
||||
register: gitea_restart
|
||||
|
||||
- name: Wait for Gitea to be ready
|
||||
wait_for:
|
||||
timeout: 60
|
||||
when: gitea_restart.changed
|
||||
|
||||
- name: Verify Gitea Actions configuration
|
||||
shell: |
|
||||
docker exec gitea cat /data/gitea/conf/app.ini 2>/dev/null | grep -A 3 "\[actions\]" || echo "Config not accessible"
|
||||
register: gitea_config
|
||||
changed_when: false
|
||||
ignore_errors: yes
|
||||
|
||||
- name: Display Gitea Actions configuration
|
||||
debug:
|
||||
msg:
|
||||
- "=== Gitea Configuration Update Complete ==="
|
||||
- "Container restarted: {{ 'Yes' if gitea_restart.changed else 'No' }}"
|
||||
- ""
|
||||
- "Current Actions configuration:"
|
||||
- "{{ gitea_config.stdout if gitea_config.stdout else 'Could not read config (container may still be starting)' }}"
|
||||
- ""
|
||||
- "The DEFAULT_ACTIONS_URL should now point to your Gitea instance instead of GitHub."
|
||||
41
deployment/ansible/scripts/set-git-credentials.sh
Executable file
41
deployment/ansible/scripts/set-git-credentials.sh
Executable file
@@ -0,0 +1,41 @@
|
||||
#!/bin/bash
|
||||
# Script to set Git credentials in production .env file
|
||||
|
||||
set -e
|
||||
|
||||
cd "$(dirname "$0")/.."
|
||||
|
||||
echo "Setting Git credentials in production .env file..."
|
||||
echo ""
|
||||
echo "Choose authentication method:"
|
||||
echo "1) Personal Access Token (recommended)"
|
||||
echo "2) Username/Password"
|
||||
read -p "Enter choice (1 or 2): " choice
|
||||
|
||||
case $choice in
|
||||
1)
|
||||
read -p "Enter Gitea Personal Access Token: " token
|
||||
ansible production -i inventory/production.yml -m lineinfile \
|
||||
-a "path=~/deployment/stacks/application/.env regexp='^GIT_TOKEN=' line='GIT_TOKEN=$token' state=present" 2>&1
|
||||
echo "✅ GIT_TOKEN set successfully"
|
||||
;;
|
||||
2)
|
||||
read -p "Enter Gitea Username: " username
|
||||
read -s -p "Enter Gitea Password: " password
|
||||
echo ""
|
||||
ansible production -i inventory/production.yml -m lineinfile \
|
||||
-a "path=~/deployment/stacks/application/.env regexp='^GIT_USERNAME=' line='GIT_USERNAME=$username' state=present" 2>&1
|
||||
ansible production -i inventory/production.yml -m lineinfile \
|
||||
-a "path=~/deployment/stacks/application/.env regexp='^GIT_PASSWORD=' line='GIT_PASSWORD=$password' state=present" 2>&1
|
||||
echo "✅ GIT_USERNAME and GIT_PASSWORD set successfully"
|
||||
;;
|
||||
*)
|
||||
echo "❌ Invalid choice"
|
||||
exit 1
|
||||
;;
|
||||
esac
|
||||
|
||||
echo ""
|
||||
echo "Next steps:"
|
||||
echo "1. Restart the nginx container: docker compose restart nginx"
|
||||
echo "2. Check logs: docker compose logs nginx"
|
||||
@@ -21,6 +21,13 @@ vault_mail_password: "change-me-mail-password"
|
||||
vault_docker_registry_username: "gitea-user"
|
||||
vault_docker_registry_password: "change-me-registry-password"
|
||||
|
||||
# Git Repository Credentials (for code cloning in containers)
|
||||
# Option 1: Use Personal Access Token (recommended)
|
||||
vault_git_token: "change-me-gitea-personal-access-token"
|
||||
# Option 2: Use username/password (less secure)
|
||||
# vault_git_username: "your-gitea-username"
|
||||
# vault_git_password: "your-gitea-password"
|
||||
|
||||
# Optional: Additional Secrets
|
||||
vault_encryption_key: "change-me-encryption-key"
|
||||
vault_session_secret: "change-me-session-secret"
|
||||
@@ -28,3 +35,7 @@ vault_session_secret: "change-me-session-secret"
|
||||
# Monitoring Stack Credentials
|
||||
vault_grafana_admin_password: "change-me-secure-grafana-password"
|
||||
vault_prometheus_password: "change-me-secure-prometheus-password"
|
||||
|
||||
# MinIO Object Storage Credentials
|
||||
vault_minio_root_user: "minioadmin"
|
||||
vault_minio_root_password: "change-me-secure-minio-password"
|
||||
|
||||
20
deployment/ansible/show-nginx-volumes.sh
Executable file
20
deployment/ansible/show-nginx-volumes.sh
Executable file
@@ -0,0 +1,20 @@
|
||||
#!/bin/bash
|
||||
# Script to show nginx volumes configuration from docker compose
|
||||
|
||||
set -e
|
||||
|
||||
cd "$(dirname "$0")"
|
||||
|
||||
echo "Fetching nginx volumes configuration..."
|
||||
echo ""
|
||||
|
||||
# Solution 1: Use sed to extract the full nginx block, then grep for volumes
|
||||
timeout 90 ansible production -i inventory/production.yml -m shell -a \
|
||||
"cd ~/deployment/stacks/application && docker compose config 2>&1 | sed -n '/^ nginx:/,/^ [a-z]/p' | grep -A 10 'volumes:'" \
|
||||
2>&1 || {
|
||||
echo ""
|
||||
echo "⚠️ Alternative method: Using larger grep context..."
|
||||
timeout 90 ansible production -i inventory/production.yml -m shell -a \
|
||||
"cd ~/deployment/stacks/application && docker compose config 2>&1 | grep -A 50 'nginx:' | grep -A 10 'volumes:'" \
|
||||
2>&1
|
||||
}
|
||||
@@ -43,7 +43,7 @@ RATE_LIMIT_WINDOW={{ rate_limit_window | default('60') }}
|
||||
ADMIN_ALLOWED_IPS={{ admin_allowed_ips | default('127.0.0.1,::1') }}
|
||||
|
||||
# App domain
|
||||
APP_DOMAIN={{ app_domain | default('michaelschiemer.de') }}
|
||||
APP_DOMAIN={{ app_domain }}
|
||||
# Production Environment Configuration
|
||||
# Generated by Ansible - DO NOT EDIT MANUALLY
|
||||
# Last Updated: {{ ansible_date_time.iso8601 }}
|
||||
|
||||
@@ -5,19 +5,19 @@
|
||||
TZ={{ timezone | default('Europe/Berlin') }}
|
||||
|
||||
# Application Domain
|
||||
APP_DOMAIN={{ app_domain | default('michaelschiemer.de') }}
|
||||
APP_DOMAIN={{ app_domain }}
|
||||
|
||||
# Application Settings
|
||||
APP_ENV={{ app_env | default('production') }}
|
||||
APP_DEBUG={{ app_debug | default('false') }}
|
||||
APP_URL=https://{{ app_domain | default('michaelschiemer.de') }}
|
||||
APP_URL=https://{{ app_domain }}
|
||||
|
||||
# Database Configuration
|
||||
# Using PostgreSQL from postgres stack
|
||||
DB_HOST=postgres
|
||||
DB_PORT={{ db_port | default('5432') }}
|
||||
DB_NAME={{ db_name | default('michaelschiemer') }}
|
||||
DB_USER={{ db_user | default('postgres') }}
|
||||
DB_NAME={{ db_name | default(db_name_default) }}
|
||||
DB_USER={{ db_user | default(db_user_default) }}
|
||||
DB_PASS={{ db_password }}
|
||||
|
||||
# Redis Configuration
|
||||
@@ -37,4 +37,11 @@ QUEUE_DRIVER={{ queue_driver | default('redis') }}
|
||||
QUEUE_CONNECTION={{ queue_connection | default('default') }}
|
||||
QUEUE_WORKER_SLEEP={{ queue_worker_sleep | default('3') }}
|
||||
QUEUE_WORKER_TRIES={{ queue_worker_tries | default('3') }}
|
||||
QUEUE_WORKER_TIMEOUT={{ queue_worker_timeout | default('60') }}
|
||||
QUEUE_WORKER_TIMEOUT={{ queue_worker_timeout | default('60') }}
|
||||
|
||||
# Git Repository Configuration (optional - if set, container will clone/pull code on start)
|
||||
GIT_REPOSITORY_URL={{ git_repository_url | default('') }}
|
||||
GIT_BRANCH={{ git_branch | default('main') }}
|
||||
GIT_TOKEN={{ git_token | default('') }}
|
||||
GIT_USERNAME={{ git_username | default('') }}
|
||||
GIT_PASSWORD={{ git_password | default('') }}
|
||||
81
deployment/ansible/templates/gitea-app.ini.j2
Normal file
81
deployment/ansible/templates/gitea-app.ini.j2
Normal file
@@ -0,0 +1,81 @@
|
||||
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
|
||||
;; Gitea Configuration File
|
||||
;; Generated by Ansible - DO NOT EDIT MANUALLY
|
||||
;; This file is based on the official Gitea example configuration
|
||||
;; https://github.com/go-gitea/gitea/blob/main/custom/conf/app.example.ini
|
||||
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
|
||||
|
||||
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
|
||||
;; General Settings
|
||||
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
|
||||
APP_NAME = Gitea: Git with a cup of tea
|
||||
RUN_MODE = prod
|
||||
|
||||
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
|
||||
;; Server Configuration
|
||||
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
|
||||
[server]
|
||||
PROTOCOL = http
|
||||
DOMAIN = {{ gitea_domain }}
|
||||
HTTP_ADDR = 0.0.0.0
|
||||
HTTP_PORT = 3000
|
||||
ROOT_URL = https://{{ gitea_domain }}/
|
||||
PUBLIC_URL_DETECTION = auto
|
||||
|
||||
;; SSH Configuration
|
||||
DISABLE_SSH = false
|
||||
START_SSH_SERVER = true
|
||||
SSH_DOMAIN = {{ gitea_domain }}
|
||||
SSH_PORT = 22
|
||||
SSH_LISTEN_PORT = 22
|
||||
|
||||
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
|
||||
;; Database Configuration
|
||||
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
|
||||
[database]
|
||||
DB_TYPE = postgres
|
||||
HOST = postgres:5432
|
||||
NAME = {{ postgres_db | default('gitea') }}
|
||||
USER = {{ postgres_user | default('gitea') }}
|
||||
PASSWD = {{ postgres_password | default('gitea_password') }}
|
||||
SSL_MODE = disable
|
||||
|
||||
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
|
||||
;; Cache Configuration
|
||||
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
|
||||
[cache]
|
||||
ENABLED = false
|
||||
ADAPTER = memory
|
||||
|
||||
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
|
||||
;; Session Configuration
|
||||
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
|
||||
[session]
|
||||
PROVIDER = file
|
||||
PROVIDER_CONFIG = data/sessions
|
||||
COOKIE_SECURE = true
|
||||
COOKIE_NAME = i_like_gitea
|
||||
GC_INTERVAL_TIME = 86400
|
||||
SESSION_LIFE_TIME = 86400
|
||||
|
||||
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
|
||||
;; Queue Configuration
|
||||
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
|
||||
[queue]
|
||||
TYPE = channel
|
||||
|
||||
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
|
||||
;; Service Configuration
|
||||
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
|
||||
[service]
|
||||
DISABLE_REGISTRATION = {{ disable_registration | default(true) | lower }}
|
||||
|
||||
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
|
||||
;; Actions Configuration
|
||||
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
|
||||
[actions]
|
||||
ENABLED = true
|
||||
;; Use "self" to use the current Gitea instance for actions (not GitHub)
|
||||
;; Do NOT set DEFAULT_ACTIONS_URL to a custom URL - it's not supported
|
||||
;; Leaving it unset or setting to "self" will use the current instance
|
||||
;DEFAULT_ACTIONS_URL = self
|
||||
16
deployment/ansible/templates/minio.env.j2
Normal file
16
deployment/ansible/templates/minio.env.j2
Normal file
@@ -0,0 +1,16 @@
|
||||
# MinIO Object Storage Stack Environment Configuration
|
||||
# Generated by Ansible - DO NOT EDIT MANUALLY
|
||||
|
||||
# Timezone
|
||||
TZ={{ timezone | default('Europe/Berlin') }}
|
||||
|
||||
# MinIO Root Credentials
|
||||
MINIO_ROOT_USER={{ minio_root_user }}
|
||||
MINIO_ROOT_PASSWORD={{ minio_root_password }}
|
||||
|
||||
# Domain Configuration
|
||||
# API endpoint (S3-compatible)
|
||||
MINIO_API_DOMAIN={{ minio_api_domain }}
|
||||
|
||||
# Console endpoint (Web UI)
|
||||
MINIO_CONSOLE_DOMAIN={{ minio_console_domain }}
|
||||
@@ -2,7 +2,7 @@
|
||||
# Generated by Ansible - DO NOT EDIT MANUALLY
|
||||
|
||||
# Domain Configuration
|
||||
DOMAIN={{ app_domain | default('michaelschiemer.de') }}
|
||||
DOMAIN={{ app_domain }}
|
||||
|
||||
# Grafana Configuration
|
||||
GRAFANA_ADMIN_USER={{ grafana_admin_user | default('admin') }}
|
||||
|
||||
17
deployment/ansible/test-nginx-volumes.sh
Executable file
17
deployment/ansible/test-nginx-volumes.sh
Executable file
@@ -0,0 +1,17 @@
|
||||
#!/bin/bash
|
||||
# Test script to check nginx volumes configuration with timeout
|
||||
|
||||
set -e
|
||||
|
||||
cd "$(dirname "$0")"
|
||||
|
||||
echo "Testing nginx volumes configuration..."
|
||||
echo "Timeout: 90 seconds"
|
||||
echo ""
|
||||
|
||||
timeout 90 ansible production -i inventory/production.yml -m shell -a "cd ~/deployment/stacks/application && docker compose config 2>&1 | sed -n '/^ nginx:/,/^ [a-z]/p' | grep -A 10 'volumes:'" 2>&1 || {
|
||||
echo ""
|
||||
echo "⚠️ Command timed out or failed!"
|
||||
echo "Checking if docker compose is working..."
|
||||
timeout 30 ansible production -i inventory/production.yml -m shell -a "cd ~/deployment/stacks/application && docker compose config 2>&1 | head -5" || true
|
||||
}
|
||||
@@ -13,7 +13,9 @@ GITEA_RUNNER_NAME=dev-runner-01
|
||||
# Runner Labels (comma-separated)
|
||||
# Format: label:image
|
||||
# Example: ubuntu-latest:docker://node:16-bullseye
|
||||
GITEA_RUNNER_LABELS=ubuntu-latest:docker://node:16-bullseye,ubuntu-22.04:docker://node:16-bullseye,debian-latest:docker://debian:bullseye
|
||||
# php-ci: Uses optimized CI image with PHP 8.5, Composer, Ansible pre-installed
|
||||
# Build the image first: ./build-ci-image.sh
|
||||
GITEA_RUNNER_LABELS=ubuntu-latest:docker://node:16-bullseye,ubuntu-22.04:docker://node:16-bullseye,debian-latest:docker://debian:bullseye,php-ci:docker://php-ci:latest
|
||||
|
||||
# Optional: Custom Docker registry for job images
|
||||
# DOCKER_REGISTRY_MIRROR=https://registry.michaelschiemer.de
|
||||
|
||||
@@ -28,7 +28,9 @@ services:
|
||||
networks:
|
||||
- gitea-runner
|
||||
- traefik-public # Zugriff auf Registry und andere Services
|
||||
command: ["dockerd", "--host=unix:///var/run/docker.sock", "--host=tcp://0.0.0.0:2375", "--insecure-registry=94.16.110.151:5000", "--insecure-registry=registry.michaelschiemer.de:5000"]
|
||||
command: ["dockerd", "--host=unix:///var/run/docker.sock", "--host=tcp://0.0.0.0:2375", "--insecure-registry=94.16.110.151:5000", "--insecure-registry=172.25.0.1:5000", "--insecure-registry=registry:5000", "--insecure-registry=host.docker.internal:5000"]
|
||||
# HINWEIS: registry.michaelschiemer.de wird ?ber HTTPS (via Traefik) verwendet - KEINE insecure-registry n?tig!
|
||||
# Die insecure-registry Flags sind nur f?r HTTP-Fallbacks (Port 5000) gedacht
|
||||
|
||||
networks:
|
||||
gitea-runner:
|
||||
|
||||
92
deployment/gitea-runner/setup-php-ci.sh
Executable file
92
deployment/gitea-runner/setup-php-ci.sh
Executable file
@@ -0,0 +1,92 @@
|
||||
#!/bin/bash
|
||||
# Complete setup script for PHP CI image and runner registration
|
||||
# This builds the CI image, loads it into docker-dind, and updates runner labels
|
||||
|
||||
set -e
|
||||
|
||||
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
|
||||
PROJECT_ROOT="$(cd "$SCRIPT_DIR/../.." && pwd)"
|
||||
|
||||
echo "🚀 Setting up PHP CI Image for Gitea Runner"
|
||||
echo ""
|
||||
|
||||
# Step 1: Build CI Image
|
||||
echo "📦 Step 1: Building CI Docker Image..."
|
||||
cd "$PROJECT_ROOT"
|
||||
./deployment/gitea-runner/build-ci-image.sh
|
||||
|
||||
# Step 2: Load image into docker-dind
|
||||
echo ""
|
||||
echo "📥 Step 2: Loading image into docker-dind..."
|
||||
IMAGE_NAME="php-ci:latest"
|
||||
docker save "${IMAGE_NAME}" | docker exec -i gitea-runner-dind docker load
|
||||
|
||||
echo ""
|
||||
echo "✅ Image loaded into docker-dind"
|
||||
echo ""
|
||||
|
||||
# Step 3: Check current .env
|
||||
echo "📋 Step 3: Checking runner configuration..."
|
||||
cd "$SCRIPT_DIR"
|
||||
|
||||
if [ ! -f .env ]; then
|
||||
echo "⚠️ .env file not found, copying from .env.example"
|
||||
cp .env.example .env
|
||||
echo ""
|
||||
echo "⚠️ IMPORTANT: Please edit .env and add:"
|
||||
echo " - GITEA_RUNNER_REGISTRATION_TOKEN"
|
||||
echo " - Update GITEA_RUNNER_LABELS to include php-ci:docker://php-ci:latest"
|
||||
echo ""
|
||||
read -p "Press Enter after updating .env to continue..."
|
||||
fi
|
||||
|
||||
# Check if php-ci label is already in .env
|
||||
if grep -q "php-ci:docker://php-ci:latest" .env; then
|
||||
echo "✅ php-ci label already in .env"
|
||||
else
|
||||
echo "⚠️ php-ci label not found in .env"
|
||||
echo ""
|
||||
read -p "Add php-ci label to GITEA_RUNNER_LABELS? (y/N) " -n 1 -r
|
||||
echo
|
||||
if [[ $REPLY =~ ^[Yy]$ ]]; then
|
||||
# Add php-ci to labels
|
||||
sed -i 's|GITEA_RUNNER_LABELS=\(.*\)|GITEA_RUNNER_LABELS=\1,php-ci:docker://php-ci:latest|' .env
|
||||
echo "✅ Added php-ci label to .env"
|
||||
fi
|
||||
fi
|
||||
|
||||
# Step 4: Re-register runner
|
||||
echo ""
|
||||
echo "🔄 Step 4: Re-registering runner with new labels..."
|
||||
read -p "Re-register runner now? This will unregister the current runner. (y/N) " -n 1 -r
|
||||
echo
|
||||
if [[ $REPLY =~ ^[Yy]$ ]]; then
|
||||
if [ -f ./unregister.sh ]; then
|
||||
./unregister.sh
|
||||
fi
|
||||
|
||||
if [ -f ./register.sh ]; then
|
||||
./register.sh
|
||||
else
|
||||
echo "⚠️ register.sh not found, please register manually"
|
||||
fi
|
||||
else
|
||||
echo ""
|
||||
echo "ℹ️ To register manually:"
|
||||
echo " cd deployment/gitea-runner"
|
||||
echo " ./unregister.sh"
|
||||
echo " ./register.sh"
|
||||
fi
|
||||
|
||||
echo ""
|
||||
echo "✅ Setup complete!"
|
||||
echo ""
|
||||
echo "📝 Summary:"
|
||||
echo " - CI Image built: php-ci:latest"
|
||||
echo " - Image loaded into docker-dind"
|
||||
echo " - Runner labels updated (restart runner to apply)"
|
||||
echo ""
|
||||
echo "🎯 Next steps:"
|
||||
echo " 1. Verify runner is registered with php-ci label in Gitea UI"
|
||||
echo " 2. Test workflows using runs-on: php-ci"
|
||||
echo ""
|
||||
@@ -1,284 +0,0 @@
|
||||
#!/bin/bash
|
||||
set -e
|
||||
|
||||
# Deploy Application to Production
|
||||
# This script deploys application updates using Ansible
|
||||
|
||||
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
|
||||
DEPLOYMENT_DIR="$(dirname "$SCRIPT_DIR")"
|
||||
ANSIBLE_DIR="$DEPLOYMENT_DIR/ansible"
|
||||
|
||||
# Colors for output
|
||||
RED='\033[0;31m'
|
||||
GREEN='\033[0;32m'
|
||||
YELLOW='\033[1;33m'
|
||||
BLUE='\033[0;34m'
|
||||
NC='\033[0m' # No Color
|
||||
|
||||
# Function to print colored messages
|
||||
print_success() {
|
||||
echo -e "${GREEN}✅ $1${NC}"
|
||||
}
|
||||
|
||||
print_error() {
|
||||
echo -e "${RED}❌ $1${NC}"
|
||||
}
|
||||
|
||||
print_warning() {
|
||||
echo -e "${YELLOW}⚠️ $1${NC}"
|
||||
}
|
||||
|
||||
print_info() {
|
||||
echo -e "${BLUE}ℹ️ $1${NC}"
|
||||
}
|
||||
|
||||
# Show usage
|
||||
usage() {
|
||||
echo "Usage: $0 <image-tag> [options]"
|
||||
echo ""
|
||||
echo "Arguments:"
|
||||
echo " image-tag Docker image tag to deploy (e.g., sha-abc123, v1.2.3, latest)"
|
||||
echo ""
|
||||
echo "Options:"
|
||||
echo " --registry-user USER Docker registry username (default: from vault)"
|
||||
echo " --registry-pass PASS Docker registry password (default: from vault)"
|
||||
echo " --commit SHA Git commit SHA (default: auto-detect)"
|
||||
echo " --dry-run Run in check mode without making changes"
|
||||
echo " --help Show this help message"
|
||||
echo ""
|
||||
echo "Examples:"
|
||||
echo " $0 sha-abc123"
|
||||
echo " $0 v1.2.3 --commit abc123def"
|
||||
echo " $0 latest --dry-run"
|
||||
exit 1
|
||||
}
|
||||
|
||||
# Parse arguments
|
||||
IMAGE_TAG=""
|
||||
REGISTRY_USER=""
|
||||
REGISTRY_PASS=""
|
||||
GIT_COMMIT=""
|
||||
DRY_RUN=""
|
||||
|
||||
while [[ $# -gt 0 ]]; do
|
||||
case $1 in
|
||||
--registry-user)
|
||||
REGISTRY_USER="$2"
|
||||
shift 2
|
||||
;;
|
||||
--registry-pass)
|
||||
REGISTRY_PASS="$2"
|
||||
shift 2
|
||||
;;
|
||||
--commit)
|
||||
GIT_COMMIT="$2"
|
||||
shift 2
|
||||
;;
|
||||
--dry-run)
|
||||
DRY_RUN="--check"
|
||||
shift
|
||||
;;
|
||||
--help)
|
||||
usage
|
||||
;;
|
||||
*)
|
||||
if [ -z "$IMAGE_TAG" ]; then
|
||||
IMAGE_TAG="$1"
|
||||
shift
|
||||
else
|
||||
print_error "Unknown argument: $1"
|
||||
usage
|
||||
fi
|
||||
;;
|
||||
esac
|
||||
done
|
||||
|
||||
# Validate image tag
|
||||
if [ -z "$IMAGE_TAG" ]; then
|
||||
print_error "Image tag is required"
|
||||
usage
|
||||
fi
|
||||
|
||||
echo ""
|
||||
echo "🚀 Deploy Application to Production"
|
||||
echo "===================================="
|
||||
echo ""
|
||||
|
||||
# Check if running from correct directory
|
||||
if [ ! -f "$ANSIBLE_DIR/ansible.cfg" ]; then
|
||||
print_error "Error: Must run from deployment/scripts directory"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
cd "$ANSIBLE_DIR"
|
||||
|
||||
# Auto-detect git commit if not provided
|
||||
if [ -z "$GIT_COMMIT" ]; then
|
||||
if command -v git &> /dev/null && [ -d "$(git rev-parse --git-dir 2>/dev/null)" ]; then
|
||||
GIT_COMMIT=$(git rev-parse --short HEAD)
|
||||
print_info "Auto-detected git commit: $GIT_COMMIT"
|
||||
else
|
||||
GIT_COMMIT="unknown"
|
||||
print_warning "Could not auto-detect git commit"
|
||||
fi
|
||||
fi
|
||||
|
||||
# Generate deployment timestamp
|
||||
DEPLOYMENT_TIMESTAMP=$(date -u +%Y-%m-%dT%H:%M:%SZ)
|
||||
|
||||
# Check Prerequisites
|
||||
echo "Checking Prerequisites..."
|
||||
echo "------------------------"
|
||||
|
||||
# Check Ansible
|
||||
if ! command -v ansible &> /dev/null; then
|
||||
print_error "Ansible is not installed"
|
||||
exit 1
|
||||
fi
|
||||
print_success "Ansible installed"
|
||||
|
||||
# Check vault password
|
||||
if [ ! -f "$ANSIBLE_DIR/secrets/.vault_pass" ]; then
|
||||
print_error "Vault password file not found"
|
||||
exit 1
|
||||
fi
|
||||
print_success "Vault password file found"
|
||||
|
||||
# Check playbook
|
||||
if [ ! -f "$ANSIBLE_DIR/playbooks/deploy-update.yml" ]; then
|
||||
print_error "Deploy playbook not found"
|
||||
exit 1
|
||||
fi
|
||||
print_success "Deploy playbook found"
|
||||
|
||||
# Test connection
|
||||
print_info "Testing connection to production..."
|
||||
if ansible production -m ping > /dev/null 2>&1; then
|
||||
print_success "Connection successful"
|
||||
else
|
||||
print_error "Connection to production failed"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
echo ""
|
||||
|
||||
# Deployment Summary
|
||||
echo "Deployment Summary"
|
||||
echo "-----------------"
|
||||
echo " Image Tag: $IMAGE_TAG"
|
||||
echo " Git Commit: $GIT_COMMIT"
|
||||
echo " Timestamp: $DEPLOYMENT_TIMESTAMP"
|
||||
echo " Registry: git.michaelschiemer.de:5000"
|
||||
if [ -n "$DRY_RUN" ]; then
|
||||
echo " Mode: DRY RUN (no changes will be made)"
|
||||
else
|
||||
echo " Mode: PRODUCTION DEPLOYMENT"
|
||||
fi
|
||||
echo ""
|
||||
|
||||
# Confirmation
|
||||
if [ -z "$DRY_RUN" ]; then
|
||||
read -p "Proceed with deployment? (y/N): " -n 1 -r
|
||||
echo
|
||||
if [[ ! $REPLY =~ ^[Yy]$ ]]; then
|
||||
print_warning "Deployment cancelled"
|
||||
exit 0
|
||||
fi
|
||||
fi
|
||||
|
||||
echo ""
|
||||
|
||||
# Build ansible-playbook command
|
||||
ANSIBLE_CMD="ansible-playbook $ANSIBLE_DIR/playbooks/deploy-update.yml"
|
||||
ANSIBLE_CMD="$ANSIBLE_CMD --vault-password-file $ANSIBLE_DIR/secrets/.vault_pass"
|
||||
ANSIBLE_CMD="$ANSIBLE_CMD -e image_tag=$IMAGE_TAG"
|
||||
ANSIBLE_CMD="$ANSIBLE_CMD -e git_commit_sha=$GIT_COMMIT"
|
||||
ANSIBLE_CMD="$ANSIBLE_CMD -e deployment_timestamp=$DEPLOYMENT_TIMESTAMP"
|
||||
|
||||
# Add registry credentials if provided
|
||||
if [ -n "$REGISTRY_USER" ]; then
|
||||
ANSIBLE_CMD="$ANSIBLE_CMD -e docker_registry_username=$REGISTRY_USER"
|
||||
fi
|
||||
if [ -n "$REGISTRY_PASS" ]; then
|
||||
ANSIBLE_CMD="$ANSIBLE_CMD -e docker_registry_password=$REGISTRY_PASS"
|
||||
fi
|
||||
|
||||
# Add dry-run flag if set
|
||||
if [ -n "$DRY_RUN" ]; then
|
||||
ANSIBLE_CMD="$ANSIBLE_CMD $DRY_RUN"
|
||||
fi
|
||||
|
||||
# Execute deployment
|
||||
print_info "Starting deployment..."
|
||||
echo ""
|
||||
|
||||
if eval "$ANSIBLE_CMD"; then
|
||||
echo ""
|
||||
if [ -z "$DRY_RUN" ]; then
|
||||
print_success "Deployment completed successfully!"
|
||||
else
|
||||
print_success "Dry run completed successfully!"
|
||||
fi
|
||||
else
|
||||
echo ""
|
||||
print_error "Deployment failed!"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
echo ""
|
||||
|
||||
# Post-deployment checks
|
||||
if [ -z "$DRY_RUN" ]; then
|
||||
echo "Post-Deployment Checks"
|
||||
echo "---------------------"
|
||||
|
||||
SSH_KEY="$HOME/.ssh/production"
|
||||
|
||||
# Check service status
|
||||
print_info "Checking service status..."
|
||||
ssh -i "$SSH_KEY" deploy@94.16.110.151 "docker service ls --filter name=app_" || true
|
||||
echo ""
|
||||
|
||||
# Show recent logs
|
||||
print_info "Recent application logs:"
|
||||
ssh -i "$SSH_KEY" deploy@94.16.110.151 "docker service logs --tail 20 app_app" || true
|
||||
echo ""
|
||||
|
||||
# Health check URL
|
||||
HEALTH_URL="https://michaelschiemer.de/health"
|
||||
print_info "Health check URL: $HEALTH_URL"
|
||||
echo ""
|
||||
|
||||
# Summary
|
||||
echo "✅ Deployment Complete!"
|
||||
echo "======================"
|
||||
echo ""
|
||||
echo "Deployed:"
|
||||
echo " Image: git.michaelschiemer.de:5000/framework:$IMAGE_TAG"
|
||||
echo " Commit: $GIT_COMMIT"
|
||||
echo " Timestamp: $DEPLOYMENT_TIMESTAMP"
|
||||
echo ""
|
||||
echo "Next Steps:"
|
||||
echo ""
|
||||
echo "1. Monitor application:"
|
||||
echo " ssh -i $SSH_KEY deploy@94.16.110.151 'docker service logs -f app_app'"
|
||||
echo ""
|
||||
echo "2. Check service status:"
|
||||
echo " ssh -i $SSH_KEY deploy@94.16.110.151 'docker service ps app_app'"
|
||||
echo ""
|
||||
echo "3. Test application:"
|
||||
echo " curl $HEALTH_URL"
|
||||
echo ""
|
||||
echo "4. Rollback if needed:"
|
||||
echo " $SCRIPT_DIR/rollback.sh"
|
||||
echo ""
|
||||
else
|
||||
echo "Dry Run Summary"
|
||||
echo "==============="
|
||||
echo ""
|
||||
echo "This was a dry run. No changes were made to production."
|
||||
echo ""
|
||||
echo "To deploy for real, run:"
|
||||
echo " $0 $IMAGE_TAG"
|
||||
echo ""
|
||||
fi
|
||||
@@ -1,330 +0,0 @@
|
||||
#!/bin/bash
|
||||
set -e
|
||||
|
||||
# Rollback Application Deployment
|
||||
# This script rolls back to a previous deployment using Ansible
|
||||
|
||||
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
|
||||
DEPLOYMENT_DIR="$(dirname "$SCRIPT_DIR")"
|
||||
ANSIBLE_DIR="$DEPLOYMENT_DIR/ansible"
|
||||
|
||||
# Colors for output
|
||||
RED='\033[0;31m'
|
||||
GREEN='\033[0;32m'
|
||||
YELLOW='\033[1;33m'
|
||||
BLUE='\033[0;34m'
|
||||
NC='\033[0m' # No Color
|
||||
|
||||
# Function to print colored messages
|
||||
print_success() {
|
||||
echo -e "${GREEN}✅ $1${NC}"
|
||||
}
|
||||
|
||||
print_error() {
|
||||
echo -e "${RED}❌ $1${NC}"
|
||||
}
|
||||
|
||||
print_warning() {
|
||||
echo -e "${YELLOW}⚠️ $1${NC}"
|
||||
}
|
||||
|
||||
print_info() {
|
||||
echo -e "${BLUE}ℹ️ $1${NC}"
|
||||
}
|
||||
|
||||
# Show usage
|
||||
usage() {
|
||||
echo "Usage: $0 [version] [options]"
|
||||
echo ""
|
||||
echo "Arguments:"
|
||||
echo " version Backup version to rollback to (e.g., 2025-01-28T15-30-00)"
|
||||
echo " If not provided, will rollback to previous version"
|
||||
echo ""
|
||||
echo "Options:"
|
||||
echo " --list List available backup versions"
|
||||
echo " --dry-run Run in check mode without making changes"
|
||||
echo " --help Show this help message"
|
||||
echo ""
|
||||
echo "Examples:"
|
||||
echo " $0 # Rollback to previous version"
|
||||
echo " $0 2025-01-28T15-30-00 # Rollback to specific version"
|
||||
echo " $0 --list # List available backups"
|
||||
echo " $0 2025-01-28T15-30-00 --dry-run # Test rollback"
|
||||
exit 1
|
||||
}
|
||||
|
||||
# Parse arguments
|
||||
ROLLBACK_VERSION=""
|
||||
LIST_BACKUPS=false
|
||||
DRY_RUN=""
|
||||
|
||||
while [[ $# -gt 0 ]]; do
|
||||
case $1 in
|
||||
--list)
|
||||
LIST_BACKUPS=true
|
||||
shift
|
||||
;;
|
||||
--dry-run)
|
||||
DRY_RUN="--check"
|
||||
shift
|
||||
;;
|
||||
--help)
|
||||
usage
|
||||
;;
|
||||
*)
|
||||
if [ -z "$ROLLBACK_VERSION" ]; then
|
||||
ROLLBACK_VERSION="$1"
|
||||
shift
|
||||
else
|
||||
print_error "Unknown argument: $1"
|
||||
usage
|
||||
fi
|
||||
;;
|
||||
esac
|
||||
done
|
||||
|
||||
echo ""
|
||||
echo "🔄 Rollback Application Deployment"
|
||||
echo "==================================="
|
||||
echo ""
|
||||
|
||||
# Check if running from correct directory
|
||||
if [ ! -f "$ANSIBLE_DIR/ansible.cfg" ]; then
|
||||
print_error "Error: Must run from deployment/scripts directory"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
cd "$ANSIBLE_DIR"
|
||||
|
||||
SSH_KEY="$HOME/.ssh/production"
|
||||
DEPLOY_USER="deploy"
|
||||
DEPLOY_HOST="94.16.110.151"
|
||||
|
||||
# List available backups
|
||||
list_backups() {
|
||||
print_info "Fetching available backups from production server..."
|
||||
echo ""
|
||||
|
||||
if ! ssh -i "$SSH_KEY" "$DEPLOY_USER@$DEPLOY_HOST" \
|
||||
"ls -lt /home/deploy/backups/ 2>/dev/null" | tail -n +2; then
|
||||
print_error "Failed to list backups"
|
||||
print_info "Make sure backups exist on production server: /home/deploy/backups/"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
echo ""
|
||||
print_info "To rollback to a specific version, run:"
|
||||
echo " $0 <version>"
|
||||
echo ""
|
||||
print_info "To rollback to previous version, run:"
|
||||
echo " $0"
|
||||
exit 0
|
||||
}
|
||||
|
||||
# Check Prerequisites
|
||||
check_prerequisites() {
|
||||
echo "Checking Prerequisites..."
|
||||
echo "------------------------"
|
||||
|
||||
# Check Ansible
|
||||
if ! command -v ansible &> /dev/null; then
|
||||
print_error "Ansible is not installed"
|
||||
exit 1
|
||||
fi
|
||||
print_success "Ansible installed"
|
||||
|
||||
# Check vault password
|
||||
if [ ! -f "$ANSIBLE_DIR/secrets/.vault_pass" ]; then
|
||||
print_error "Vault password file not found"
|
||||
exit 1
|
||||
fi
|
||||
print_success "Vault password file found"
|
||||
|
||||
# Check playbook
|
||||
if [ ! -f "$ANSIBLE_DIR/playbooks/rollback.yml" ]; then
|
||||
print_error "Rollback playbook not found"
|
||||
exit 1
|
||||
fi
|
||||
print_success "Rollback playbook found"
|
||||
|
||||
# Test connection
|
||||
print_info "Testing connection to production..."
|
||||
if ansible production -m ping > /dev/null 2>&1; then
|
||||
print_success "Connection successful"
|
||||
else
|
||||
print_error "Connection to production failed"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
echo ""
|
||||
}
|
||||
|
||||
# Get current deployment info
|
||||
get_current_deployment() {
|
||||
print_info "Fetching current deployment information..."
|
||||
|
||||
CURRENT_IMAGE=$(ssh -i "$SSH_KEY" "$DEPLOY_USER@$DEPLOY_HOST" \
|
||||
"docker service inspect app_app --format '{{.Spec.TaskTemplate.ContainerSpec.Image}}' 2>/dev/null" || echo "unknown")
|
||||
|
||||
if [ "$CURRENT_IMAGE" != "unknown" ]; then
|
||||
print_info "Current deployment: $CURRENT_IMAGE"
|
||||
else
|
||||
print_warning "Could not determine current deployment"
|
||||
fi
|
||||
|
||||
echo ""
|
||||
}
|
||||
|
||||
# Main logic
|
||||
if [ "$LIST_BACKUPS" = true ]; then
|
||||
list_backups
|
||||
fi
|
||||
|
||||
check_prerequisites
|
||||
get_current_deployment
|
||||
|
||||
# Show available backups
|
||||
echo "Available Backups"
|
||||
echo "----------------"
|
||||
ssh -i "$SSH_KEY" "$DEPLOY_USER@$DEPLOY_HOST" \
|
||||
"ls -lt /home/deploy/backups/ 2>/dev/null | tail -n +2 | head -10" || {
|
||||
print_warning "No backups found on production server"
|
||||
echo ""
|
||||
print_info "Backups are created automatically during deployments"
|
||||
print_info "You need at least one previous deployment to rollback"
|
||||
exit 1
|
||||
}
|
||||
echo ""
|
||||
|
||||
# Rollback Summary
|
||||
echo "Rollback Summary"
|
||||
echo "---------------"
|
||||
if [ -n "$ROLLBACK_VERSION" ]; then
|
||||
echo " Target Version: $ROLLBACK_VERSION"
|
||||
echo " Current Image: $CURRENT_IMAGE"
|
||||
else
|
||||
echo " Target Version: Previous deployment (most recent backup)"
|
||||
echo " Current Image: $CURRENT_IMAGE"
|
||||
fi
|
||||
if [ -n "$DRY_RUN" ]; then
|
||||
echo " Mode: DRY RUN (no changes will be made)"
|
||||
else
|
||||
echo " Mode: PRODUCTION ROLLBACK"
|
||||
fi
|
||||
echo ""
|
||||
|
||||
# Confirmation
|
||||
if [ -z "$DRY_RUN" ]; then
|
||||
print_warning "⚠️ WARNING: This will rollback your production deployment!"
|
||||
echo ""
|
||||
read -p "Are you sure you want to proceed with rollback? (y/N): " -n 1 -r
|
||||
echo
|
||||
if [[ ! $REPLY =~ ^[Yy]$ ]]; then
|
||||
print_warning "Rollback cancelled"
|
||||
exit 0
|
||||
fi
|
||||
fi
|
||||
|
||||
echo ""
|
||||
|
||||
# Build ansible-playbook command
|
||||
ANSIBLE_CMD="ansible-playbook $ANSIBLE_DIR/playbooks/rollback.yml"
|
||||
ANSIBLE_CMD="$ANSIBLE_CMD --vault-password-file $ANSIBLE_DIR/secrets/.vault_pass"
|
||||
|
||||
# Add version if specified
|
||||
if [ -n "$ROLLBACK_VERSION" ]; then
|
||||
ANSIBLE_CMD="$ANSIBLE_CMD -e rollback_to_version=$ROLLBACK_VERSION"
|
||||
fi
|
||||
|
||||
# Add dry-run flag if set
|
||||
if [ -n "$DRY_RUN" ]; then
|
||||
ANSIBLE_CMD="$ANSIBLE_CMD $DRY_RUN"
|
||||
fi
|
||||
|
||||
# Execute rollback
|
||||
print_info "Starting rollback..."
|
||||
echo ""
|
||||
|
||||
if eval "$ANSIBLE_CMD"; then
|
||||
echo ""
|
||||
if [ -z "$DRY_RUN" ]; then
|
||||
print_success "Rollback completed successfully!"
|
||||
else
|
||||
print_success "Dry run completed successfully!"
|
||||
fi
|
||||
else
|
||||
echo ""
|
||||
print_error "Rollback failed!"
|
||||
echo ""
|
||||
print_info "Check the Ansible output above for error details"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
echo ""
|
||||
|
||||
# Post-rollback checks
|
||||
if [ -z "$DRY_RUN" ]; then
|
||||
echo "Post-Rollback Checks"
|
||||
echo "-------------------"
|
||||
|
||||
# Check service status
|
||||
print_info "Checking service status..."
|
||||
ssh -i "$SSH_KEY" "$DEPLOY_USER@$DEPLOY_HOST" "docker service ls --filter name=app_" || true
|
||||
echo ""
|
||||
|
||||
# Get new image
|
||||
NEW_IMAGE=$(ssh -i "$SSH_KEY" "$DEPLOY_USER@$DEPLOY_HOST" \
|
||||
"docker service inspect app_app --format '{{.Spec.TaskTemplate.ContainerSpec.Image}}' 2>/dev/null" || echo "unknown")
|
||||
|
||||
if [ "$NEW_IMAGE" != "unknown" ]; then
|
||||
print_info "Rolled back to: $NEW_IMAGE"
|
||||
fi
|
||||
echo ""
|
||||
|
||||
# Show recent logs
|
||||
print_info "Recent application logs:"
|
||||
ssh -i "$SSH_KEY" "$DEPLOY_USER@$DEPLOY_HOST" "docker service logs --tail 20 app_app" || true
|
||||
echo ""
|
||||
|
||||
# Summary
|
||||
echo "✅ Rollback Complete!"
|
||||
echo "===================="
|
||||
echo ""
|
||||
echo "Rollback Details:"
|
||||
echo " From: $CURRENT_IMAGE"
|
||||
echo " To: $NEW_IMAGE"
|
||||
if [ -n "$ROLLBACK_VERSION" ]; then
|
||||
echo " Version: $ROLLBACK_VERSION"
|
||||
else
|
||||
echo " Version: Previous deployment"
|
||||
fi
|
||||
echo ""
|
||||
echo "Next Steps:"
|
||||
echo ""
|
||||
echo "1. Monitor application:"
|
||||
echo " ssh -i $SSH_KEY $DEPLOY_USER@$DEPLOY_HOST 'docker service logs -f app_app'"
|
||||
echo ""
|
||||
echo "2. Check service status:"
|
||||
echo " ssh -i $SSH_KEY $DEPLOY_USER@$DEPLOY_HOST 'docker service ps app_app'"
|
||||
echo ""
|
||||
echo "3. Test application:"
|
||||
echo " curl https://michaelschiemer.de/health"
|
||||
echo ""
|
||||
echo "4. If rollback didn't fix the issue, check available backups:"
|
||||
echo " $0 --list"
|
||||
echo ""
|
||||
else
|
||||
echo "Dry Run Summary"
|
||||
echo "==============="
|
||||
echo ""
|
||||
echo "This was a dry run. No changes were made to production."
|
||||
echo ""
|
||||
if [ -n "$ROLLBACK_VERSION" ]; then
|
||||
echo "To rollback for real, run:"
|
||||
echo " $0 $ROLLBACK_VERSION"
|
||||
else
|
||||
echo "To rollback for real, run:"
|
||||
echo " $0"
|
||||
fi
|
||||
echo ""
|
||||
fi
|
||||
@@ -1,211 +0,0 @@
|
||||
#!/bin/bash
|
||||
set -e
|
||||
|
||||
# Setup Production Server
|
||||
# This script performs initial production server setup with Ansible
|
||||
|
||||
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
|
||||
DEPLOYMENT_DIR="$(dirname "$SCRIPT_DIR")"
|
||||
ANSIBLE_DIR="$DEPLOYMENT_DIR/ansible"
|
||||
|
||||
# Colors for output
|
||||
RED='\033[0;31m'
|
||||
GREEN='\033[0;32m'
|
||||
YELLOW='\033[1;33m'
|
||||
BLUE='\033[0;34m'
|
||||
NC='\033[0m' # No Color
|
||||
|
||||
echo ""
|
||||
echo "🚀 Production Server Setup"
|
||||
echo "=========================="
|
||||
echo ""
|
||||
|
||||
# Function to print colored messages
|
||||
print_success() {
|
||||
echo -e "${GREEN}✅ $1${NC}"
|
||||
}
|
||||
|
||||
print_error() {
|
||||
echo -e "${RED}❌ $1${NC}"
|
||||
}
|
||||
|
||||
print_warning() {
|
||||
echo -e "${YELLOW}⚠️ $1${NC}"
|
||||
}
|
||||
|
||||
print_info() {
|
||||
echo -e "${BLUE}ℹ️ $1${NC}"
|
||||
}
|
||||
|
||||
# Check if running from correct directory
|
||||
if [ ! -f "$ANSIBLE_DIR/ansible.cfg" ]; then
|
||||
print_error "Error: Must run from deployment/scripts directory"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
cd "$ANSIBLE_DIR"
|
||||
|
||||
# Step 1: Check Prerequisites
|
||||
echo "Step 1: Checking Prerequisites"
|
||||
echo "------------------------------"
|
||||
|
||||
# Check Ansible installed
|
||||
if ! command -v ansible &> /dev/null; then
|
||||
print_error "Ansible is not installed"
|
||||
echo ""
|
||||
echo "Install Ansible:"
|
||||
echo " pip install ansible"
|
||||
exit 1
|
||||
fi
|
||||
print_success "Ansible is installed: $(ansible --version | head -n1)"
|
||||
|
||||
# Check Ansible playbooks exist
|
||||
if [ ! -f "$ANSIBLE_DIR/playbooks/setup-production-secrets.yml" ]; then
|
||||
print_error "Ansible playbooks not found"
|
||||
exit 1
|
||||
fi
|
||||
print_success "Ansible playbooks found"
|
||||
|
||||
# Check SSH key
|
||||
SSH_KEY="$HOME/.ssh/production"
|
||||
if [ ! -f "$SSH_KEY" ]; then
|
||||
print_warning "SSH key not found: $SSH_KEY"
|
||||
echo ""
|
||||
read -p "Do you want to create SSH key now? (y/N): " -n 1 -r
|
||||
echo
|
||||
if [[ $REPLY =~ ^[Yy]$ ]]; then
|
||||
ssh-keygen -t ed25519 -f "$SSH_KEY" -C "ansible-deploy"
|
||||
chmod 600 "$SSH_KEY"
|
||||
chmod 644 "$SSH_KEY.pub"
|
||||
print_success "SSH key created"
|
||||
echo ""
|
||||
echo "📋 Public key:"
|
||||
cat "$SSH_KEY.pub"
|
||||
echo ""
|
||||
print_warning "You must add this public key to the production server:"
|
||||
echo " ssh-copy-id -i $SSH_KEY.pub deploy@94.16.110.151"
|
||||
echo ""
|
||||
read -p "Press ENTER after adding SSH key to server..."
|
||||
else
|
||||
print_error "SSH key is required for Ansible"
|
||||
exit 1
|
||||
fi
|
||||
else
|
||||
print_success "SSH key found: $SSH_KEY"
|
||||
fi
|
||||
|
||||
echo ""
|
||||
|
||||
# Step 2: Setup Ansible Secrets
|
||||
echo "Step 2: Setup Ansible Secrets"
|
||||
echo "-----------------------------"
|
||||
|
||||
# Check if vault file exists
|
||||
if [ ! -f "$ANSIBLE_DIR/secrets/production.vault.yml" ]; then
|
||||
print_warning "Vault file not found"
|
||||
echo ""
|
||||
read -p "Do you want to run init-secrets.sh now? (Y/n): " -n 1 -r
|
||||
echo
|
||||
if [[ ! $REPLY =~ ^[Nn]$ ]]; then
|
||||
"$ANSIBLE_DIR/scripts/init-secrets.sh"
|
||||
else
|
||||
print_error "Vault file is required"
|
||||
exit 1
|
||||
fi
|
||||
else
|
||||
print_success "Vault file exists"
|
||||
fi
|
||||
|
||||
# Check vault password file
|
||||
if [ ! -f "$ANSIBLE_DIR/secrets/.vault_pass" ]; then
|
||||
print_error "Vault password file not found: secrets/.vault_pass"
|
||||
echo ""
|
||||
echo "Run init-secrets.sh to create vault password file:"
|
||||
echo " $ANSIBLE_DIR/scripts/init-secrets.sh"
|
||||
exit 1
|
||||
fi
|
||||
print_success "Vault password file found"
|
||||
|
||||
# Verify vault can be decrypted
|
||||
if ! ansible-vault view "$ANSIBLE_DIR/secrets/production.vault.yml" \
|
||||
--vault-password-file "$ANSIBLE_DIR/secrets/.vault_pass" > /dev/null 2>&1; then
|
||||
print_error "Failed to decrypt vault file"
|
||||
echo "Check your vault password in: secrets/.vault_pass"
|
||||
exit 1
|
||||
fi
|
||||
print_success "Vault file can be decrypted"
|
||||
|
||||
echo ""
|
||||
|
||||
# Step 3: Test Connection
|
||||
echo "Step 3: Test Connection to Production"
|
||||
echo "-------------------------------------"
|
||||
|
||||
if ansible production -m ping 2>&1 | grep -q "SUCCESS"; then
|
||||
print_success "Connection to production server successful"
|
||||
else
|
||||
print_error "Connection to production server failed"
|
||||
echo ""
|
||||
echo "Troubleshooting steps:"
|
||||
echo "1. Test SSH manually: ssh -i $SSH_KEY deploy@94.16.110.151"
|
||||
echo "2. Verify SSH key is added: ssh-copy-id -i $SSH_KEY.pub deploy@94.16.110.151"
|
||||
echo "3. Check inventory file: cat $ANSIBLE_DIR/inventory/production.yml"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
echo ""
|
||||
|
||||
# Step 4: Deploy Secrets to Production
|
||||
echo "Step 4: Deploy Secrets to Production"
|
||||
echo "------------------------------------"
|
||||
|
||||
read -p "Deploy secrets to production server? (Y/n): " -n 1 -r
|
||||
echo
|
||||
if [[ ! $REPLY =~ ^[Nn]$ ]]; then
|
||||
print_info "Deploying secrets to production..."
|
||||
echo ""
|
||||
|
||||
if ansible-playbook "$ANSIBLE_DIR/playbooks/setup-production-secrets.yml" \
|
||||
--vault-password-file "$ANSIBLE_DIR/secrets/.vault_pass"; then
|
||||
print_success "Secrets deployed successfully"
|
||||
else
|
||||
print_error "Failed to deploy secrets"
|
||||
exit 1
|
||||
fi
|
||||
else
|
||||
print_warning "Skipped secrets deployment"
|
||||
fi
|
||||
|
||||
echo ""
|
||||
|
||||
# Step 5: Verify Docker Services
|
||||
echo "Step 5: Verify Docker Services"
|
||||
echo "------------------------------"
|
||||
|
||||
print_info "Checking Docker services on production..."
|
||||
echo ""
|
||||
|
||||
ssh -i "$SSH_KEY" deploy@94.16.110.151 "docker node ls" || true
|
||||
echo ""
|
||||
ssh -i "$SSH_KEY" deploy@94.16.110.151 "docker service ls" || true
|
||||
|
||||
echo ""
|
||||
|
||||
# Summary
|
||||
echo ""
|
||||
echo "✅ Production Server Setup Complete!"
|
||||
echo "===================================="
|
||||
echo ""
|
||||
echo "Next Steps:"
|
||||
echo ""
|
||||
echo "1. Verify secrets are deployed:"
|
||||
echo " ssh -i $SSH_KEY deploy@94.16.110.151 'cat /home/deploy/secrets/.env'"
|
||||
echo ""
|
||||
echo "2. Deploy your application:"
|
||||
echo " $SCRIPT_DIR/deploy.sh <image-tag>"
|
||||
echo ""
|
||||
echo "3. Monitor deployment:"
|
||||
echo " ssh -i $SSH_KEY deploy@94.16.110.151 'docker service logs -f app_app'"
|
||||
echo ""
|
||||
echo "📖 For more information, see: $ANSIBLE_DIR/README.md"
|
||||
echo ""
|
||||
@@ -38,3 +38,11 @@ QUEUE_CONNECTION=default
|
||||
QUEUE_WORKER_SLEEP=3
|
||||
QUEUE_WORKER_TRIES=3
|
||||
QUEUE_WORKER_TIMEOUT=60
|
||||
|
||||
# Git Repository Configuration (optional - if set, container will clone/pull code on start)
|
||||
# Uncomment to enable Git-based deployment:
|
||||
# GIT_REPOSITORY_URL=https://git.michaelschiemer.de/michael/michaelschiemer.git
|
||||
# GIT_BRANCH=main
|
||||
# GIT_TOKEN=
|
||||
# GIT_USERNAME=
|
||||
# GIT_PASSWORD=
|
||||
|
||||
@@ -1,10 +1,9 @@
|
||||
version: '3.8'
|
||||
# Docker Registry: registry.michaelschiemer.de (HTTPS via Traefik)
|
||||
|
||||
services:
|
||||
# PHP-FPM Application Runtime
|
||||
app:
|
||||
image: registry.michaelschiemer.de/framework:latest
|
||||
image: git.michaelschiemer.de:5000/framework:latest
|
||||
container_name: app
|
||||
restart: unless-stopped
|
||||
networks:
|
||||
@@ -55,8 +54,9 @@ services:
|
||||
condition: service_started
|
||||
|
||||
# Nginx Web Server
|
||||
# Uses same image as app - clones code from Git if GIT_REPOSITORY_URL is set, then runs nginx
|
||||
nginx:
|
||||
image: nginx:1.25-alpine
|
||||
image: git.michaelschiemer.de:5000/framework:latest
|
||||
container_name: nginx
|
||||
restart: unless-stopped
|
||||
networks:
|
||||
@@ -64,12 +64,89 @@ services:
|
||||
- app-internal
|
||||
environment:
|
||||
- TZ=Europe/Berlin
|
||||
- APP_ENV=${APP_ENV:-production}
|
||||
- APP_DEBUG=${APP_DEBUG:-false}
|
||||
# Git Repository (same as app - will clone code on start)
|
||||
- GIT_REPOSITORY_URL=${GIT_REPOSITORY_URL:-}
|
||||
- GIT_BRANCH=${GIT_BRANCH:-main}
|
||||
- GIT_TOKEN=${GIT_TOKEN:-}
|
||||
- GIT_USERNAME=${GIT_USERNAME:-}
|
||||
- GIT_PASSWORD=${GIT_PASSWORD:-}
|
||||
volumes:
|
||||
- ./nginx/conf.d:/etc/nginx/conf.d:ro
|
||||
- app-code:/var/www/html:ro
|
||||
- app-storage:/var/www/html/storage:ro
|
||||
- /etc/timezone:/etc/timezone:ro
|
||||
- /etc/localtime:/etc/localtime:ro
|
||||
# Use custom entrypoint that ensures code is available then starts nginx only (no PHP-FPM)
|
||||
entrypoint: ["/bin/sh", "-c"]
|
||||
command:
|
||||
- |
|
||||
# Ensure code is available in /var/www/html (from image or Git)
|
||||
GIT_TARGET_DIR="/var/www/html"
|
||||
|
||||
# If storage is mounted but code is missing, copy from image's original location
|
||||
if [ ! -d "$$GIT_TARGET_DIR/public" ] && [ -d "/var/www/html.orig" ]; then
|
||||
echo "?? [nginx] Copying code from image..."
|
||||
# Copy everything except storage (which is a volume mount)
|
||||
find /var/www/html.orig -mindepth 1 -maxdepth 1 ! -name "storage" -exec cp -r {} "$$GIT_TARGET_DIR/" \; 2>/dev/null || true
|
||||
fi
|
||||
|
||||
if [ -n "$$GIT_REPOSITORY_URL" ]; then
|
||||
# Configure Git to be non-interactive
|
||||
export GIT_TERMINAL_PROMPT=0
|
||||
export GIT_ASKPASS=echo
|
||||
|
||||
# Determine authentication method
|
||||
|
||||
|
||||
if [ -n "$$GIT_TOKEN" ]; then
|
||||
GIT_URL_WITH_AUTH=$$(echo "$$GIT_REPOSITORY_URL" | sed "s|https://|https://$${GIT_TOKEN}@|")
|
||||
elif [ -n "$$GIT_USERNAME" ] && [ -n "$$GIT_PASSWORD" ]; then
|
||||
GIT_URL_WITH_AUTH=$$(echo "$$GIT_REPOSITORY_URL" | sed "s|https://|https://$${GIT_USERNAME}:$${GIT_PASSWORD}@|")
|
||||
else
|
||||
echo "⚠️ [nginx] No Git credentials provided (GIT_TOKEN or GIT_USERNAME/GIT_PASSWORD). Using image contents."
|
||||
GIT_URL_WITH_AUTH=""
|
||||
fi
|
||||
|
||||
|
||||
if [ -n "$$GIT_URL_WITH_AUTH" ] && [ ! -d "$$GIT_TARGET_DIR/.git" ]; then
|
||||
echo "?? [nginx] Cloning repository from $$GIT_REPOSITORY_URL (branch: $${GIT_BRANCH:-main})..."
|
||||
# Remove only files/dirs that are not storage (which is a volume mount)
|
||||
# Clone into a temporary directory first, then move contents
|
||||
TEMP_CLONE="$${GIT_TARGET_DIR}.tmp"
|
||||
rm -rf "$$TEMP_CLONE" 2>/dev/null || true
|
||||
if git clone --branch "$${GIT_BRANCH:-main}" --depth 1 "$$GIT_URL_WITH_AUTH" "$$TEMP_CLONE"; then
|
||||
# Remove only files/dirs that are not storage (which is a volume mount)
|
||||
find "$$GIT_TARGET_DIR" -mindepth 1 -maxdepth 1 ! -name "storage" -exec rm -rf {} \\; 2>/dev/null || true
|
||||
# Move contents from temp directory to target (preserving storage)
|
||||
find "$$TEMP_CLONE" -mindepth 1 -maxdepth 1 ! -name "." ! -name ".." -exec mv {} "$$GIT_TARGET_DIR/" \\; 2>/dev/null || true
|
||||
rm -rf "$$TEMP_CLONE" 2>/dev/null || true
|
||||
echo "✅ [nginx] Repository cloned successfully"
|
||||
else
|
||||
echo "? Git clone failed. Using image contents."
|
||||
rm -rf "$$TEMP_CLONE" 2>/dev/null || true
|
||||
fi
|
||||
else
|
||||
echo "?? [nginx] Pulling latest changes..."
|
||||
cd "$$GIT_TARGET_DIR"
|
||||
git fetch origin "$${GIT_BRANCH:-main}" || true
|
||||
git reset --hard "origin/$${GIT_BRANCH:-main}" || true
|
||||
git clean -fd || true
|
||||
fi
|
||||
if [ -f "$$GIT_TARGET_DIR/composer.json" ]; then
|
||||
echo "?? [nginx] Installing dependencies..."
|
||||
cd "$$GIT_TARGET_DIR"
|
||||
composer install --no-dev --optimize-autoloader --no-interaction --no-scripts || true
|
||||
composer dump-autoload --optimize --classmap-authoritative || true
|
||||
fi
|
||||
echo "? [nginx] Git sync completed"
|
||||
else
|
||||
echo "?? [nginx] GIT_REPOSITORY_URL not set, using code from image"
|
||||
fi
|
||||
|
||||
# Start nginx only (no PHP-FPM)
|
||||
echo "?? [nginx] Starting nginx..."
|
||||
exec nginx -g "daemon off;"
|
||||
labels:
|
||||
- "traefik.enable=true"
|
||||
# HTTP Router
|
||||
@@ -84,7 +161,7 @@ services:
|
||||
# Network
|
||||
- "traefik.docker.network=traefik-public"
|
||||
healthcheck:
|
||||
test: ["CMD-SHELL", "wget --spider -q http://127.0.0.1/health || exit 1"]
|
||||
test: ["CMD-SHELL", "curl -f http://127.0.0.1/health || exit 1"]
|
||||
interval: 30s
|
||||
timeout: 10s
|
||||
retries: 3
|
||||
@@ -125,7 +202,7 @@ services:
|
||||
|
||||
# Queue Worker (Background Jobs)
|
||||
queue-worker:
|
||||
image: registry.michaelschiemer.de/framework:latest
|
||||
image: git.michaelschiemer.de:5000/framework:latest
|
||||
container_name: queue-worker
|
||||
restart: unless-stopped
|
||||
networks:
|
||||
@@ -170,7 +247,7 @@ services:
|
||||
|
||||
# Scheduler (Cron Jobs)
|
||||
scheduler:
|
||||
image: registry.michaelschiemer.de/framework:latest
|
||||
image: git.michaelschiemer.de:5000/framework:latest
|
||||
container_name: scheduler
|
||||
restart: unless-stopped
|
||||
networks:
|
||||
|
||||
286
deployment/stacks/application/docker-compose.yml.backup
Normal file
286
deployment/stacks/application/docker-compose.yml.backup
Normal file
@@ -0,0 +1,286 @@
|
||||
version: '3.8'
|
||||
# Docker Registry: registry.michaelschiemer.de (HTTPS via Traefik)
|
||||
|
||||
services:
|
||||
# PHP-FPM Application Runtime
|
||||
app:
|
||||
image: git.michaelschiemer.de:5000/framework:latest
|
||||
container_name: app
|
||||
restart: unless-stopped
|
||||
networks:
|
||||
- app-internal
|
||||
environment:
|
||||
- TZ=Europe/Berlin
|
||||
- APP_ENV=${APP_ENV:-production}
|
||||
- APP_DEBUG=${APP_DEBUG:-false}
|
||||
- APP_URL=${APP_URL:-https://michaelschiemer.de}
|
||||
# Git Repository (optional - if set, container will clone/pull code on start)
|
||||
- GIT_REPOSITORY_URL=${GIT_REPOSITORY_URL:-}
|
||||
- GIT_BRANCH=${GIT_BRANCH:-main}
|
||||
- GIT_TOKEN=${GIT_TOKEN:-}
|
||||
- GIT_USERNAME=${GIT_USERNAME:-}
|
||||
- GIT_PASSWORD=${GIT_PASSWORD:-}
|
||||
# Database
|
||||
- DB_HOST=${DB_HOST:-postgres}
|
||||
- DB_PORT=${DB_PORT:-5432}
|
||||
- DB_DATABASE=${DB_DATABASE}
|
||||
- DB_USERNAME=${DB_USERNAME}
|
||||
- DB_PASSWORD=${DB_PASSWORD}
|
||||
# Redis
|
||||
- REDIS_HOST=redis
|
||||
- REDIS_PORT=6379
|
||||
- REDIS_PASSWORD=${REDIS_PASSWORD}
|
||||
# Cache
|
||||
- CACHE_DRIVER=redis
|
||||
- CACHE_PREFIX=${CACHE_PREFIX:-app}
|
||||
# Session
|
||||
- SESSION_DRIVER=redis
|
||||
- SESSION_LIFETIME=${SESSION_LIFETIME:-120}
|
||||
# Queue
|
||||
- QUEUE_DRIVER=redis
|
||||
- QUEUE_CONNECTION=default
|
||||
volumes:
|
||||
- app-storage:/var/www/html/storage
|
||||
- app-logs:/var/www/html/storage/logs
|
||||
- /etc/timezone:/etc/timezone:ro
|
||||
- /etc/localtime:/etc/localtime:ro
|
||||
healthcheck:
|
||||
test: ["CMD-SHELL", "true"]
|
||||
interval: 30s
|
||||
timeout: 10s
|
||||
retries: 3
|
||||
start_period: 40s
|
||||
depends_on:
|
||||
redis:
|
||||
condition: service_started
|
||||
|
||||
# Nginx Web Server
|
||||
# Uses same image as app - clones code from Git if GIT_REPOSITORY_URL is set, then runs nginx
|
||||
nginx:
|
||||
image: git.michaelschiemer.de:5000/framework:latest
|
||||
container_name: nginx
|
||||
restart: unless-stopped
|
||||
networks:
|
||||
- traefik-public
|
||||
- app-internal
|
||||
environment:
|
||||
- TZ=Europe/Berlin
|
||||
- APP_ENV=${APP_ENV:-production}
|
||||
- APP_DEBUG=${APP_DEBUG:-false}
|
||||
# Git Repository (same as app - will clone code on start)
|
||||
- GIT_REPOSITORY_URL=${GIT_REPOSITORY_URL:-}
|
||||
- GIT_BRANCH=${GIT_BRANCH:-main}
|
||||
- GIT_TOKEN=${GIT_TOKEN:-}
|
||||
- GIT_USERNAME=${GIT_USERNAME:-}
|
||||
- GIT_PASSWORD=${GIT_PASSWORD:-}
|
||||
volumes:
|
||||
- ./nginx/conf.d:/etc/nginx/conf.d:ro
|
||||
- app-storage:/var/www/html/storage:ro
|
||||
- /etc/timezone:/etc/timezone:ro
|
||||
- /etc/localtime:/etc/localtime:ro
|
||||
# Use custom entrypoint that ensures code is available then starts nginx only (no PHP-FPM)
|
||||
entrypoint: ["/bin/sh", "-c"]
|
||||
command:
|
||||
- |
|
||||
# Ensure code is available in /var/www/html (from image or Git)
|
||||
GIT_TARGET_DIR="/var/www/html"
|
||||
|
||||
# If storage is mounted but code is missing, copy from image's original location
|
||||
if [ ! -d "$$GIT_TARGET_DIR/public" ] && [ -d "/var/www/html.orig" ]; then
|
||||
echo "?? [nginx] Copying code from image..."
|
||||
# Copy everything except storage (which is a volume mount)
|
||||
find /var/www/html.orig -mindepth 1 -maxdepth 1 ! -name "storage" -exec cp -r {} "$$GIT_TARGET_DIR/" \; 2>/dev/null || true
|
||||
fi
|
||||
|
||||
if [ -n "$$GIT_REPOSITORY_URL" ]; then
|
||||
# Determine authentication method
|
||||
if [ -n "$$GIT_TOKEN" ]; then
|
||||
GIT_URL_WITH_AUTH=$$(echo "$$GIT_REPOSITORY_URL" | sed "s|https://|https://$${GIT_TOKEN}@|")
|
||||
elif [ -n "$$GIT_USERNAME" ] && [ -n "$$GIT_PASSWORD" ]; then
|
||||
GIT_URL_WITH_AUTH=$$(echo "$$GIT_REPOSITORY_URL" | sed "s|https://|https://$${GIT_USERNAME}:$${GIT_PASSWORD}@|")
|
||||
else
|
||||
GIT_URL_WITH_AUTH="$$GIT_REPOSITORY_URL"
|
||||
fi
|
||||
|
||||
if [ ! -d "$$GIT_TARGET_DIR/.git" ]; then
|
||||
echo "?? [nginx] Cloning repository from $$GIT_REPOSITORY_URL (branch: $${GIT_BRANCH:-main})..."
|
||||
# Remove only files/dirs that are not storage (which is a volume mount)
|
||||
find "$$GIT_TARGET_DIR" -mindepth 1 -maxdepth 1 ! -name "storage" -exec rm -rf {} \; 2>/dev/null || true
|
||||
git clone --branch "$${GIT_BRANCH:-main}" --depth 1 "$$GIT_URL_WITH_AUTH" "$$GIT_TARGET_DIR" || {
|
||||
echo "? Git clone failed. Using image contents."
|
||||
}
|
||||
else
|
||||
echo "?? [nginx] Pulling latest changes..."
|
||||
cd "$$GIT_TARGET_DIR"
|
||||
git fetch origin "$${GIT_BRANCH:-main}" || true
|
||||
git reset --hard "origin/$${GIT_BRANCH:-main}" || true
|
||||
git clean -fd || true
|
||||
fi
|
||||
|
||||
if [ -f "$$GIT_TARGET_DIR/composer.json" ]; then
|
||||
echo "?? [nginx] Installing dependencies..."
|
||||
cd "$$GIT_TARGET_DIR"
|
||||
composer install --no-dev --optimize-autoloader --no-interaction --no-scripts || true
|
||||
composer dump-autoload --optimize --classmap-authoritative || true
|
||||
fi
|
||||
echo "? [nginx] Git sync completed"
|
||||
else
|
||||
echo "?? [nginx] GIT_REPOSITORY_URL not set, using code from image"
|
||||
fi
|
||||
|
||||
# Start nginx only (no PHP-FPM)
|
||||
echo "?? [nginx] Starting nginx..."
|
||||
exec nginx -g "daemon off;"
|
||||
labels:
|
||||
- "traefik.enable=true"
|
||||
# HTTP Router
|
||||
- "traefik.http.routers.app.rule=Host(`${APP_DOMAIN:-michaelschiemer.de}`)"
|
||||
- "traefik.http.routers.app.entrypoints=websecure"
|
||||
- "traefik.http.routers.app.tls=true"
|
||||
- "traefik.http.routers.app.tls.certresolver=letsencrypt"
|
||||
# Service
|
||||
- "traefik.http.services.app.loadbalancer.server.port=80"
|
||||
# Middleware
|
||||
- "traefik.http.routers.app.middlewares=default-chain@file"
|
||||
# Network
|
||||
- "traefik.docker.network=traefik-public"
|
||||
healthcheck:
|
||||
test: ["CMD-SHELL", "curl -f http://127.0.0.1/health || exit 1"]
|
||||
interval: 30s
|
||||
timeout: 10s
|
||||
retries: 3
|
||||
start_period: 10s
|
||||
depends_on:
|
||||
app:
|
||||
condition: service_started
|
||||
|
||||
# Redis Cache/Session/Queue Backend
|
||||
redis:
|
||||
image: redis:7-alpine
|
||||
container_name: redis
|
||||
restart: unless-stopped
|
||||
networks:
|
||||
- app-internal
|
||||
environment:
|
||||
- TZ=Europe/Berlin
|
||||
command: >
|
||||
redis-server
|
||||
--requirepass ${REDIS_PASSWORD}
|
||||
--maxmemory 512mb
|
||||
--maxmemory-policy allkeys-lru
|
||||
--save 900 1
|
||||
--save 300 10
|
||||
--save 60 10000
|
||||
--appendonly yes
|
||||
--appendfsync everysec
|
||||
volumes:
|
||||
- redis-data:/data
|
||||
- /etc/timezone:/etc/timezone:ro
|
||||
- /etc/localtime:/etc/localtime:ro
|
||||
healthcheck:
|
||||
test: ["CMD", "redis-cli", "--raw", "incr", "ping"]
|
||||
interval: 30s
|
||||
timeout: 10s
|
||||
retries: 3
|
||||
start_period: 10s
|
||||
|
||||
# Queue Worker (Background Jobs)
|
||||
queue-worker:
|
||||
image: git.michaelschiemer.de:5000/framework:latest
|
||||
container_name: queue-worker
|
||||
restart: unless-stopped
|
||||
networks:
|
||||
- app-internal
|
||||
environment:
|
||||
- TZ=Europe/Berlin
|
||||
- APP_ENV=${APP_ENV:-production}
|
||||
- APP_DEBUG=${APP_DEBUG:-false}
|
||||
# Database
|
||||
- DB_HOST=${DB_HOST:-postgres}
|
||||
- DB_PORT=${DB_PORT:-5432}
|
||||
- DB_DATABASE=${DB_DATABASE}
|
||||
- DB_USERNAME=${DB_USERNAME}
|
||||
- DB_PASSWORD=${DB_PASSWORD}
|
||||
# Redis
|
||||
- REDIS_HOST=redis
|
||||
- REDIS_PORT=6379
|
||||
- REDIS_PASSWORD=${REDIS_PASSWORD}
|
||||
# Queue
|
||||
- QUEUE_DRIVER=redis
|
||||
- QUEUE_CONNECTION=default
|
||||
- QUEUE_WORKER_SLEEP=${QUEUE_WORKER_SLEEP:-3}
|
||||
- QUEUE_WORKER_TRIES=${QUEUE_WORKER_TRIES:-3}
|
||||
- QUEUE_WORKER_TIMEOUT=${QUEUE_WORKER_TIMEOUT:-60}
|
||||
volumes:
|
||||
- app-storage:/var/www/html/storage
|
||||
- app-logs:/var/www/html/storage/logs
|
||||
- /etc/timezone:/etc/timezone:ro
|
||||
- /etc/localtime:/etc/localtime:ro
|
||||
command: php console.php queue:work --queue=default --timeout=${QUEUE_WORKER_TIMEOUT:-60}
|
||||
healthcheck:
|
||||
test: ["CMD-SHELL", "php -r 'exit(0);' && test -f /var/www/html/console.php || exit 1"]
|
||||
interval: 60s
|
||||
timeout: 10s
|
||||
retries: 3
|
||||
start_period: 30s
|
||||
depends_on:
|
||||
app:
|
||||
condition: service_started
|
||||
redis:
|
||||
condition: service_started
|
||||
|
||||
# Scheduler (Cron Jobs)
|
||||
scheduler:
|
||||
image: git.michaelschiemer.de:5000/framework:latest
|
||||
container_name: scheduler
|
||||
restart: unless-stopped
|
||||
networks:
|
||||
- app-internal
|
||||
environment:
|
||||
- TZ=Europe/Berlin
|
||||
- APP_ENV=${APP_ENV:-production}
|
||||
- APP_DEBUG=${APP_DEBUG:-false}
|
||||
# Database
|
||||
- DB_HOST=${DB_HOST:-postgres}
|
||||
- DB_PORT=${DB_PORT:-5432}
|
||||
- DB_DATABASE=${DB_DATABASE}
|
||||
- DB_USERNAME=${DB_USERNAME}
|
||||
- DB_PASSWORD=${DB_PASSWORD}
|
||||
# Redis
|
||||
- REDIS_HOST=redis
|
||||
- REDIS_PORT=6379
|
||||
- REDIS_PASSWORD=${REDIS_PASSWORD}
|
||||
volumes:
|
||||
- app-storage:/var/www/html/storage
|
||||
- app-logs:/var/www/html/storage/logs
|
||||
- /etc/timezone:/etc/timezone:ro
|
||||
- /etc/localtime:/etc/localtime:ro
|
||||
command: php console.php scheduler:run
|
||||
healthcheck:
|
||||
test: ["CMD-SHELL", "php -r 'exit(0);' && test -f /var/www/html/console.php || exit 1"]
|
||||
interval: 60s
|
||||
timeout: 10s
|
||||
retries: 3
|
||||
start_period: 30s
|
||||
depends_on:
|
||||
app:
|
||||
condition: service_started
|
||||
redis:
|
||||
condition: service_started
|
||||
|
||||
volumes:
|
||||
app-code:
|
||||
name: app-code
|
||||
app-storage:
|
||||
name: app-storage
|
||||
app-logs:
|
||||
name: app-logs
|
||||
redis-data:
|
||||
name: redis-data
|
||||
|
||||
networks:
|
||||
traefik-public:
|
||||
external: true
|
||||
app-internal:
|
||||
external: true
|
||||
name: app-internal
|
||||
@@ -119,6 +119,21 @@ docker compose ps
|
||||
git clone ssh://git@git.michaelschiemer.de:2222/username/repo.git
|
||||
```
|
||||
|
||||
### Configuration File
|
||||
|
||||
Gitea configuration is managed via `app.ini` file:
|
||||
- **Local file**: `deployment/stacks/gitea/app.ini` (for local development)
|
||||
- **Production**: Generated from Ansible template `deployment/ansible/templates/gitea-app.ini.j2`
|
||||
- The `app.ini` is mounted read-only into the container at `/data/gitea/conf/app.ini`
|
||||
- Configuration is based on the official Gitea example: https://github.com/go-gitea/gitea/blob/main/custom/conf/app.example.ini
|
||||
|
||||
**Key Configuration Sections:**
|
||||
- `[server]`: Domain, ports, SSH settings
|
||||
- `[database]`: PostgreSQL connection
|
||||
- `[actions]`: Actions enabled, no GitHub dependency
|
||||
- `[service]`: Registration settings
|
||||
- `[cache]` / `[session]` / `[queue]`: Storage configuration
|
||||
|
||||
### Gitea Actions
|
||||
|
||||
Gitea Actions (GitHub Actions compatible) are enabled by default. To use them:
|
||||
|
||||
80
deployment/stacks/gitea/app.ini
Normal file
80
deployment/stacks/gitea/app.ini
Normal file
@@ -0,0 +1,80 @@
|
||||
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
|
||||
;; Gitea Configuration File
|
||||
;; This file is based on the official Gitea example configuration
|
||||
;; https://github.com/go-gitea/gitea/blob/main/custom/conf/app.example.ini
|
||||
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
|
||||
|
||||
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
|
||||
;; General Settings
|
||||
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
|
||||
APP_NAME = Gitea: Git with a cup of tea
|
||||
RUN_MODE = prod
|
||||
|
||||
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
|
||||
;; Server Configuration
|
||||
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
|
||||
[server]
|
||||
PROTOCOL = http
|
||||
DOMAIN = git.michaelschiemer.de
|
||||
HTTP_ADDR = 0.0.0.0
|
||||
HTTP_PORT = 3000
|
||||
ROOT_URL = https://git.michaelschiemer.de/
|
||||
PUBLIC_URL_DETECTION = auto
|
||||
|
||||
;; SSH Configuration
|
||||
DISABLE_SSH = false
|
||||
START_SSH_SERVER = true
|
||||
SSH_DOMAIN = git.michaelschiemer.de
|
||||
SSH_PORT = 22
|
||||
SSH_LISTEN_PORT = 22
|
||||
|
||||
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
|
||||
;; Database Configuration
|
||||
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
|
||||
[database]
|
||||
DB_TYPE = postgres
|
||||
HOST = postgres:5432
|
||||
NAME = gitea
|
||||
USER = gitea
|
||||
PASSWD = gitea_password
|
||||
SSL_MODE = disable
|
||||
|
||||
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
|
||||
;; Cache Configuration
|
||||
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
|
||||
[cache]
|
||||
ENABLED = false
|
||||
ADAPTER = memory
|
||||
|
||||
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
|
||||
;; Session Configuration
|
||||
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
|
||||
[session]
|
||||
PROVIDER = file
|
||||
PROVIDER_CONFIG = data/sessions
|
||||
COOKIE_SECURE = true
|
||||
COOKIE_NAME = i_like_gitea
|
||||
GC_INTERVAL_TIME = 86400
|
||||
SESSION_LIFE_TIME = 86400
|
||||
|
||||
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
|
||||
;; Queue Configuration
|
||||
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
|
||||
[queue]
|
||||
TYPE = channel
|
||||
|
||||
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
|
||||
;; Service Configuration
|
||||
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
|
||||
[service]
|
||||
DISABLE_REGISTRATION = true
|
||||
|
||||
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
|
||||
;; Actions Configuration
|
||||
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
|
||||
[actions]
|
||||
ENABLED = true
|
||||
;; Use "self" to use the current Gitea instance for actions (not GitHub)
|
||||
;; Do NOT set DEFAULT_ACTIONS_URL to a custom URL - it's not supported
|
||||
;; Leaving it unset or setting to "self" will use the current instance
|
||||
;DEFAULT_ACTIONS_URL = self
|
||||
@@ -1,5 +1,3 @@
|
||||
version: '3.8'
|
||||
|
||||
services:
|
||||
gitea:
|
||||
image: gitea/gitea:1.25
|
||||
|
||||
17
deployment/stacks/minio/.env.example
Normal file
17
deployment/stacks/minio/.env.example
Normal file
@@ -0,0 +1,17 @@
|
||||
# MinIO Object Storage Stack Configuration
|
||||
# Copy this file to .env and adjust values
|
||||
|
||||
# Timezone
|
||||
TZ=Europe/Berlin
|
||||
|
||||
# MinIO Root Credentials
|
||||
# Generate secure password with: openssl rand -base64 32
|
||||
MINIO_ROOT_USER=minioadmin
|
||||
MINIO_ROOT_PASSWORD=<generate-with-openssl-rand-base64-32>
|
||||
|
||||
# Domain Configuration
|
||||
# API endpoint (S3-compatible)
|
||||
MINIO_API_DOMAIN=minio-api.michaelschiemer.de
|
||||
|
||||
# Console endpoint (Web UI)
|
||||
MINIO_CONSOLE_DOMAIN=minio.michaelschiemer.de
|
||||
657
deployment/stacks/minio/README.md
Normal file
657
deployment/stacks/minio/README.md
Normal file
@@ -0,0 +1,657 @@
|
||||
# MinIO Object Storage Stack - S3-compatible Object Storage
|
||||
|
||||
## Overview
|
||||
|
||||
MinIO ist ein hochperformanter, S3-kompatibler Object Storage Service für private Cloud- und Edge-Computing-Umgebungen.
|
||||
|
||||
**Features**:
|
||||
- S3-kompatible API (Port 9000)
|
||||
- Web-basierte Console für Management (Port 9001)
|
||||
- SSL via Traefik
|
||||
- Persistent storage
|
||||
- Health checks und Monitoring
|
||||
- Multi-Tenant Bucket Management
|
||||
|
||||
## Services
|
||||
|
||||
- **minio-api.michaelschiemer.de** - S3-kompatible API Endpoint
|
||||
- **minio.michaelschiemer.de** - Web Console (Management UI)
|
||||
|
||||
## Prerequisites
|
||||
|
||||
1. **Traefik Stack Running**
|
||||
```bash
|
||||
cd ../traefik
|
||||
docker compose up -d
|
||||
```
|
||||
|
||||
2. **DNS Configuration**
|
||||
Point these domains to your server IP (94.16.110.151):
|
||||
- `minio-api.michaelschiemer.de`
|
||||
- `minio.michaelschiemer.de`
|
||||
|
||||
## Configuration
|
||||
|
||||
### 1. Create Environment File
|
||||
|
||||
```bash
|
||||
cp .env.example .env
|
||||
```
|
||||
|
||||
### 2. Generate MinIO Root Password
|
||||
|
||||
```bash
|
||||
openssl rand -base64 32
|
||||
```
|
||||
|
||||
Update `.env`:
|
||||
```env
|
||||
MINIO_ROOT_PASSWORD=<generated-password>
|
||||
```
|
||||
|
||||
**Important**: Change default `MINIO_ROOT_USER` in production!
|
||||
|
||||
### 3. Adjust Domains (Optional)
|
||||
|
||||
Edit `.env` to customize domains:
|
||||
```env
|
||||
MINIO_API_DOMAIN=storage-api.example.com
|
||||
MINIO_CONSOLE_DOMAIN=storage.example.com
|
||||
```
|
||||
|
||||
## Deployment
|
||||
|
||||
### Initial Setup
|
||||
|
||||
```bash
|
||||
# Ensure Traefik is running
|
||||
docker network inspect traefik-public
|
||||
|
||||
# Start MinIO
|
||||
docker compose up -d
|
||||
|
||||
# Check logs
|
||||
docker compose logs -f
|
||||
|
||||
# Verify health
|
||||
docker compose ps
|
||||
```
|
||||
|
||||
### Verify Deployment
|
||||
|
||||
```bash
|
||||
# Test API endpoint
|
||||
curl -I https://minio-api.michaelschiemer.de/minio/health/live
|
||||
|
||||
# Expected: HTTP/2 200
|
||||
|
||||
# Access Console
|
||||
open https://minio.michaelschiemer.de
|
||||
```
|
||||
|
||||
## Usage
|
||||
|
||||
### Web Console Access
|
||||
|
||||
1. Navigate to: https://minio.michaelschiemer.de
|
||||
2. Login with:
|
||||
- **Access Key**: Value of `MINIO_ROOT_USER`
|
||||
- **Secret Key**: Value of `MINIO_ROOT_PASSWORD`
|
||||
|
||||
### Create Bucket via Console
|
||||
|
||||
1. Login to Console
|
||||
2. Click "Create Bucket"
|
||||
3. Enter bucket name (e.g., `my-bucket`)
|
||||
4. Configure bucket settings:
|
||||
- Versioning
|
||||
- Object Locking
|
||||
- Quota
|
||||
- Retention policies
|
||||
|
||||
### S3 API Access
|
||||
|
||||
#### Using AWS CLI
|
||||
|
||||
```bash
|
||||
# Install AWS CLI (if not installed)
|
||||
pip install awscli
|
||||
|
||||
# Configure MinIO endpoint
|
||||
aws configure set default.s3.signature_version s3v4
|
||||
aws configure set default.s3.addressing_style virtual
|
||||
|
||||
# Set credentials
|
||||
export AWS_ACCESS_KEY_ID=minioadmin
|
||||
export AWS_SECRET_ACCESS_KEY=<MINIO_ROOT_PASSWORD>
|
||||
|
||||
# Test connection
|
||||
aws --endpoint-url https://minio-api.michaelschiemer.de s3 ls
|
||||
|
||||
# Create bucket
|
||||
aws --endpoint-url https://minio-api.michaelschiemer.de s3 mb s3://my-bucket
|
||||
|
||||
# Upload file
|
||||
aws --endpoint-url https://minio-api.michaelschiemer.de s3 cp file.txt s3://my-bucket/
|
||||
|
||||
# Download file
|
||||
aws --endpoint-url https://minio-api.michaelschiemer.de s3 cp s3://my-bucket/file.txt ./
|
||||
|
||||
# List objects
|
||||
aws --endpoint-url https://minio-api.michaelschiemer.de s3 ls s3://my-bucket/
|
||||
|
||||
# Delete object
|
||||
aws --endpoint-url https://minio-api.michaelschiemer.de s3 rm s3://my-bucket/file.txt
|
||||
|
||||
# Delete bucket
|
||||
aws --endpoint-url https://minio-api.michaelschiemer.de s3 rb s3://my-bucket
|
||||
```
|
||||
|
||||
#### Using MinIO Client (mc)
|
||||
|
||||
```bash
|
||||
# Install MinIO Client
|
||||
wget https://dl.min.io/client/mc/release/linux-amd64/mc
|
||||
chmod +x mc
|
||||
sudo mv mc /usr/local/bin/
|
||||
|
||||
# Configure alias
|
||||
mc alias set minio https://minio-api.michaelschiemer.de minioadmin <MINIO_ROOT_PASSWORD>
|
||||
|
||||
# Test connection
|
||||
mc admin info minio
|
||||
|
||||
# List buckets
|
||||
mc ls minio
|
||||
|
||||
# Create bucket
|
||||
mc mb minio/my-bucket
|
||||
|
||||
# Upload file
|
||||
mc cp file.txt minio/my-bucket/
|
||||
|
||||
# Download file
|
||||
mc cp minio/my-bucket/file.txt ./
|
||||
|
||||
# List objects
|
||||
mc ls minio/my-bucket/
|
||||
|
||||
# Remove object
|
||||
mc rm minio/my-bucket/file.txt
|
||||
|
||||
# Remove bucket
|
||||
mc rb minio/my-bucket
|
||||
```
|
||||
|
||||
#### Using cURL
|
||||
|
||||
```bash
|
||||
# List buckets
|
||||
curl -X GET https://minio-api.michaelschiemer.de \
|
||||
-u "minioadmin:<MINIO_ROOT_PASSWORD>"
|
||||
|
||||
# Upload file (using presigned URL from Console or SDK)
|
||||
```
|
||||
|
||||
### Programmatic Access
|
||||
|
||||
#### PHP (Using AWS SDK)
|
||||
|
||||
```php
|
||||
use Aws\S3\S3Client;
|
||||
use Aws\Exception\AwsException;
|
||||
|
||||
$s3Client = new S3Client([
|
||||
'version' => 'latest',
|
||||
'region' => 'us-east-1',
|
||||
'endpoint' => 'https://minio-api.michaelschiemer.de',
|
||||
'use_path_style_endpoint' => true,
|
||||
'credentials' => [
|
||||
'key' => 'minioadmin',
|
||||
'secret' => '<MINIO_ROOT_PASSWORD>',
|
||||
],
|
||||
]);
|
||||
|
||||
// Create bucket
|
||||
$s3Client->createBucket(['Bucket' => 'my-bucket']);
|
||||
|
||||
// Upload file
|
||||
$s3Client->putObject([
|
||||
'Bucket' => 'my-bucket',
|
||||
'Key' => 'file.txt',
|
||||
'Body' => fopen('/path/to/file.txt', 'r'),
|
||||
]);
|
||||
|
||||
// Download file
|
||||
$result = $s3Client->getObject([
|
||||
'Bucket' => 'my-bucket',
|
||||
'Key' => 'file.txt',
|
||||
]);
|
||||
echo $result['Body'];
|
||||
```
|
||||
|
||||
#### JavaScript/Node.js
|
||||
|
||||
```javascript
|
||||
const AWS = require('aws-sdk');
|
||||
const s3 = new AWS.S3({
|
||||
endpoint: 'https://minio-api.michaelschiemer.de',
|
||||
accessKeyId: 'minioadmin',
|
||||
secretAccessKey: '<MINIO_ROOT_PASSWORD>',
|
||||
s3ForcePathStyle: true,
|
||||
signatureVersion: 'v4',
|
||||
});
|
||||
|
||||
// Create bucket
|
||||
s3.createBucket({ Bucket: 'my-bucket' }, (err, data) => {
|
||||
if (err) console.error(err);
|
||||
else console.log('Bucket created');
|
||||
});
|
||||
|
||||
// Upload file
|
||||
const params = {
|
||||
Bucket: 'my-bucket',
|
||||
Key: 'file.txt',
|
||||
Body: require('fs').createReadStream('/path/to/file.txt'),
|
||||
};
|
||||
s3.upload(params, (err, data) => {
|
||||
if (err) console.error(err);
|
||||
else console.log('File uploaded:', data.Location);
|
||||
});
|
||||
```
|
||||
|
||||
#### Python (boto3)
|
||||
|
||||
```python
|
||||
import boto3
|
||||
from botocore.client import Config
|
||||
|
||||
s3_client = boto3.client(
|
||||
's3',
|
||||
endpoint_url='https://minio-api.michaelschiemer.de',
|
||||
aws_access_key_id='minioadmin',
|
||||
aws_secret_access_key='<MINIO_ROOT_PASSWORD>',
|
||||
config=Config(signature_version='s3v4'),
|
||||
region_name='us-east-1'
|
||||
)
|
||||
|
||||
# Create bucket
|
||||
s3_client.create_bucket(Bucket='my-bucket')
|
||||
|
||||
# Upload file
|
||||
s3_client.upload_file('/path/to/file.txt', 'my-bucket', 'file.txt')
|
||||
|
||||
# Download file
|
||||
s3_client.download_file('my-bucket', 'file.txt', '/path/to/downloaded.txt')
|
||||
|
||||
# List objects
|
||||
response = s3_client.list_objects_v2(Bucket='my-bucket')
|
||||
for obj in response.get('Contents', []):
|
||||
print(obj['Key'])
|
||||
```
|
||||
|
||||
## User Management
|
||||
|
||||
### Create Access Keys via Console
|
||||
|
||||
1. Login to Console: https://minio.michaelschiemer.de
|
||||
2. Navigate to "Access Keys" → "Create Access Key"
|
||||
3. Assign policies (read-only, read-write, admin)
|
||||
4. Save Access Key and Secret Key (only shown once!)
|
||||
|
||||
### Create Access Keys via mc CLI
|
||||
|
||||
```bash
|
||||
# Create new user
|
||||
mc admin user add minio myuser mypassword
|
||||
|
||||
# Create access key for user
|
||||
mc admin user svcacct add minio myuser --name my-access-key
|
||||
|
||||
# Output shows Access Key and Secret Key
|
||||
```
|
||||
|
||||
### Policy Management
|
||||
|
||||
```bash
|
||||
# List policies
|
||||
mc admin policy list minio
|
||||
|
||||
# Create custom policy
|
||||
mc admin policy add minio readwrite-policy /path/to/policy.json
|
||||
|
||||
# Assign policy to user
|
||||
mc admin policy set minio readwrite-policy user=myuser
|
||||
|
||||
# Remove policy from user
|
||||
mc admin policy remove minio readwrite-policy user=myuser
|
||||
```
|
||||
|
||||
**Example Policy** (`policy.json`):
|
||||
```json
|
||||
{
|
||||
"Version": "2012-10-17",
|
||||
"Statement": [
|
||||
{
|
||||
"Effect": "Allow",
|
||||
"Action": [
|
||||
"s3:GetObject",
|
||||
"s3:PutObject",
|
||||
"s3:DeleteObject"
|
||||
],
|
||||
"Resource": ["arn:aws:s3:::my-bucket/*"]
|
||||
},
|
||||
{
|
||||
"Effect": "Allow",
|
||||
"Action": ["s3:ListBucket"],
|
||||
"Resource": ["arn:aws:s3:::my-bucket"]
|
||||
}
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
## Integration with Application Stack
|
||||
|
||||
### Environment Variables
|
||||
|
||||
Add to `application/.env`:
|
||||
|
||||
```env
|
||||
# MinIO Object Storage
|
||||
MINIO_ENDPOINT=https://minio-api.michaelschiemer.de
|
||||
MINIO_ACCESS_KEY=<access-key>
|
||||
MINIO_SECRET_KEY=<secret-key>
|
||||
MINIO_BUCKET=<bucket-name>
|
||||
MINIO_USE_SSL=true
|
||||
MINIO_REGION=us-east-1
|
||||
```
|
||||
|
||||
### PHP Integration
|
||||
|
||||
```php
|
||||
// Use AWS SDK or MinIO PHP SDK
|
||||
use Aws\S3\S3Client;
|
||||
|
||||
$s3Client = new S3Client([
|
||||
'version' => 'latest',
|
||||
'region' => $_ENV['MINIO_REGION'],
|
||||
'endpoint' => $_ENV['MINIO_ENDPOINT'],
|
||||
'use_path_style_endpoint' => true,
|
||||
'credentials' => [
|
||||
'key' => $_ENV['MINIO_ACCESS_KEY'],
|
||||
'secret' => $_ENV['MINIO_SECRET_KEY'],
|
||||
],
|
||||
]);
|
||||
```
|
||||
|
||||
## Backup & Recovery
|
||||
|
||||
### Manual Backup
|
||||
|
||||
```bash
|
||||
#!/bin/bash
|
||||
# backup-minio.sh
|
||||
|
||||
BACKUP_DIR="/backups/minio"
|
||||
DATE=$(date +%Y%m%d_%H%M%S)
|
||||
|
||||
mkdir -p $BACKUP_DIR
|
||||
|
||||
# Backup MinIO data
|
||||
docker run --rm \
|
||||
-v minio-data:/data \
|
||||
-v $BACKUP_DIR:/backup \
|
||||
alpine tar czf /backup/minio-data-$DATE.tar.gz -C /data .
|
||||
|
||||
echo "Backup completed: $BACKUP_DIR/minio-data-$DATE.tar.gz"
|
||||
```
|
||||
|
||||
### Restore from Backup
|
||||
|
||||
```bash
|
||||
# Stop MinIO
|
||||
docker compose down
|
||||
|
||||
# Restore data
|
||||
docker run --rm \
|
||||
-v minio-data:/data \
|
||||
-v /backups/minio:/backup \
|
||||
alpine tar xzf /backup/minio-data-YYYYMMDD_HHMMSS.tar.gz -C /data
|
||||
|
||||
# Start MinIO
|
||||
docker compose up -d
|
||||
```
|
||||
|
||||
### Automated Backups
|
||||
|
||||
Add to crontab:
|
||||
|
||||
```bash
|
||||
# Daily backup at 3 AM
|
||||
0 3 * * * /path/to/backup-minio.sh
|
||||
|
||||
# Keep only last 30 days
|
||||
0 4 * * * find /backups/minio -type f -mtime +30 -delete
|
||||
```
|
||||
|
||||
### Bucket-Level Backup (Recommended)
|
||||
|
||||
Use MinIO's built-in replication or external tools:
|
||||
|
||||
```bash
|
||||
# Sync bucket to external storage
|
||||
mc mirror minio/my-bucket /backup/my-bucket/
|
||||
|
||||
# Or sync to another MinIO instance
|
||||
mc mirror minio/my-bucket remote-minio/my-bucket/
|
||||
```
|
||||
|
||||
## Monitoring
|
||||
|
||||
### Health Checks
|
||||
|
||||
```bash
|
||||
# Check MinIO health
|
||||
docker compose ps
|
||||
|
||||
# API health endpoint
|
||||
curl -f https://minio-api.michaelschiemer.de/minio/health/live
|
||||
|
||||
# Check storage usage
|
||||
docker exec minio du -sh /data
|
||||
```
|
||||
|
||||
### Logs
|
||||
|
||||
```bash
|
||||
# View logs
|
||||
docker compose logs -f
|
||||
|
||||
# Check for errors
|
||||
docker compose logs minio | grep -i error
|
||||
|
||||
# Monitor API access
|
||||
docker compose logs -f minio | grep "GET /"
|
||||
```
|
||||
|
||||
### Storage Statistics
|
||||
|
||||
```bash
|
||||
# Check volume size
|
||||
docker volume inspect minio-data
|
||||
|
||||
# Check disk usage
|
||||
docker system df -v | grep minio
|
||||
|
||||
# List buckets via mc
|
||||
mc ls minio
|
||||
```
|
||||
|
||||
### Metrics (via mc)
|
||||
|
||||
```bash
|
||||
# Server info
|
||||
mc admin info minio
|
||||
|
||||
# Service status
|
||||
mc admin service status minio
|
||||
|
||||
# Trace operations
|
||||
mc admin trace minio
|
||||
```
|
||||
|
||||
## Performance Tuning
|
||||
|
||||
### MinIO Configuration
|
||||
|
||||
For high-traffic scenarios, edit `docker-compose.yml`:
|
||||
|
||||
```yaml
|
||||
environment:
|
||||
# Increase concurrent operations
|
||||
- MINIO_CACHE_DRIVES=/mnt/cache1,/mnt/cache2
|
||||
- MINIO_CACHE_QUOTA=80
|
||||
- MINIO_CACHE_AFTER=0
|
||||
- MINIO_CACHE_WATERMARK_LOW=70
|
||||
- MINIO_CACHE_WATERMARK_HIGH=90
|
||||
```
|
||||
|
||||
### Storage Optimization
|
||||
|
||||
```bash
|
||||
# Monitor storage growth
|
||||
du -sh /var/lib/docker/volumes/minio-data/
|
||||
|
||||
# Enable compression (handled automatically by MinIO)
|
||||
|
||||
# Set bucket quotas via Console or mc
|
||||
mc admin quota set minio/my-bucket --hard 100GB
|
||||
```
|
||||
|
||||
## Troubleshooting
|
||||
|
||||
### Cannot Access Console
|
||||
|
||||
```bash
|
||||
# Check service is running
|
||||
docker compose ps
|
||||
|
||||
# Check Traefik routing
|
||||
docker exec traefik cat /etc/traefik/traefik.yml
|
||||
|
||||
# Check network
|
||||
docker network inspect traefik-public | grep minio
|
||||
|
||||
# Test from server
|
||||
curl -k https://localhost:9001
|
||||
```
|
||||
|
||||
### Authentication Failed
|
||||
|
||||
```bash
|
||||
# Verify environment variables
|
||||
docker exec minio env | grep MINIO_ROOT
|
||||
|
||||
# Check logs
|
||||
docker compose logs minio | grep -i auth
|
||||
|
||||
# Reset root password (stop container, remove volume, restart)
|
||||
```
|
||||
|
||||
### SSL Certificate Issues
|
||||
|
||||
```bash
|
||||
# Verify Traefik certificate
|
||||
docker exec traefik cat /acme.json | grep minio
|
||||
|
||||
# Test SSL
|
||||
openssl s_client -connect minio-api.michaelschiemer.de:443 \
|
||||
-servername minio-api.michaelschiemer.de < /dev/null
|
||||
```
|
||||
|
||||
### Storage Issues
|
||||
|
||||
```bash
|
||||
# Check volume mount
|
||||
docker exec minio df -h /data
|
||||
|
||||
# Check for corrupted data
|
||||
docker exec minio find /data -type f -name "*.json" | head
|
||||
|
||||
# Check disk space
|
||||
df -h /var/lib/docker/volumes/minio-data/
|
||||
```
|
||||
|
||||
### API Connection Errors
|
||||
|
||||
```bash
|
||||
# Verify endpoint URL
|
||||
curl -I https://minio-api.michaelschiemer.de/minio/health/live
|
||||
|
||||
# Test with credentials
|
||||
curl -u minioadmin:<password> https://minio-api.michaelschiemer.de
|
||||
|
||||
# Check CORS settings (if needed for web apps)
|
||||
mc admin config set minio api cors_allow_origin "https://yourdomain.com"
|
||||
```
|
||||
|
||||
## Security
|
||||
|
||||
### Security Best Practices
|
||||
|
||||
1. **Strong Credentials**: Use strong passwords for root user and access keys
|
||||
2. **Change Default Root User**: Don't use `minioadmin` in production
|
||||
3. **SSL Only**: Always use HTTPS (enforced via Traefik)
|
||||
4. **Access Key Rotation**: Regularly rotate access keys
|
||||
5. **Policy-Based Access**: Use IAM policies to limit permissions
|
||||
6. **Bucket Policies**: Configure bucket-level policies
|
||||
7. **Audit Logging**: Enable audit logging for compliance
|
||||
8. **Encryption**: Enable encryption at rest and in transit
|
||||
|
||||
### Enable Audit Logging
|
||||
|
||||
```bash
|
||||
# Configure audit logging
|
||||
mc admin config set minio audit_webhook \
|
||||
endpoint=https://log-service.example.com/webhook \
|
||||
auth_token=<token>
|
||||
```
|
||||
|
||||
### Enable Encryption
|
||||
|
||||
```bash
|
||||
# Set encryption keys
|
||||
mc admin config set minio encryption \
|
||||
sse-s3 enable \
|
||||
kms endpoint=https://kms.example.com \
|
||||
kms-key-id=<key-id>
|
||||
```
|
||||
|
||||
### Update Stack
|
||||
|
||||
```bash
|
||||
# Pull latest images
|
||||
docker compose pull
|
||||
|
||||
# Recreate containers
|
||||
docker compose up -d
|
||||
|
||||
# Verify
|
||||
docker compose ps
|
||||
```
|
||||
|
||||
### Security Headers
|
||||
|
||||
Security headers are applied via Traefik's `default-chain@file` middleware:
|
||||
- HSTS
|
||||
- Content-Type Nosniff
|
||||
- XSS Protection
|
||||
- Frame Deny
|
||||
|
||||
## Additional Resources
|
||||
|
||||
- **MinIO Documentation**: https://min.io/docs/
|
||||
- **S3 API Compatibility**: https://min.io/docs/minio/linux/reference/minio-mc/mc.html
|
||||
- **MinIO Client (mc)**: https://min.io/docs/minio/linux/reference/minio-mc.html
|
||||
- **MinIO Erasure Coding**: https://min.io/docs/minio/linux/operations/concepts/erasure-coding.html
|
||||
- **MinIO Security**: https://min.io/docs/minio/linux/operations/security.html
|
||||
50
deployment/stacks/minio/docker-compose.yml
Normal file
50
deployment/stacks/minio/docker-compose.yml
Normal file
@@ -0,0 +1,50 @@
|
||||
services:
|
||||
minio:
|
||||
image: minio/minio:latest
|
||||
container_name: minio
|
||||
restart: unless-stopped
|
||||
networks:
|
||||
- traefik-public
|
||||
environment:
|
||||
- TZ=Europe/Berlin
|
||||
- MINIO_ROOT_USER=${MINIO_ROOT_USER:-minioadmin}
|
||||
- MINIO_ROOT_PASSWORD=${MINIO_ROOT_PASSWORD}
|
||||
command: server /data --console-address ":9001"
|
||||
volumes:
|
||||
- minio-data:/data
|
||||
- /etc/timezone:/etc/timezone:ro
|
||||
- /etc/localtime:/etc/localtime:ro
|
||||
healthcheck:
|
||||
test: ["CMD", "curl", "-f", "http://localhost:9000/minio/health/live"]
|
||||
interval: 30s
|
||||
timeout: 10s
|
||||
retries: 3
|
||||
start_period: 10s
|
||||
labels:
|
||||
- "traefik.enable=true"
|
||||
|
||||
# API Router (S3-compatible endpoint)
|
||||
- "traefik.http.routers.minio-api.rule=Host(`${MINIO_API_DOMAIN:-minio-api.michaelschiemer.de}`)"
|
||||
- "traefik.http.routers.minio-api.entrypoints=websecure"
|
||||
- "traefik.http.routers.minio-api.tls=true"
|
||||
- "traefik.http.routers.minio-api.tls.certresolver=letsencrypt"
|
||||
- "traefik.http.routers.minio-api.service=minio-api"
|
||||
- "traefik.http.routers.minio-api.middlewares=default-chain@file"
|
||||
- "traefik.http.services.minio-api.loadbalancer.server.port=9000"
|
||||
|
||||
# Console Router (Web UI)
|
||||
- "traefik.http.routers.minio-console.rule=Host(`${MINIO_CONSOLE_DOMAIN:-minio.michaelschiemer.de}`)"
|
||||
- "traefik.http.routers.minio-console.entrypoints=websecure"
|
||||
- "traefik.http.routers.minio-console.tls=true"
|
||||
- "traefik.http.routers.minio-console.tls.certresolver=letsencrypt"
|
||||
- "traefik.http.routers.minio-console.service=minio-console"
|
||||
- "traefik.http.routers.minio-console.middlewares=default-chain@file"
|
||||
- "traefik.http.services.minio-console.loadbalancer.server.port=9001"
|
||||
|
||||
volumes:
|
||||
minio-data:
|
||||
name: minio-data
|
||||
|
||||
networks:
|
||||
traefik-public:
|
||||
external: true
|
||||
@@ -1,5 +1,3 @@
|
||||
version: '3.8'
|
||||
|
||||
services:
|
||||
# PostgreSQL Database
|
||||
postgres:
|
||||
|
||||
@@ -1,5 +1,3 @@
|
||||
version: '3.8'
|
||||
|
||||
services:
|
||||
registry:
|
||||
image: registry:2.8
|
||||
@@ -8,7 +6,7 @@ services:
|
||||
networks:
|
||||
- traefik-public
|
||||
ports:
|
||||
- "127.0.0.1:5000:5000"
|
||||
- "0.0.0.0:5000:5000"
|
||||
environment:
|
||||
- TZ=Europe/Berlin
|
||||
- REGISTRY_STORAGE_DELETE_ENABLED=true
|
||||
@@ -39,7 +37,7 @@ services:
|
||||
# Middleware
|
||||
- "traefik.http.routers.registry.middlewares=default-chain@file"
|
||||
healthcheck:
|
||||
test: ["CMD", "wget", "--spider", "-q", "http://localhost:5000/v2/"]
|
||||
test: ["CMD-SHELL", "wget --spider -q --header='Authorization: Basic YWRtaW46cmVnaXN0cnktc2VjdXJlLXBhc3N3b3JkLTIwMjU=' http://localhost:5000/v2/ || exit 1"]
|
||||
interval: 30s
|
||||
timeout: 10s
|
||||
retries: 3
|
||||
|
||||
@@ -1,5 +1,3 @@
|
||||
version: '3.8'
|
||||
|
||||
services:
|
||||
traefik:
|
||||
image: traefik:v3.0
|
||||
|
||||
Reference in New Issue
Block a user