fix: Gitea Traefik routing and connection pool optimization
Some checks failed
🚀 Build & Deploy Image / Determine Build Necessity (push) Failing after 10m14s
🚀 Build & Deploy Image / Build Runtime Base Image (push) Has been skipped
🚀 Build & Deploy Image / Build Docker Image (push) Has been skipped
🚀 Build & Deploy Image / Run Tests & Quality Checks (push) Has been skipped
🚀 Build & Deploy Image / Auto-deploy to Staging (push) Has been skipped
🚀 Build & Deploy Image / Auto-deploy to Production (push) Has been skipped
Security Vulnerability Scan / Check for Dependency Changes (push) Failing after 11m25s
Security Vulnerability Scan / Composer Security Audit (push) Has been cancelled
- Remove middleware reference from Gitea Traefik labels (caused routing issues)
- Optimize Gitea connection pool settings (MAX_IDLE_CONNS=30, authentication_timeout=180s)
- Add explicit service reference in Traefik labels
- Fix intermittent 504 timeouts by improving PostgreSQL connection handling

Fixes Gitea unreachability via git.michaelschiemer.de
docs/deployment/DEPLOYMENT_FIX.md (new file, 22 lines added)
@@ -0,0 +1,22 @@
# Deployment Fix for the Staging 502 Error

The problem: the 502 error comes back after every deployment.

## Solution: Extend the Deployment Script

Insert the following code after line 991 in `.gitea/workflows/build-image.yml`:

```yaml
# Fix nginx upstream configuration - critical fix for 502 errors
# sites-available/default uses 127.0.0.1:9000 but PHP-FPM runs in the staging-app container
echo \"?? Fixing nginx PHP-FPM upstream configuration (post-deploy fix)...\"
sleep 5
docker compose exec -T staging-nginx sed -i '/upstream php-upstream {/,/}/s|server 127.0.0.1:9000;|server staging-app:9000;|g' /etc/nginx/sites-available/default || echo \"?? Upstream fix (127.0.0.1) failed\"
docker compose exec -T staging-nginx sed -i '/upstream php-upstream {/,/}/s|server localhost:9000;|server staging-app:9000;|g' /etc/nginx/sites-available/default || echo \"?? Upstream fix (localhost) failed\"
docker compose exec -T staging-nginx nginx -t && docker compose restart staging-nginx || echo \"?? Nginx config test or restart failed\"
echo \"? Nginx configuration fixed and reloaded\"
```

**Position:** after line 991 (`docker compose restart staging-app || echo \"?? Failed to restart staging-app\"`) and before line 993 (`echo \"? Waiting for services to stabilize...\"`).

**Reason:** the containers are recreated with `--force-recreate`, so `/etc/nginx/sites-available/default` is restored from the Docker image and points at `127.0.0.1:9000` again. This fix therefore has to run after every deployment.
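
To confirm the rewrite took effect, the upstream block can be inspected directly in the running container (a minimal sketch, assuming only the `staging-nginx` service and config path used above):

```bash
# Show the upstream block after the post-deploy fix has run
docker compose exec -T staging-nginx grep -A 3 'upstream php-upstream' /etc/nginx/sites-available/default
# Expected: the upstream now points at the staging-app container instead of 127.0.0.1:
#   upstream php-upstream {
#       server staging-app:9000;
#   }
```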
docs/deployment/ENV_SETUP.md (new file, 266 lines added)
@@ -0,0 +1,266 @@
# Environment Configuration Guide

## 📁 .env File Structure (Base + Override Pattern)

The new structure uses a **base + override pattern** (analogous to docker-compose):

```
├── .env.example      # Template for new developers (full documentation)
├── .env.base         # Shared variables for all environments (versioned)
├── .env.local        # Local development overrides (gitignored)
├── .env.staging      # Staging-specific overrides (optional, gitignored)
└── .env.production   # Production (generated by Ansible, not in the repo)
```

## 🏗️ Development Setup

### Initial Setup

```bash
# 1. .env.base is already in the repository (shared variables)

# 2. Create .env.local for local overrides
cp .env.example .env.local

# 3. Adjust .env.local for your local development
#    - DB credentials (local)
#    - API keys (local)
#    - debug flags
```

### The Framework Loads Automatically: `.env.base` → `.env.local` (Overrides)

**Priority:**
1. System environment variables (Docker ENV)
2. `.env.base` (shared base)
3. `.env.local` (local overrides)
4. `.env.secrets` (encrypted secrets, optional)

**Important:** `env_file` in Docker Compose is not required!
- The framework automatically loads `.env.base` → `.env.local` via `EncryptedEnvLoader` (a short illustration follows below)
- Docker Compose `env_file` is optional and only needed for container-internal variables
- The PHP application reads ENV variables directly from the files

**Backward compatibility:**
- If `.env.base` or `.env.local` does not exist, `.env` is loaded as a fallback
- Migration: existing `.env` files keep working
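
As a concrete illustration of the override behavior (a minimal sketch; `APP_DEBUG` is just a hypothetical variable name, not one defined by this guide):

```bash
# .env.base ships a shared default, .env.local overrides it locally
echo "APP_DEBUG=false" >> .env.base
echo "APP_DEBUG=true"  >> .env.local
# Because .env.local is applied on top of .env.base, the framework resolves APP_DEBUG=true;
# system environment variables (Docker ENV) sit at the top of the priority list above.
```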

## 🚀 Production Deployment

### Production .env Management

**IMPORTANT**: the production `.env` is **NOT** deployed from the repository!

### Single Source of Truth

```
Server: /home/deploy/michaelschiemer/shared/.env.production
```

This file is created automatically by **Ansible** from:
```
deployment/infrastructure/templates/.env.production.j2
```

### Production Deployment Process

```bash
# The Ansible playbook creates the production .env automatically
cd deployment/infrastructure
ansible-playbook -i inventories/production/hosts.yml \
  playbooks/deploy-rsync-based.yml \
  --vault-password-file .vault_pass
```

**Ansible then:**
1. renders the template `.env.production.j2`
2. injects the vault-encrypted secrets
3. deploys the file to `/home/deploy/michaelschiemer/shared/.env.production`
4. Docker Compose mounts this file into the containers

## 🔒 Security & Secret Management

### Docker Secrets (Production & Staging)

**Production and staging use Docker Secrets:**

1. **Ansible Vault → Docker secret files**
   - the Ansible playbook creates the secret files in the `secrets/` directory
   - the files get restrictive permissions (0600)

2. **Docker Compose secrets**
   - secrets are defined in `docker-compose.base.yml` (see the sketch below)
   - environment variables use the `*_FILE` pattern (e.g. `DB_PASSWORD_FILE=/run/secrets/db_user_password`)
   - the framework loads them automatically via `DockerSecretsResolver`

3. **Framework support**
   - `DockerSecretsResolver` handles the `*_FILE` pattern automatically
   - no manual secret loading required (handled by the framework)
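
A minimal sketch of how these pieces fit together in `docker-compose.base.yml`; the `app` service name is illustrative, while the `db_user_password` secret and the `DB_PASSWORD_FILE` variable are the ones named above:

```yaml
services:
  app:                                   # illustrative service name
    environment:
      DB_PASSWORD_FILE: /run/secrets/db_user_password   # *_FILE pattern read by DockerSecretsResolver
    secrets:
      - db_user_password

secrets:
  db_user_password:
    file: ./secrets/db_user_password.txt # created by the Ansible playbook with 0600 permissions
```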

### Development

```bash
# Never commit .env.local
git status
# Should show: .env.local (untracked) ✅

# .env.base is versioned (no secrets!)
# If accidentally staged:
git reset HEAD .env.local
```

### Production

- ✅ Secrets live in Ansible Vault
- ✅ Ansible creates the Docker secret files (`secrets/*.txt`)
- ✅ Docker Compose secrets enabled
- ✅ The framework loads them automatically via the `*_FILE` pattern
- ✅ The `.env.production` on the server is NOT committed to the repository
- ✅ The template `application.env.j2` uses the `*_FILE` pattern

## 📝 Adding New Environment Variables

### Development

```bash
# 1. Add to .env.base if shared across environments
echo "NEW_API_KEY=" >> .env.base

# 2. Add to .env.local for local development
echo "NEW_API_KEY=abc123..." >> .env.local

# 3. Update .env.example for documentation
echo "NEW_API_KEY=your_api_key_here" >> .env.example
```

**Note:** if the variable is only needed for local development, add it to `.env.local` only.

### Production (with Docker Secrets)

```bash
# 1. Add to Ansible template (use *_FILE pattern for secrets)
# File: deployment/ansible/templates/application.env.j2
echo "# Use Docker Secrets via *_FILE pattern" >> application.env.j2
echo "NEW_API_KEY_FILE=/run/secrets/new_api_key" >> application.env.j2

# 2. Add to docker-compose.base.yml secrets section
# File: docker-compose.base.yml
# secrets:
#   new_api_key:
#     file: ./secrets/new_api_key.txt

# 3. Add secret to Ansible Vault
ansible-vault edit deployment/ansible/secrets/production.vault.yml
# Add: vault_new_api_key: "production_value"

# 4. Update setup-production-secrets.yml to create the secret file
# File: deployment/ansible/playbooks/setup-production-secrets.yml
# Add to loop:
#   - name: new_api_key
#     value: "{{ vault_new_api_key }}"

# 5. Deploy
cd deployment/ansible
ansible-playbook -i inventory/production.yml \
  playbooks/setup-production-secrets.yml \
  --vault-password-file .vault_pass
```
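
For step 4, the playbook task that turns vault variables into secret files could look roughly like this (a hedged sketch, not the actual `setup-production-secrets.yml`; the loop structure and the use of the `copy` module are assumptions, only the file naming, the `0600` mode, `app_stack_path`, and `vault_new_api_key` come from this guide):

```yaml
- name: Create Docker secret files from Ansible Vault variables
  copy:
    content: "{{ item.value }}"
    dest: "{{ app_stack_path }}/secrets/{{ item.name }}.txt"   # e.g. secrets/new_api_key.txt
    mode: "0600"
  loop:
    - { name: new_api_key, value: "{{ vault_new_api_key }}" }
  no_log: true   # keep secret values out of the Ansible output
```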

### Production (without Docker Secrets, fallback)

If Docker Secrets should not be used:

```bash
# 1. Add to Ansible template
# File: deployment/ansible/templates/application.env.j2
echo "NEW_API_KEY={{ vault_new_api_key }}" >> application.env.j2

# 2. Add secret to Ansible Vault
ansible-vault edit deployment/ansible/secrets/production.vault.yml
# Add: vault_new_api_key: "production_value"

# 3. Deploy
cd deployment/ansible
ansible-playbook -i inventory/production.yml \
  playbooks/deploy-update.yml
```

## 🗑️ Removed Files (Consolidation 27.10.2024)

These files were deleted because they were redundant or unused:

```
❌ .env.production (root - redundant)
❌ .env.production.example (root - unused)
❌ .env.backup.20250912_133135 (old backup)
❌ .env.analytics.example (merged into .env.example)
❌ .env.secrets.example (merged into .env.example)
❌ deployment/applications/environments/ (entire directory removed)
```

## ✅ Current State

### Local Development
- ✅ Base file: `.env.base` (versioned, shared variables)
- ✅ Override file: `.env.local` (gitignored, local adjustments)
- ✅ Template: `.env.example` (documentation)
- ✅ The framework loads automatically: `.env.base` → `.env.local` (overrides)

### Production
- ✅ Single source: `/home/deploy/michaelschiemer/shared/.env.production` (on the server)
- ✅ Managed via the Ansible template `application.env.j2`
- ✅ Secrets: Docker Secrets (`secrets/*.txt` files)
- ✅ The framework loads them automatically via the `*_FILE` pattern (`DockerSecretsResolver`)
- ✅ No duplicates

### Staging
- ✅ Docker Compose environment variables
- ✅ Docker Secrets enabled (same as production)
- ✅ Optional: `.env.staging` for staging-specific overrides

## 🔍 Verification

```bash
# Check local .env files
ls -la .env*
# Should show: .env.base (versioned), .env.local (gitignored), .env.example

# Check the Ansible template exists
ls -la deployment/ansible/templates/application.env.j2
# Should exist

# Check the Docker Secrets files exist (on the server)
ls -la {{ app_stack_path }}/secrets/
# Should show: db_user_password.txt, redis_password.txt, app_key.txt, etc.

# Check NO old files remain
find . -name ".env.production" -o -name ".env.*.example" | grep -v .env.example | grep -v .env.base
# Should be empty
```

## 📞 Support

For questions about the .env setup:
- Development: see `.env.base` (shared variables) and `.env.example` (documentation)
- Production: see `deployment/ansible/templates/application.env.j2`
- Secrets: Docker Secrets enabled, managed via Ansible Vault
- Migration: the framework supports falling back to `.env` (old structure)

## 🔄 Migration from the Old Structure

**From `.env` to `.env.base` + `.env.local`:**

```bash
# 1. Create .env.base (extract the shared variables)
#    (picked up automatically by the framework)

# 2. Create .env.local (local overrides only)
cp .env .env.local

# 3. Remove the shared variables from .env.local
#    (keep only local adjustments; see the sketch after this block)

# 4. The old .env can be removed later
#    (once the migration has been verified)
```
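
For step 3, a simple way to spot candidates for removal is to list the entries that are identical in both files (a minimal sketch, assuming one `VAR=value` pair per line and no multi-line values):

```bash
# Lines that appear verbatim in both files are shared defaults and can be dropped from .env.local
comm -12 <(sort .env.base) <(sort .env.local)
```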

**Note:** the framework automatically loads `.env.base` + `.env.local`. If these do not exist, `.env` is loaded as a fallback (backward compatibility).
docs/deployment/status/.deployment-test-status.md (new file, 100 lines added)
@@ -0,0 +1,100 @@
# Deployment Test Status - Chat Summary

**Date**: current session
**Purpose**: test the staging setup with separate database stacks

## ✅ Completed

### 1. Implementation
- ✅ Separate PostgreSQL stacks created (production & staging)
- ✅ Ansible roles for the PostgreSQL stacks
- ✅ Application stacks adjusted (DB connections)
- ✅ Staging setup playbook created
- ✅ Documentation updated
- ✅ Quick-start scripts created
- ✅ Ansible verification playbooks created

### 2. Local Verification
- ✅ PostgreSQL production stack: syntax OK
- ✅ PostgreSQL staging stack: syntax OK
- ✅ Staging stack (root): syntax OK
- ⚠️ Ansible: not installed locally (needed on the control node)

## ❌ Current Problems

### SSH Access
- **Problem**: the SSH key cannot be loaded
- **Error**: `error in libcrypto` / `Permission denied (publickey)`
- **Key present**: `~/.ssh/production` (permissions: 600)
- **Possible causes**: key format, WSL/libcrypto compatibility (a troubleshooting sketch follows below)
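
A hedged troubleshooting sketch for the key-format theory (not verified against this setup; `~/.ssh/production` and the `production` host alias are the ones mentioned in this document):

```bash
# Check whether OpenSSH can parse the key at all; a parse failure points at a format problem
ssh-keygen -l -f ~/.ssh/production

# Re-encode the private key in classic PEM format (prompts for the passphrase, if any)
ssh-keygen -p -m PEM -f ~/.ssh/production

# Retry the connection explicitly with this key
ssh -i ~/.ssh/production -o IdentitiesOnly=yes production 'echo connected'
```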

## 📋 Next Steps

### Option 1: Manual Tests on the Server
- **Guide**: `deployment/docs/guides/manual-server-test.md`
- **Steps** (rough sketch below):
  1. SSH connection: `ssh production`
  2. Start the PostgreSQL stacks
  3. Verify the networks
  4. Test the database connections
  5. Run the health checks
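
Roughly, the steps above could look like this on the server (a hedged sketch; the compose working directory, service name, and network filter terms are assumptions, the authoritative commands are in `deployment/docs/guides/manual-server-test.md`):

```bash
ssh production                                        # 1. connect via the `production` host alias

docker compose up -d                                  # 2. start the PostgreSQL stack from its compose directory (path assumed)
docker network ls | grep -i -e staging -e postgres    # 3. confirm the expected networks exist (filter terms assumed)
docker compose exec -T postgres pg_isready            # 4. basic database connectivity check (service name assumed)
docker compose ps                                     # 5. container status / health overview
```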

### Option 2: Use Ansible
```bash
cd ~/dev/michaelschiemer/deployment/ansible
ansible-playbook -i inventory/production.yml playbooks/verify-staging.yml
```

### Option 3: Quick-Start Script (on the server)
```bash
cd ~/deployment
./scripts/staging-quick-start.sh
```

## 📁 Important Files

### Documentation
- `deployment/docs/guides/test-execution-plan.md` - detailed test plan
- `deployment/docs/guides/manual-server-test.md` - manual test instructions
- `deployment/docs/guides/ansible-vs-bash-scripts.md` - tool comparison

### Scripts
- `deployment/scripts/staging-quick-start.sh` - interactive test script
- `deployment/scripts/production-quick-start.sh` - production test script

### Ansible Playbooks
- `deployment/ansible/playbooks/setup-staging.yml` - staging setup
- `deployment/ansible/playbooks/verify-staging.yml` - staging verification
- `deployment/ansible/playbooks/verify-production.yml` - production verification

## 🎯 Test Phases (still outstanding)

1. ✅ Phase 1: local syntax verification
2. ⏳ Phase 2: test the PostgreSQL stacks (on the server)
3. ⏳ Phase 3: verify the networks
4. ⏳ Phase 4: test the Ansible setup
5. ⏳ Phase 5: test the database connections
6. ⏳ Phase 6: health checks
7. ⏳ Phase 7: test the CI/CD workflow
8. ⏳ Phase 8: test database isolation

## 💡 Recommendation for Continuing

1. **Rebuild the server** (see `deployment/docs/guides/server-rebuild-plan.md`)
   - ✅ Detailed plan written (Debian 13 Trixie + UEFI)
   - ✅ SSH access documented (`deployment/docs/guides/ssh-access.md`)
   - ✅ Initial server setup playbook created (`deployment/ansible/playbooks/initial-server-setup.yml`)
   - ✅ Docker installation adapted for Debian
   - ⏳ Reset the server via the Netcup control panel (Debian 13 Trixie, UEFI)
   - ⏳ Complete setup via Ansible
2. **After the rebuild**: PostgreSQL stacks, Ansible verification, end-to-end tests

## 🔧 Known Issues

- SSH key problem: `error in libcrypto` - must be fixed before automated tests can run
- Alternative: manual tests or Ansible

---

**Note**: this status can be resumed with `cursor-agent resume`.