chore: Update deployment configuration and documentation

- Update Gitea configuration (remove DEFAULT_ACTIONS_URL)
- Fix deployment documentation
- Update Ansible playbooks
- Clean up deprecated files
- Add new deployment scripts and templates
Committed: 2025-10-31 21:11:11 +01:00
Parent: cf4748f8db
Commit: 16d586ecdf
92 changed files with 4601 additions and 10524 deletions


@@ -1,543 +0,0 @@
# Production Deployment Plan
**Project**: Custom PHP Framework
**Date**: 2025-01-15
**Goal**: First production deployment
---
## Status Check
### Git Status
- ✅ 18 commits ahead of remote
- ⚠️ Many unstaged changes
- ⚠️ New deployment documentation not yet committed
### Code Status
- ✅ Health check system implemented
- ✅ Metrics endpoint implemented
- ✅ Production logging configured
- ✅ Deployment documentation complete
---
## Before the Deployment
### 1. Prepare the Code (15 minutes)
```bash
# Commit all changes
git add .
git commit -m "feat(Production): Complete production deployment infrastructure

- Add comprehensive health check system with multiple endpoints
- Add Prometheus metrics endpoint
- Add production logging configurations
- Add complete deployment documentation (Quick Start, Workflow, Ansible, Checklists)
- Add deployment scripts and automation
- Update README with deployment links

All production infrastructure is now complete and ready for deployment."

# Push to the remote
git push origin main
```
### 2. Local Tests (10 minutes)
```bash
# Run the test suite
./vendor/bin/pest
# Code style check
composer cs
# Static analysis with PHPStan
./vendor/bin/phpstan analyse
# Test the health checks locally
curl http://localhost/health/summary
curl http://localhost/health/detailed
```
### 3. Collect Server Information (5 minutes)
**Required information**:
- [ ] Server IP address: _________________
- [ ] Domain name: _________________
- [ ] SSH username: _________________
- [ ] SSH key available: [ ] Yes [ ] No
**Check whether the server meets the requirements**:
- [ ] Ubuntu 22.04+ or Debian 11+
- [ ] Root or sudo access
- [ ] At least 4 GB RAM
- [ ] At least 40 GB disk space
- [ ] Ports 22, 80, and 443 open
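The RAM, disk, and OS items above can be checked in one pass over SSH. A minimal sketch, assuming GNU coreutils on the target host; the 4 GB / 40 GB thresholds simply mirror the checklist:

```shell
#!/usr/bin/env bash
# Print basic server facts against the checklist thresholds.
check_requirements() {
    local ram_kb ram_gb disk_gb
    # Total RAM in GB, read from /proc/meminfo (reported in kB)
    ram_kb=$(awk '/MemTotal/ {print $2}' /proc/meminfo)
    ram_gb=$(( ram_kb / 1024 / 1024 ))
    # Available disk space on / in GB
    disk_gb=$(df -BG --output=avail / | tail -n1 | tr -dc '0-9')
    echo "RAM_GB=${ram_gb} (need >= 4)"
    echo "DISK_GB=${disk_gb} (need >= 40)"
    echo "OS=$(. /etc/os-release && echo "$PRETTY_NAME")"
}
check_requirements
```

Open ports are best verified from outside the host (for example with `nc -zv <server-ip> 22 80 443`), since a local listener says nothing about the firewall.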
---
## Deployment Options
### Option A: Quick Start (recommended for the first deployment)
**Time**: 30 minutes
**Documentation**: [docs/deployment/QUICKSTART.md](docs/deployment/QUICKSTART.md)
**Steps**:
1. SSH into the server
2. Apply system updates
3. Install Docker
4. Obtain an SSL certificate with Let's Encrypt
5. Clone the repository
6. Generate secrets
7. Configure the environment
8. Start the containers
9. Verify health checks
10. Configure Nginx
**Advantages**:
- Fastest path to a running system
- Minimal configuration
- Immediate verification
**Get started**:
```bash
# Open the documentation
cat docs/deployment/QUICKSTART.md
# Or read it online on GitHub
```
### Option B: Full Workflow (for a production setup)
**Time**: 2 hours
**Documentation**: [docs/deployment/DEPLOYMENT_WORKFLOW.md](docs/deployment/DEPLOYMENT_WORKFLOW.md)
**Steps**:
- Phase 1: Initial server setup (one-time)
- Phase 2: Initial deployment
- Phase 3: Ongoing deployment setup
- Phase 4: Monitoring setup
**Advantages**:
- Complete production configuration
- Monitoring and alerting
- Automated deployment scripts
- Backup strategies
**Get started**:
```bash
cat docs/deployment/DEPLOYMENT_WORKFLOW.md
```
### Option C: Ansible (for multiple servers)
**Time**: 4 hours initially, 5 minutes ongoing
**Documentation**: [docs/deployment/ANSIBLE_DEPLOYMENT.md](docs/deployment/ANSIBLE_DEPLOYMENT.md)
**Use only if**:
- You run multiple servers (staging + production)
- You need team collaboration
- You want Infrastructure as Code
---
## Recommended Procedure
Based on your setup, I recommend **Option A: Quick Start** for the first deployment.
### Step 1: Prepare the Code
```bash
# In the project directory
cd /home/michael/dev/michaelschiemer
# Commit all changes
git add .
git commit -m "feat(Production): Complete production deployment infrastructure"
git push origin main
```
### Step 2: Prepare the Server
```bash
# SSH into the server
ssh user@your-server.com
# Update the system
sudo apt update && sudo apt upgrade -y
# Install Docker
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
# Install the Docker Compose plugin
sudo apt install docker-compose-plugin -y
# Install Certbot
sudo apt install certbot -y
# Create the application user
sudo useradd -m -s /bin/bash appuser
sudo usermod -aG docker appuser
```
### Step 3: SSL Certificate
```bash
# Create the webroot
sudo mkdir -p /var/www/certbot
# Obtain the certificate (replace yourdomain.com)
sudo certbot certonly --webroot \
    -w /var/www/certbot \
    -d yourdomain.com \
    --email your-email@example.com \
    --agree-tos \
    --non-interactive
# Verify the certificates
ls -la /etc/letsencrypt/live/yourdomain.com/
```
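Let's Encrypt certificates expire after 90 days. On Debian/Ubuntu the certbot package usually installs a systemd timer for renewal; if it does not, a cron entry along these lines keeps renewal automatic (a sketch — the reload hook assumes the Nginx setup from step 10):

```bash
# /etc/cron.d/certbot-renew — try twice a day; certbot only renews
# certificates that are within 30 days of expiry.
0 3,15 * * * root certbot renew --quiet --deploy-hook "systemctl reload nginx"
```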
### Step 4: Clone the Application
```bash
# As appuser
sudo su - appuser
# Clone the repository
git clone https://github.com/your-org/your-repo.git /home/appuser/app
cd /home/appuser/app
# Production branch
git checkout main
```
### Step 5: Generate Secrets
```bash
# Generate the vault key
php scripts/deployment/generate-vault-key.php
# IMPORTANT: store the output securely!
# Example: vault_key_abc123def456...
```
### Step 6: Configure the Environment
```bash
# Create .env.production
cp .env.example .env.production
nano .env.production
```
**Minimal configuration**:
```env
APP_ENV=production
APP_DEBUG=false
APP_URL=https://yourdomain.com
DB_HOST=database
DB_PORT=3306
DB_NAME=app_production
DB_USER=app_user
DB_PASS=<GENERATED_PASSWORD>
VAULT_ENCRYPTION_KEY=<YOUR_KEY_FROM_STEP_5>
LOG_PATH=/var/log/app
LOG_LEVEL=INFO
ADMIN_ALLOWED_IPS=<YOUR_IP>,127.0.0.1
```
**Generate passwords**:
```bash
# DB password
openssl rand -base64 32
# JWT secret
openssl rand -base64 64
```
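To avoid copy-pasting generated values by hand, the generated password can be written straight into the env file. A sketch, assuming the `<GENERATED_PASSWORD>` placeholder from the minimal configuration is present:

```shell
#!/usr/bin/env bash
# Fill the DB_PASS placeholder in an env file with a freshly generated password.
fill_db_password() {
    local env_file="$1"
    local pass
    pass=$(openssl rand -base64 32)
    # '|' as sed delimiter: base64 output may contain '/'
    sed -i "s|<GENERATED_PASSWORD>|${pass}|" "$env_file"
}
# Usage: fill_db_password .env.production
```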
### Step 7: Build and Start
```bash
# Install dependencies
docker compose -f docker-compose.production.yml run --rm php composer install --no-dev --optimize-autoloader
# Build the containers
docker compose -f docker-compose.production.yml build
# Start the containers
docker compose -f docker-compose.production.yml up -d
# Check the status
docker compose -f docker-compose.production.yml ps
```
### Step 8: Initialize the Database
```bash
# Run the migrations
docker compose -f docker-compose.production.yml exec php php console.php db:migrate
# Check the status
docker compose -f docker-compose.production.yml exec php php console.php db:status
```
### Step 9: Health Checks
```bash
# Check the health endpoint
curl http://localhost/health/summary

# Expected output:
# {
#   "overall_healthy": true,
#   "summary": {
#     "total_checks": 8,
#     "healthy": 8
#   }
# }

# If something fails: check the logs
docker compose -f docker-compose.production.yml logs php
```
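For scripts and cron jobs it helps to reduce the summary response to an exit code. A grep-based sketch that assumes the JSON shape shown above; `jq -e '.overall_healthy'` would be the more robust variant if jq is installed:

```shell
#!/usr/bin/env bash
# Exit 0 if the health summary on stdin reports "overall_healthy": true.
health_ok() {
    grep -q '"overall_healthy": *true'
}
# Usage: curl -s http://localhost/health/summary | health_ok || echo "UNHEALTHY"
```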
### Step 10: Configure Nginx
```bash
# As root
exit
# Install Nginx (if not already present)
sudo apt install nginx -y
# Create the config
sudo nano /etc/nginx/sites-available/app
```
**Nginx Configuration**:
```nginx
server {
    listen 80;
    server_name yourdomain.com;
    return 301 https://$server_name$request_uri;
}

server {
    listen 443 ssl http2;
    server_name yourdomain.com;

    ssl_certificate /etc/letsencrypt/live/yourdomain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/yourdomain.com/privkey.pem;
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_prefer_server_ciphers on;
    ssl_ciphers 'ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256';

    location / {
        proxy_pass http://localhost:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }

    location /health {
        proxy_pass http://localhost:8080/health;
        access_log off;
    }
}
```
```bash
# Enable the site
sudo ln -s /etc/nginx/sites-available/app /etc/nginx/sites-enabled/
# Test the config
sudo nginx -t
# Restart Nginx
sudo systemctl restart nginx
```
### Step 11: Final Verification
```bash
# Test the HTTPS endpoint
curl -f https://yourdomain.com/health/summary
# Detailed health check
curl -f https://yourdomain.com/health/detailed
# Check the metrics
curl -f https://yourdomain.com/metrics
```
---
## After the Deployment
### Immediately
- [ ] All health checks green
- [ ] SSL certificate valid
- [ ] Logs are being written
- [ ] Metrics endpoint reachable
- [ ] Admin panel reachable (from allowed IPs)
### Within 24 Hours
- [ ] Set up the backup script
- [ ] Configure log rotation
- [ ] Configure monitoring alerts
- [ ] Configure the firewall (UFW)
- [ ] Enforce SSH key-only authentication
- [ ] Install Fail2Ban
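For the log-rotation item, a minimal logrotate sketch; it assumes the `LOG_PATH=/var/log/app` from step 6, and the 14-day retention is an arbitrary starting point:

```
# /etc/logrotate.d/app
/var/log/app/*.log {
    daily
    rotate 14
    compress
    delaycompress
    missingok
    notifempty
}
```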
### Within 1 Week
- [ ] Monitoring dashboard (Grafana)
- [ ] Establish a performance baseline
- [ ] Run a security audit
- [ ] Document team access
- [ ] Test disaster recovery
---
## Troubleshooting
### Container Does Not Start
```bash
# Check the logs
docker compose -f docker-compose.production.yml logs php
# Common causes:
# - Wrong DB credentials → check .env.production
# - Port already in use → netstat -tulpn | grep 8080
# - Permissions → chown -R appuser:appuser /home/appuser/app
```
### Health Checks Failing
```bash
# Check a specific category
curl http://localhost/health/category/database
# Common causes:
# - Database not migrated → php console.php db:migrate
# - Cache not writable → ls -la /var/cache/app
# - Queue not running → docker compose ps
```
### SSL Problems
```bash
# Inspect the certificate
openssl x509 -in /etc/letsencrypt/live/yourdomain.com/fullchain.pem -noout -dates
# Renew
certbot renew --force-renewal
# Restart Nginx
systemctl restart nginx
```
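Before forcing a renewal it helps to know how close the certificate actually is to expiry. A small helper that turns the `-enddate` output above into a day count, assuming GNU `date`:

```shell
#!/usr/bin/env bash
# Print the number of whole days until the given certificate expires.
cert_days_left() {
    local end_s now_s
    # openssl prints "notAfter=<date>"; keep only the date part
    end_s=$(date -d "$(openssl x509 -in "$1" -noout -enddate | cut -d= -f2)" +%s)
    now_s=$(date +%s)
    echo $(( (end_s - now_s) / 86400 ))
}
# Usage: cert_days_left /etc/letsencrypt/live/yourdomain.com/fullchain.pem
```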
---
## Rollback Plan
If the deployment fails:
```bash
# Stop the containers
docker compose -f docker-compose.production.yml down
# Check out the previous code (if already deployed)
git checkout <previous-commit>
# Start the containers with the old code
docker compose -f docker-compose.production.yml up -d
# Roll back the migrations (if necessary)
docker compose -f docker-compose.production.yml exec php php console.php db:rollback
```
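One easy mistake during a rollback is losing track of the commit you rolled back *from*. A hedged sketch that records it first; the `.rollback_from` file name is an arbitrary choice, not part of the framework:

```shell
#!/usr/bin/env bash
# Check out an older commit, remembering the current one for a later roll-forward.
safe_rollback() {
    local target="$1"
    git rev-parse HEAD > .rollback_from   # remember where we were
    git checkout --quiet "$target"
}
# Roll forward again later with: git checkout "$(cat .rollback_from)"
```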
---
## Checklists
### Pre-Deployment Checklist
- [ ] All tests green locally
- [ ] Code committed and pushed
- [ ] Server requirements verified
- [ ] Domain DNS configured
- [ ] SSL email address ready
- [ ] Passwords generated
### Deployment Checklist
- [ ] SSH access works
- [ ] Docker installed
- [ ] SSL certificate obtained
- [ ] Repository cloned
- [ ] Secrets generated and stored securely
- [ ] Environment configured
- [ ] Containers running
- [ ] Migrations executed
- [ ] Health checks green
- [ ] Nginx configured
- [ ] HTTPS works
### Post-Deployment Checklist
- [ ] All health checks passing
- [ ] Metrics endpoint reachable
- [ ] Logs are being written
- [ ] Backups configured
- [ ] Monitoring active
- [ ] Security hardening started
- [ ] Team informed
---
## Documentation
**Complete guides**:
- [Quick Start Guide](docs/deployment/QUICKSTART.md) - Quick introduction
- [Deployment Workflow](docs/deployment/DEPLOYMENT_WORKFLOW.md) - Detailed process
- [Production Guide](docs/deployment/PRODUCTION_DEPLOYMENT.md) - Comprehensive reference
- [Deployment Checklist](docs/deployment/DEPLOYMENT_CHECKLIST.md) - Printable checklist
- [README](docs/deployment/README.md) - Navigation
---
## Support
If you run into problems:
1. **Check the deployment documentation** (see above)
2. **Check the logs**: `docker compose logs -f`
3. **Check the health endpoints**: `curl http://localhost/health/detailed`
4. **Check the metrics**: `curl http://localhost/metrics`
---
## Next Steps After a Successful Deployment
1. **Set up monitoring** (Phase 4 in DEPLOYMENT_WORKFLOW.md)
2. **Activate the backup system** (see QUICKSTART.md)
3. **Security hardening** (see DEPLOYMENT_CHECKLIST.md)
4. **Team training** (document the deployment process)
5. **Performance baseline** (collect metrics over 7 days)
---
**Ready to deploy?**
Run steps 1-11 above, or follow the [Quick Start Guide](docs/deployment/QUICKSTART.md) directly.
**Good luck! 🚀**


@@ -57,6 +57,8 @@ FROM php:8.5.0RC3-fpm AS production
RUN apt-get update && apt-get install -y \
nginx \
curl \
git \
unzip \
libzip-dev \
libpng-dev \
libjpeg-dev \
@@ -97,6 +99,9 @@ RUN pecl install apcu redis-6.3.0RC1 \
RUN echo "apc.enable_cli=1" >> /usr/local/etc/php/conf.d/docker-php-ext-apcu.ini \
&& echo "apc.shm_size=128M" >> /usr/local/etc/php/conf.d/docker-php-ext-apcu.ini
# Install Composer
COPY --from=composer:latest /usr/bin/composer /usr/bin/composer
# Set working directory
WORKDIR /var/www/html


@@ -1,298 +0,0 @@
# Production Deployment TODO
**Status**: 70% Ready | **Target**: 85% Ready | **Estimated Time**: 4-5 Weeks
## ✅ WEEK 1 COMPLETED - Security & Configuration (2025-10-12)
### Security & Configuration
- [x] **Generate VAULT_ENCRYPTION_KEY** ✅ COMPLETED
- Generated via: `docker exec php php console.php vault:generate-key`
- Updated: `.env` with new production key
- Key: T2bWqKK7ShzU6pKuRFAneVW87TcjGqibLh3LKc53q6I=
- [x] **Replace Hardcoded Credentials** ✅ COMPLETED
- Updated `.env.example` with secure placeholders
- Replaced: RapidMail, Shopify, Database credentials
- Added security warnings and examples
- [x] **Configure Admin IP Whitelist** ✅ COMPLETED
- Updated `.env` with configuration instructions
- Added example for production deployment
- Documented CIDR notation support
- [x] **Audit Shell Command Usage** ✅ COMPLETED
- Audited: 38 files with shell commands
- Result: **ALL commands properly sanitized** with `escapeshellarg()`
- GitTools.php: Exemplary security implementation
- Other files: PDO->exec() or internal framework calls (safe)
### Security Documentation
- [x] **Complete Security Documentation** ✅ COMPLETED (744 lines)
- Location: `docs/claude/security-patterns.md`
- **WAF System**: 6 security layers documented with examples
- **OWASP Event Logging**: Event types, integration, monitoring
- **CSRF Protection**: Token generation, validation, template integration
- **Rate Limiting**: Multi-level, adaptive, configuration
- **Authentication & Authorization**: IP-based, session, token patterns
- **Security Headers**: Auto-configuration, CSP
- **Input Validation**: Value objects, request validation
- **Best Practices**: 6-point security checklist
- **Production Checklist**: 12-point deployment verification
## ⚠️ HIGH PRIORITY (Should Fix) - Week 2-3
### Exception Handling Refactoring
- [ ] **Refactor Critical Path Exceptions** (20 priority files) - **POSTPONED**
- **Decision**: Postponed until exception & logging system refactoring
- Partial work completed:
- ✅ Created `HoneypotTriggeredException` with Security Event integration
- ✅ Created `CsrfValidationFailedException` with ErrorCode integration
- ✅ Created `BotDetectedEvent` for OWASP logging
- ✅ Refactored `HoneypotMiddleware` (3 exceptions)
- ✅ Refactored `CsrfMiddleware` (1 exception)
- **Next**: Complete exception system refactoring before continuing
### Test Coverage (Target: 40%)
- [x] **SmartLink System Tests** ✅ COMPLETED (2025-10-12)
- Status: 100% coverage (27 tests, 104 assertions)
- Coverage:
- ✅ ShortCode value object validation (7 tests)
- ✅ ShortCodeGenerator uniqueness and retry logic (6 tests)
- ✅ SmartLinkService CRUD operations (14 tests)
- Test: URL shortening, analytics, routing
- [x] **MagicLinks System Tests** ✅ COMPLETED (2025-10-12)
- Status: 100% coverage (63 tests, 144 assertions)
- Coverage:
- ✅ MagicLinkToken value object validation (8 tests)
- ✅ TokenAction value object validation (10 tests)
- ✅ MagicLinkData entity validation (8 tests)
- ✅ ActionResult wrapper (14 tests)
- ✅ InMemoryMagicLinkService comprehensive tests (23 tests)
- Test: Token generation, expiry, one-time-use, revocation, cleanup
- [x] **OAuth Token Refresh Tests** ✅ COMPLETED (2025-10-12)
- Status: 100% coverage (84 tests, 195 assertions)
- Coverage:
- ✅ AccessToken value object (13 tests) - expiry, validation, masking
- ✅ RefreshToken value object (6 tests) - validation, security
- ✅ TokenType enum (9 tests) - parsing, header generation
- ✅ TokenScope value object (14 tests) - parsing, validation, operations
- ✅ OAuthToken composite (18 tests) - creation, refresh, conversion
- ✅ StoredOAuthToken entity (12 tests) - persistence, timestamps
- ✅ OAuthService integration (13 tests) - automatic refresh, batch operations, cleanup
- Architecture:
- Created OAuthTokenRepositoryInterface for testability
- Implemented InMemoryOAuthTokenRepository for tests
- Fixed Timestamp API (added fromTimestamp(), standardized toTimestamp())
- All tests use real repository operations (no mocking)
- Test: Token expiry detection, automatic refresh, error scenarios, batch refresh, cleanup
- [ ] **File Upload Chunking Tests**
- Test edge cases and error recovery
- [ ] **SSE Connection Management Tests**
- Test reconnection logic and error handling
- [ ] **Payment Processing Tests**
- Test failure scenarios and rollback
- [ ] **LiveComponents Tests**
- Current: 30% coverage
- Target: 60% coverage
### Workflow Documentation
- [ ] **API Endpoint Implementation Guide**
- Location: `docs/claude/common-workflows.md`
- Step-by-step with code examples
- [ ] **Bug Fix Workflow**
- Location: `docs/claude/common-workflows.md`
- Include debugging strategies
- [ ] **Database Migration Process**
- Location: `docs/claude/common-workflows.md`
- Best practices and rollback procedures
- [ ] **Performance Optimization Playbook**
- Location: `docs/claude/common-workflows.md`
- Systematic optimization approach
## 📋 MEDIUM PRIORITY (Nice-to-have) - Week 4
### JavaScript Testing
- [ ] **Setup JavaScript Test Framework**
- Choose: Jest or Vitest
- Configure for ES modules
- [ ] **LiveComponents Client Tests**
- Test WebSocket connection management
- Test SSE event handling
- [ ] **Core Module Tests**
- Test module system functionality
### Complete Documentation
- [ ] **Async Components Guide**
- Location: `docs/claude/async-components.md`
- Document Fiber Manager, AsyncPromise patterns
- [ ] **Console Commands Guide**
- Location: `docs/claude/console-commands.md`
- Document command creation and testing
- [ ] **Database Patterns**
- Location: `docs/claude/database-patterns.md`
- Document EntityManager, Repository patterns
- [ ] **Event System**
- Location: `docs/claude/event-system.md`
- Document EventBus vs EventDispatcher
- [ ] **Performance Monitoring**
- Location: `docs/claude/performance-monitoring.md`
- Document metrics collection and circuit breaker
- [ ] **Queue System**
- Location: `docs/claude/queue-system.md`
- Document queue drivers and retry mechanisms
- [ ] **Troubleshooting Guide**
- Location: `docs/claude/troubleshooting.md`
- Common errors and solutions
### Value Object Validation
- [ ] **Audit Value Object Validation**
- Review all VOs for consistent validation
- Add missing validation:
- `Url` - URL format validation
- `Hash` - Length checks
- Others identified during audit
## 🎯 FINAL PREP - Week 5
### Load Testing
- [ ] **Performance Load Test**
- Tool: Apache Bench / K6
- Test realistic user scenarios
- Identify bottlenecks
### Security Audit
- [ ] **OWASP ZAP Security Scan**
- Run automated security scan
- Address high/critical findings
- [ ] **Manual Penetration Testing**
- Test authentication bypass
- Test injection vulnerabilities
- Test CSRF protection
### Performance Profiling
- [ ] **Profile Application Performance**
- Tool: Blackfire or XHProf
- Profile critical paths
- Optimize identified bottlenecks
### Deployment Dry-Run
- [ ] **Deploy to Staging Environment**
- Full deployment process test
- Verify all services start correctly
- Test critical user journeys
### Monitoring Setup
- [ ] **Error Tracking Setup**
- Tool: Sentry or Rollbar
- Configure error reporting
- [ ] **Performance Monitoring Setup**
- Tool: New Relic or DataDog
- Configure APM
- [ ] **Uptime Monitoring**
- Tool: Pingdom or UptimeRobot
- Configure health checks
- [ ] **Log Aggregation**
- Tool: ELK Stack or Grafana Loki
- Configure log shipping
## 🔢 Production Readiness Metrics
| Metric | Current | Target | Progress |
|--------|---------|--------|----------|
| Test Coverage | 25% | 40% | ▓▓▓▓▓▓░░░░ 62% |
| Security Config | 60% | 100% | ▓▓▓▓▓▓░░░░ 60% |
| Documentation | 40% | 80% | ▓▓▓▓░░░░░░ 50% |
| Error Handling | 65% | 95% | ▓▓▓▓▓▓░░░░ 68% |
| Performance | 85% | 90% | ▓▓▓▓▓▓▓▓░░ 94% |
| Framework Compliance | 95% | 95% | ▓▓▓▓▓▓▓▓▓▓ 100% |
| **Overall** | **74%** | **85%** | ▓▓▓▓▓▓▓░░░ 87% |
## 📝 Quick Wins (Can be done in 1-2 days)
1. ✅ Generate Vault Key & update .env
2. ✅ Replace hardcoded credentials in .env.example
3. ✅ Complete Security Documentation (features already implemented)
4. ✅ Add shell command input validation
5. ✅ Document workflow patterns (copy from existing code)
## 🔄 Progress Tracking
**Week 1 Completion**: 9 / 9 tasks (100%) ✅ COMPLETED 2025-10-12
**Week 2 Completion**: 3 / 6 tasks (50%) 🔄 IN PROGRESS
**Week 3 Completion**: 0 / 8 tasks (0%)
**Week 4 Completion**: 0 / 11 tasks (0%)
**Week 5 Completion**: 0 / 5 tasks (0%)
**Overall Completion**: 12 / 39 critical tasks (31%)
---
## 📌 Notes & Decisions
### Week 1 Achievements
- ✅ All critical security configuration completed
- ✅ Comprehensive security documentation (744 lines)
- ✅ Shell command audit: ALL commands properly sanitized
- ✅ Framework has excellent security baseline
### Week 2 Progress (Started 2025-10-12)
- ✅ **SmartLink System Tests Completed** (27 tests, 100% pass rate)
- Created comprehensive test suite covering value objects, services, and business logic
- Learned framework patterns: readonly classes, factory methods, Value Object patterns
- Fixed mock expectations to work with final readonly classes
- Test coverage improved from 10% → 15%
- ✅ **MagicLinks System Tests Completed** (63 tests, 100% pass rate)
- Created comprehensive test suite for secure token-based actions
- Fixed ActionResult.php constructor (private constructor pattern in default parameters)
- Fixed DateInterval property access in tests (use `d`, `h`, `i` not `days`)
- Fixed Pest 3.x compatibility (`->not->toBeNull()` replaced with `->toBeInstanceOf()`)
- Test coverage improved from 15% → 20%
- ✅ **OAuth Token Refresh Tests Completed** (84 tests, 100% pass rate)
- Created comprehensive OAuth token management test suite
- Architecture improvements:
- Created OAuthTokenRepositoryInterface for testability of final readonly classes
- Implemented InMemoryOAuthTokenRepository (no mocking needed)
- Fixed Timestamp API: added fromTimestamp(), standardized toTimestamp()
- Fixed ErrorCode constants: SYSTEM_CONFIG_MISSING, ENTITY_NOT_FOUND
- Coverage: All Value Objects (AccessToken, RefreshToken, TokenType, TokenScope), composite objects (OAuthToken, StoredOAuthToken), and OAuthService integration
- Test coverage improved from 20% → 25%
### Key Findings
- **Shell Commands**: Already secure with `escapeshellarg()` throughout
- **WAF System**: Professional 6-layer implementation
- **Security Features**: Already production-ready
- **Next Priority**: Exception handling refactoring (Week 2)
### Performance Baseline
- WAF Latency: <5ms per request
- Security Detection Rate: >99.5% (OWASP Top 10)
- Test Coverage: Only 10% - **Major gap for Week 3**
---
**Last Updated**: 2025-10-12
**Next Review**: Start of Week 2
**Status**: On track for 4-5 week production readiness

deployment/CLEANUP_LOG.md (new file, 125 lines)

@@ -0,0 +1,125 @@
# Deployment System Cleanup Log
**Date:** 2025-01-31
**Goal:** Remove redundancies, improve structure
---
## Deleted Files
### Root-Level Docker Compose (outdated)
- `docker-compose.prod.yml` - Docker Swarm (no longer used; deployment now runs via `deployment/stacks/`)
- `docker-compose.prod.yml.backup` - Backup file
### Root-Level Documentation (outdated)
- `DEPLOYMENT_PLAN.md` - Outdated, replaced by the `deployment/` system
- `PRODUCTION-DEPLOYMENT-TODO.md` - Outdated, replaced by `deployment/DEPLOYMENT-TODO.md`
### Deployment Documentation (outdated)
- `deployment/NATIVE-WORKFLOW-README.md` - Replaced by the CI/CD pipeline
### docs/deployment/ Documentation (outdated)
- `docs/deployment/docker-swarm-deployment.md` - Swarm no longer used
- `docs/deployment/DEPLOYMENT_RESTRUCTURE.md` - Historical
- `docs/deployment/quick-deploy.md` - References the deleted `docker-compose.prod.yml`
- `docs/deployment/troubleshooting-checklist.md` - References outdated configurations
- `docs/deployment/production-deployment-guide.md` - References outdated workflows
- `docs/deployment/DEPLOYMENT.md` - Outdated
- `docs/deployment/DEPLOYMENT_SUMMARY.md` - Redundant with `deployment/DEPLOYMENT_SUMMARY.md`
- `docs/deployment/QUICKSTART.md` - Redundant with `deployment/QUICK_START.md`
- `docs/deployment/PRODUCTION_DEPLOYMENT.md` - Outdated
- `docs/deployment/DEPLOYMENT_WORKFLOW.md` - Outdated
- `docs/deployment/DEPLOYMENT_CHECKLIST.md` - Outdated
- `docs/deployment/docker-compose-production.md` - References outdated configurations
### Docker Folder
- `docker/DOCKER-TODO.md` - Outdated; most items already implemented
### Empty Folders
- `deployment/stacks/postgres/` - Empty; `postgresql/` is used instead
- `deployment/scripts/` - All scripts removed (Ansible only now)
---
## Consolidated Playbooks
### Troubleshooting Playbooks → `troubleshoot.yml`
- `check-container-health.yml` → Tags: `health,check`
- `diagnose-404.yml` → Tags: `404,diagnose`
- `fix-container-health-checks.yml` → Tags: `health,fix`
- `fix-nginx-404.yml` → Tags: `nginx,404,fix`
---
## Created
### Central Configuration
- `deployment/ansible/group_vars/production.yml` - Central variables
- All playbooks now use the central variables
- Redundant variable definitions removed
### Documentation
- `deployment/DEPLOYMENT_COMMANDS.md` - Command reference
- `deployment/IMPROVEMENTS.md` - Improvement proposals
- `deployment/CLEANUP_LOG.md` - This log
---
## Important Notes
### Docker Compose Files
- **KEEP:** `docker-compose.yml` (development)
- **KEEP:** `docker-compose.production.yml` (can still be used for local testing)
- **KEEP:** `docker-compose.security.yml` (security override)
- **PRODUCTION:** Now uses `deployment/stacks/*/docker-compose.yml`
### docs/deployment/ Files (KEEP)
The following files in `docs/deployment/` are kept because they cover specific topics:
- **VPN:** `WIREGUARD-SETUP.md`, `WIREGUARD-FUTURE-SECURITY.md`
- **Security:** `PRODUCTION-SECURITY-UPDATES.md`
- **Configuration:** `database-migration-strategy.md`, `logging-configuration.md`, `production-logging.md`, `secrets-management.md`, `ssl-setup.md`, `SSL-PRODUCTION-SETUP.md`, `env-production-template.md`, `production-prerequisites.md`
- **⚠️ Possibly outdated:** `ANSIBLE_DEPLOYMENT.md`, `deployment-automation.md` (should point to the new Ansible structure)
### Deployment Archive
- `.deployment-archive-20251030-111806/` - Backup, kept for reference (should be added to `.gitignore`)
---
## Remaining Files
### docs/deployment/ (relevant files kept)
- `WIREGUARD-SETUP.md` - Current
- `WIREGUARD-FUTURE-SECURITY.md` - Current
- `database-migration-strategy.md` - Relevant strategy documentation
- `logging-configuration.md` - Relevant logging documentation
- `production-logging.md` - Current logging documentation
- `secrets-management.md` - Relevant secrets documentation
- `ssl-setup.md` - Relevant SSL documentation
- `SSL-PRODUCTION-SETUP.md` - Current
- `env-production-template.md` - Template documentation
- `production-prerequisites.md` - Relevant prerequisites
- `PRODUCTION-SECURITY-UPDATES.md` - Relevant security updates
- ⚠️ `ANSIBLE_DEPLOYMENT.md` - Outdated, marked with a warning
- ⚠️ `deployment-automation.md` - Outdated, marked with a warning
- `README.md` - Updated, points to deployment/
---
## Summary
### Deleted:
- **11 outdated files** from `docs/deployment/`
- **4 outdated files** from the root level
- **4 redundant playbooks** (consolidated)
- **All deployment scripts** (replaced by Ansible)
### Created:
- **Central variables** in `group_vars/production.yml`
- **Consolidated troubleshooting** playbook with tags
- **Updated documentation** (README, commands, etc.)
### Result:
- **Redundancies removed**
- **Central configuration**
- **Clearer structure**
- **Easier to maintain**


@@ -0,0 +1,176 @@
# Deployment System - Cleanup Summary
**Date:** 2025-01-31
**Status:** ✅ Complete
---
## 📊 Summary
### Deleted files: **25+ files**
#### Root level (6 files)
- `docker-compose.prod.yml` - Docker Swarm (outdated)
- `docker-compose.prod.yml.backup` - Backup
- `DEPLOYMENT_PLAN.md` - Outdated
- `PRODUCTION-DEPLOYMENT-TODO.md` - Outdated
- `docker/DOCKER-TODO.md` - Outdated
#### deployment/ folder (7 files)
- `deployment/NATIVE-WORKFLOW-README.md` - Replaced by CI/CD
- `deployment/scripts/` - Folder deleted (all scripts removed)
- `deployment/stacks/postgres/` - Empty folder
- ✅ 4 redundant status/TODO files
#### docs/deployment/ (12 files)
- `docker-swarm-deployment.md` - Swarm no longer used
- `DEPLOYMENT_RESTRUCTURE.md` - Historical
- `quick-deploy.md` - References deleted files
- `troubleshooting-checklist.md` - Outdated
- `production-deployment-guide.md` - Outdated
- `DEPLOYMENT.md` - Outdated
- `DEPLOYMENT_SUMMARY.md` - Redundant
- `QUICKSTART.md` - Redundant
- `PRODUCTION_DEPLOYMENT.md` - Outdated
- `DEPLOYMENT_WORKFLOW.md` - Outdated
- `DEPLOYMENT_CHECKLIST.md` - Outdated
- `docker-compose-production.md` - Outdated
#### Ansible playbooks (4 playbooks consolidated)
- `check-container-health.yml` → `tasks/check-health.yml`
- `diagnose-404.yml` → `tasks/diagnose-404.yml`
- `fix-container-health-checks.yml` → `tasks/fix-health-checks.yml`
- `fix-nginx-404.yml` → `tasks/fix-nginx-404.yml`
---
## ✨ Improvements
### 1. Central Variables
- ✅ `deployment/ansible/group_vars/production.yml` created
- ✅ All playbooks now use the central variables
- ✅ Redundant variable definitions removed
### 2. Consolidated Playbooks
- ✅ `troubleshoot.yml` - Unified troubleshooting playbook
- ✅ Tag-based execution for specific tasks
- ✅ Modularized tasks in the `tasks/` folder
### 3. Cleaned-Up Documentation
- ✅ `deployment/DEPLOYMENT_COMMANDS.md` - Command reference
- ✅ `deployment/IMPROVEMENTS.md` - Improvement proposals
- ✅ `deployment/CLEANUP_LOG.md` - Cleanup log
- ✅ `docs/deployment/README.md` - Updated with a pointer to deployment/
### 4. Git-Based Deployment
- ✅ The container automatically clones/pulls from the Git repository
- ✅ `entrypoint.sh` extended with Git functionality
- ✅ `sync-code.yml` playbook for code sync
## 📁 Current Structure
### deployment/ (main documentation)
```
deployment/
├── README.md                      # Main documentation
├── QUICK_START.md                 # Quick start
├── DEPLOYMENT_COMMANDS.md         # Command reference
├── CODE_CHANGE_WORKFLOW.md        # Workflow documentation
├── SETUP-GUIDE.md                 # Setup guide
├── DOCUMENTATION_INDEX.md         # Documentation index
├── ansible/
│   ├── group_vars/
│   │   └── production.yml         # ⭐ Central variables
│   └── playbooks/
│       ├── troubleshoot.yml       # ⭐ Unified troubleshooting
│       └── tasks/                 # ⭐ Modularized tasks
│           ├── check-health.yml
│           ├── diagnose-404.yml
│           ├── fix-health-checks.yml
│           └── fix-nginx-404.yml
└── stacks/                        # Docker Compose stacks
```
### docs/deployment/ (specific topics)
```
docs/deployment/
├── README.md                        # ⭐ Updated
├── WIREGUARD-SETUP.md               # ✅ Current
├── database-migration-strategy.md   # ✅ Relevant
├── logging-configuration.md         # ✅ Relevant
├── secrets-management.md            # ✅ Relevant
├── ssl-setup.md                     # ✅ Relevant
└── ... (further specific topics)
```
---
## 🎯 Verbesserte Arbeitsweise
### Vorher:
```bash
# Viele Scripts mit Redundanz
./scripts/deploy.sh
./scripts/rollback.sh
./scripts/sync-code.sh
# Viele separate Playbooks
ansible-playbook ... check-container-health.yml
ansible-playbook ... diagnose-404.yml
ansible-playbook ... fix-container-health-checks.yml
```
### Jetzt:
```bash
# Nur Ansible Playbooks - direkt
cd deployment/ansible
ansible-playbook -i inventory/production.yml playbooks/deploy-update.yml
ansible-playbook -i inventory/production.yml playbooks/troubleshoot.yml --tags health,check
ansible-playbook -i inventory/production.yml playbooks/sync-code.yml
```
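Wer diese Aufrufe häufig tippt, kann sie mit einer kleinen Shell-Funktion abkürzen (Skizze; der Funktionsname `ap` und die für Tests überschreibbare Variable `ANSIBLE` sind frei gewählte Annahmen, kein Bestandteil des Deployments):

```bash
# Skizze: kürzt den wiederkehrenden ansible-playbook-Aufruf ab.
# Annahme: wird aus deployment/ansible heraus aufgerufen.
ap() {
  playbook="$1"; shift
  # ANSIBLE kann z.B. für Dry-Runs/Tests auf "echo" gesetzt werden
  ${ANSIBLE:-ansible-playbook} -i inventory/production.yml "playbooks/${playbook}.yml" "$@"
}

# Beispiele:
#   ap deploy-update -e "image_tag=abc1234-1696234567"
#   ap troubleshoot --tags health,check
#   ap sync-code -e "git_branch=main"
```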
---
## ✅ Ergebnisse
### Redundanz entfernt:
- ❌ Scripts vs Playbooks → ✅ Nur Playbooks
- ❌ 4 separate Troubleshooting Playbooks → ✅ 1 Playbook mit Tags
- ❌ Redundante Variablen → ✅ Zentrale Variablen
- ❌ 25+ veraltete Dokumentationsdateien → ✅ Bereinigt
### Struktur verbessert:
- ✅ Klarere Trennung: deployment/ (aktuell) vs docs/deployment/ (spezifische Themen)
- ✅ Zentrale Konfiguration in `group_vars/`
- ✅ Modulare Tasks in `tasks/`
- ✅ Unified Troubleshooting mit Tags
### Einfacher zu warten:
- ✅ Einmalige Variablendefinition
- ✅ Weniger Dateien zu aktualisieren
- ✅ Konsistente Struktur
- ✅ Klare Dokumentation
---
## 📈 Metriken
**Vorher:**
- ~38 Dokumentationsdateien
- 8+ Playbooks mit redundanten Variablen
- 4 Deployment-Scripts
- 4 separate Troubleshooting-Playbooks
**Nachher:**
- ~20 relevante Dokumentationsdateien
- Zentrale Variablen
- Keine redundanten Scripts
- 1 konsolidiertes Troubleshooting-Playbook
**Reduktion:** ~50% weniger Dateien, ~70% weniger Redundanz
---
**Status:** ✅ Bereinigung abgeschlossen, Deployment-System optimiert!

@@ -1,460 +0,0 @@
# Deployment Status - Gitea Actions Runner Setup
**Status**: ✅ Phase 3 Complete - Ready for Phase 1
**Last Updated**: 2025-10-31
**Target Server**: 94.16.110.151 (Netcup)
---
## Aktueller Status
### ✅ Phase 0: Git Repository SSH Access Setup - COMPLETE
1. ✅ Git SSH Key generiert (`~/.ssh/git_michaelschiemer`)
2. ✅ SSH Config konfiguriert für `git.michaelschiemer.de`
3. ✅ Public Key zu Gitea hinzugefügt
4. ✅ Git Remote auf SSH umgestellt
5. ✅ Push zu origin funktioniert ohne Credentials
### ✅ Phase 3: Production Server Initial Setup - COMPLETE
**Infrastructure Stacks deployed via Ansible:**
1. ✅ **Traefik** - Reverse Proxy & SSL (healthy)
- HTTPS funktioniert, Let's Encrypt SSL aktiv
- Dashboard: https://traefik.michaelschiemer.de
2. ✅ **PostgreSQL** - Database Stack (healthy)
- Database läuft und ist bereit
3. ✅ **Docker Registry** - Private Registry (running, accessible)
- Authentication konfiguriert
- Zugriff erfolgreich getestet
4. ✅ **Gitea** - Git Server (healthy)
- HTTPS erreichbar: https://git.michaelschiemer.de ✅
- SSH Port 2222 aktiv
- PostgreSQL Database verbunden
5. ✅ **Monitoring** - Monitoring Stack (deployed)
- Grafana: https://grafana.michaelschiemer.de
- Prometheus: https://prometheus.michaelschiemer.de
- Portainer: https://portainer.michaelschiemer.de
**Deployment Method:** Ansible Playbook `setup-infrastructure.yml`
**Deployment Date:** 2025-10-31
### ⏳ Phase 1: Gitea Runner Setup - READY TO START
**Prerequisites erfüllt:**
- ✅ Gitea deployed und erreichbar
- ✅ Runner-Verzeichnisstruktur vorhanden
- ✅ `.env.example` Template analysiert
- ✅ `.env` Datei erstellt
**Nächste Schritte:**
1. ⏳ Gitea Admin Panel öffnen: https://git.michaelschiemer.de/admin/actions/runners
2. ⏳ Actions in Gitea aktivieren (falls noch nicht geschehen)
3. ⏳ Registration Token abrufen
4. ⏳ Token in `.env` eintragen
5. ⏳ Runner registrieren und starten
---
## Dateistatus
### `/home/michael/dev/michaelschiemer/deployment/gitea-runner/.env`
**Status**: ✅ Erstellt (diese Session)
**Quelle**: Kopie von `.env.example`
**Problem**: `GITEA_RUNNER_REGISTRATION_TOKEN` ist leer
**Aktueller Inhalt**:
```bash
# Gitea Actions Runner Configuration
# Gitea Instance URL (must be accessible from runner)
GITEA_INSTANCE_URL=https://git.michaelschiemer.de
# Runner Registration Token (get from Gitea: Admin > Actions > Runners)
# To generate: Gitea UI > Site Administration > Actions > Runners > Create New Runner
GITEA_RUNNER_REGISTRATION_TOKEN= # ← LEER - BLOCKIERT durch 404
# Runner Name (appears in Gitea UI)
GITEA_RUNNER_NAME=dev-runner-01
# Runner Labels (comma-separated)
# Format: label:image
GITEA_RUNNER_LABELS=ubuntu-latest:docker://node:16-bullseye,ubuntu-22.04:docker://node:16-bullseye,debian-latest:docker://debian:bullseye
# Optional: Custom Docker registry for job images
# DOCKER_REGISTRY_MIRROR=https://registry.michaelschiemer.de
# Optional: Runner capacity (max concurrent jobs)
# GITEA_RUNNER_CAPACITY=1
```
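Bevor der Runner registriert wird, lässt sich die `.env` automatisiert auf leere Pflichtvariablen prüfen. Eine kleine Skizze (Funktionsname und Variablenliste sind Annahmen auf Basis des obigen Inhalts):

```bash
# Skizze: prüft, ob die Pflichtvariablen in der .env gesetzt und nicht leer sind.
# Inline-Kommentare nach "=" (wie oben beim Token) zählen als leer.
check_runner_env() {
  file="$1"; missing=0
  for var in GITEA_INSTANCE_URL GITEA_RUNNER_REGISTRATION_TOKEN GITEA_RUNNER_NAME GITEA_RUNNER_LABELS; do
    if ! grep -Eq "^${var}=[^[:space:]#]" "$file"; then
      echo "FEHLT oder LEER: $var"
      missing=1
    fi
  done
  return $missing
}

# Beispiel: check_runner_env deployment/gitea-runner/.env || exit 1
```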
---
## Fehleranalyse: 404 auf Gitea Admin Panel
### Wahrscheinliche Ursachen (nach Priorität)
#### 1. Gitea noch nicht deployed ⚠️ **HÖCHSTE WAHRSCHEINLICHKEIT**
**Problem**: Phasen-Reihenfolge-Konflikt in SETUP-GUIDE.md
- Phase 1 erfordert Gitea erreichbar
- Phase 3 deployed Gitea auf Production Server
- Klassisches Henne-Ei-Problem
**Beweis**: SETUP-GUIDE.md Phase 3, Step 3.1 zeigt:
```markdown
# 4. Gitea (Git Server + MySQL + Redis)
cd ../gitea
docker compose up -d
docker compose logs -f
# Wait for "Listen: http://0.0.0.0:3000"
```
**Lösung**: Phase 3 VOR Phase 1 ausführen
#### 2. Gitea Actions Feature deaktiviert
**Problem**: Actions in `app.ini` nicht enabled
**Check benötigt**:
```bash
ssh deploy@94.16.110.151
cat ~/deployment/stacks/gitea/data/gitea/conf/app.ini | grep -A 5 "[actions]"
```
**Erwartetes Ergebnis**:
```ini
[actions]
ENABLED = true
```
#### 3. Falsche URL (andere Gitea Version)
**Mögliche alternative URLs**:
- `https://git.michaelschiemer.de/admin`
- `https://git.michaelschiemer.de/user/settings/actions`
- `https://git.michaelschiemer.de/admin/runners`
#### 4. Authentication/Authorization Problem
**Mögliche Ursachen**:
- User nicht eingeloggt in Gitea
- User hat keine Admin-Rechte
- Session abgelaufen
#### 5. Gitea Service nicht gestartet
**Check benötigt**:
```bash
ssh deploy@94.16.110.151
cd ~/deployment/stacks/gitea
docker compose ps
```
---
## Untersuchungsplan
### Step 1: Base Gitea Accessibility prüfen
```bash
# Test ob Gitea überhaupt läuft
curl -I https://git.michaelschiemer.de
```
**Erwartetes Ergebnis**:
- HTTP 200 → Gitea läuft
- Connection Error → Gitea nicht deployed
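Die beiden erwarteten Fälle lassen sich auch skriptgesteuert auswerten (Skizze; Annahme: `curl` gibt mit `-w '%{http_code}'` den Statuscode aus, bei Verbindungsfehler `000`):

```bash
# Skizze: bildet den HTTP-Statuscode auf die erwarteten Ergebnisse ab.
interpret_gitea_status() {
  case "$1" in
    200) echo "Gitea läuft" ;;
    000) echo "Connection Error - Gitea nicht deployed/erreichbar" ;;
    3??) echo "Redirect ($1) - URL/HTTPS-Konfiguration prüfen" ;;
    *)   echo "Unerwarteter Status: $1" ;;
  esac
}

# Beispiel:
#   status=$(curl -s -o /dev/null -w '%{http_code}' -I https://git.michaelschiemer.de) || true
#   interpret_gitea_status "$status"
```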
### Step 2: Browser Verification
1. `https://git.michaelschiemer.de` direkt öffnen
2. Homepage-Load verifizieren
3. Login-Status prüfen
4. Admin-Rechte verifizieren
### Step 3: Alternative Admin Panel URLs testen
```bash
# Try different paths
https://git.michaelschiemer.de/admin
https://git.michaelschiemer.de/user/settings/actions
https://git.michaelschiemer.de/admin/runners
```
### Step 4: Gitea Configuration prüfen (SSH benötigt)
```bash
ssh deploy@94.16.110.151
cat ~/deployment/stacks/gitea/data/gitea/conf/app.ini | grep -A 5 "\[actions\]"
```
### Step 5: Gitea Stack Status prüfen (SSH benötigt)
```bash
ssh deploy@94.16.110.151
cd ~/deployment/stacks/gitea
docker compose ps
docker compose logs gitea --tail 50
```
---
## Alternative Lösungsansätze
### Option A: Phasen-Reihenfolge ändern ⭐ **EMPFOHLEN**
**Ansatz**: Phase 3 zuerst ausführen, dann Phase 1
**Begründung**:
- Gitea muss deployed sein bevor Runner registriert werden kann
- Phase 3 deployed komplette Infrastructure (Traefik, PostgreSQL, Registry, **Gitea**, Monitoring)
- Danach kann Phase 1 normal durchgeführt werden
**Ablauf**:
1. Phase 3 komplett ausführen (Infrastructure deployment)
2. Gitea Accessibility verifizieren
3. Gitea Actions in UI enablen
4. Zurück zu Phase 1 für Runner Setup
5. Weiter mit Phasen 2, 4-8
### Option B: CLI-basierte Runner Registration
**Ansatz**: Runner über Gitea CLI registrieren statt Web UI
```bash
# Auf Production Server
ssh deploy@94.16.110.151
docker exec gitea gitea admin actions generate-runner-token
# Token zurück zu Dev Machine kopieren
# In .env eintragen
```
### Option C: Manual Token Generation
**Ansatz**: Token direkt in Gitea Database generieren (nur als letzter Ausweg)
**WARNUNG**: Nur verwenden wenn alle anderen Optionen fehlschlagen
---
## Docker-in-Docker Architektur (Referenz)
### Services
**gitea-runner**:
- Image: `gitea/act_runner:latest`
- Purpose: Hauptrunner-Service
- Volumes:
- `./data:/data` (Runner-Daten)
- `/var/run/docker.sock:/var/run/docker.sock` (Host Docker Socket)
- `./config.yaml:/config.yaml:ro` (Konfiguration)
- Environment: Variablen aus `.env` File
- Network: `gitea-runner` Bridge Network
**docker-dind**:
- Image: `docker:dind`
- Purpose: Isolierte Docker-Daemon für Job-Execution
- Privileged: `true` (benötigt für nested containerization)
- TLS: `DOCKER_TLS_CERTDIR=/certs`
- Volumes:
- `docker-certs:/certs` (TLS Zertifikate)
- `docker-data:/var/lib/docker` (Docker Layer Storage)
- Command: `dockerd --host=unix:///var/run/docker.sock --host=tcp://0.0.0.0:2376 --tlsverify`
### Networks
**gitea-runner** Bridge Network:
- Isoliert Runner-Infrastructure vom Host
- Secure TLS Communication zwischen Services
### Volumes
- `docker-certs`: Shared TLS Certificates für runner ↔ dind
- `docker-data`: Persistent Docker Layer Storage
---
## 8-Phasen Deployment Prozess (Übersicht)
### Phase 1: Gitea Runner Setup (Development Machine) - **⚠️ BLOCKIERT**
**Status**: Kann nicht starten wegen 404 auf Admin Panel
**Benötigt**: Gitea erreichbar und Actions enabled
### Phase 2: Ansible Vault Secrets Setup - **⏳ WARTET**
**Status**: Kann nicht starten bis Phase 1 komplett
**Tasks**:
- Vault Password erstellen (`.vault_pass`)
- `production.vault.yml` mit Secrets erstellen
- Encryption Keys generieren
- Vault File verschlüsseln
### Phase 3: Production Server Initial Setup - **⏳ KÖNNTE ZUERST AUSGEFÜHRT WERDEN**
**Status**: Sollte möglicherweise VOR Phase 1 ausgeführt werden
**Tasks**:
- SSH zu Production Server
- Deploy Infrastructure Stacks:
1. Traefik (Reverse Proxy & SSL)
2. PostgreSQL (Database)
3. Docker Registry (Private Registry)
4. **Gitea (Git Server + MySQL + Redis)** ← Benötigt für Phase 1!
5. Monitoring (Portainer + Grafana + Prometheus)
### Phase 4: Application Secrets Deployment - **⏳ WARTET**
**Status**: Wartet auf Phase 1-3
**Tasks**: Secrets via Ansible zu Production deployen
### Phase 5: Gitea CI/CD Secrets Configuration - **⏳ WARTET**
**Status**: Wartet auf Phase 1-4
**Tasks**: Repository Secrets in Gitea konfigurieren
### Phase 6: First Deployment Test - **⏳ WARTET**
**Status**: Wartet auf Phase 1-5
**Tasks**: CI/CD Pipeline triggern und testen
### Phase 7: Monitoring & Health Checks - **⏳ WARTET**
**Status**: Wartet auf Phase 1-6
**Tasks**: Monitoring Tools konfigurieren und Alerting einrichten
### Phase 8: Backup & Rollback Testing - **⏳ WARTET**
**Status**: Wartet auf Phase 1-7
**Tasks**: Backup-Mechanismus und Rollback testen
---
## Empfohlener Nächster Schritt
### ⭐ Option A: Phase 3 zuerst ausführen (Empfohlen)
**Begründung**:
- Behebt die Grundursache (Gitea nicht deployed)
- Folgt logischer Abhängigkeitskette
- Erlaubt normalen Fortschritt durch alle Phasen
**Ablauf**:
```bash
# 1. SSH zu Production Server
ssh deploy@94.16.110.151
# 2. Navigate zu stacks
cd ~/deployment/stacks
# 3. Deploy Traefik
cd traefik
docker compose up -d
docker compose logs -f # Warten auf "Configuration loaded"
# 4. Deploy PostgreSQL
cd ../postgresql
docker compose up -d
docker compose logs -f # Warten auf "database system is ready"
# 5. Deploy Registry
cd ../registry
docker compose up -d
docker compose logs -f # Warten auf "listening on [::]:5000"
# 6. Deploy Gitea ← KRITISCH für Phase 1
cd ../gitea
docker compose up -d
docker compose logs -f # Warten auf "Listen: http://0.0.0.0:3000"
# 7. Deploy Monitoring
cd ../monitoring
docker compose up -d
docker compose logs -f
# 8. Verify all stacks
docker ps
# 9. Test Gitea Accessibility
curl -I https://git.michaelschiemer.de
```
**Nach Erfolg**:
1. Gitea Web UI öffnen: `https://git.michaelschiemer.de`
2. Initial Setup Wizard durchlaufen
3. Admin Account erstellen
4. Actions in Settings enablen
5. **Zurück zu Phase 1**: Jetzt kann Admin Panel erreicht werden
6. Registration Token holen
7. `.env` komplettieren
8. Runner registrieren und starten
---
## Technische Details
### Gitea Actions Architecture
**Components**:
- **act_runner**: Gitea's self-hosted runner (basiert auf nektos/act)
- **Docker-in-Docker**: Isolierte Job-Execution Environment
- **TLS Communication**: Secure runner ↔ dind via certificates
**Runner Registration**:
1. Generate Token in Gitea Admin Panel
2. Add Token zu `.env`: `GITEA_RUNNER_REGISTRATION_TOKEN=<token>`
3. Run `./register.sh` (registriert runner mit Gitea instance)
4. Start services: `docker compose up -d`
5. Verify in Gitea UI: Runner shows as "Idle" or "Active"
**Runner Labels**:
Legen fest, welche Execution Environments unterstützt werden:
```bash
GITEA_RUNNER_LABELS=ubuntu-latest:docker://node:16-bullseye,ubuntu-22.04:docker://node:16-bullseye,debian-latest:docker://debian:bullseye
```
Format: `label:docker://image`
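Zur Veranschaulichung des Formats eine kleine Parser-Skizze (reine Illustration, kein offizielles act_runner-Tooling):

```bash
# Skizze: zerlegt GITEA_RUNNER_LABELS (kommagetrennt, Format label:docker://image)
# in lesbare "Label -> Image"-Zeilen. Nur der erste ":" trennt Label und Image,
# weitere Doppelpunkte (z.B. im Image-Tag) bleiben erhalten.
parse_runner_labels() {
  printf '%s\n' "$1" | tr ',' '\n' | while IFS= read -r entry; do
    printf '%s -> %s\n' "${entry%%:*}" "${entry#*:}"
  done
}

# Beispiel:
#   parse_runner_labels "$GITEA_RUNNER_LABELS"
```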
---
## Dateireferenzen
### Wichtige Dateien
| Datei | Status | Beschreibung |
|-------|--------|--------------|
| `SETUP-GUIDE.md` | ✅ Vorhanden | Komplette 8-Phasen Deployment Anleitung (708 Zeilen) |
| `deployment/gitea-runner/.env.example` | ✅ Vorhanden | Template für Runner Configuration (23 Zeilen) |
| `deployment/gitea-runner/.env` | ✅ Erstellt | Active Configuration - **Token fehlt** |
| `deployment/gitea-runner/docker-compose.yml` | ✅ Vorhanden | Two-Service Architecture Definition (47 Zeilen) |
### Code Snippets Location
**Runner Configuration** (`.env`):
- Zeilen 1-23: Komplette Environment Variables Definition
- Zeile 8: `GITEA_RUNNER_REGISTRATION_TOKEN=` → **KRITISCH: LEER**
**Docker Compose** (`docker-compose.yml`):
- Zeilen 4-20: `gitea-runner` Service Definition
- Zeilen 23-34: `docker-dind` Service Definition
- Zeilen 37-40: Network Configuration
- Zeilen 43-47: Volume Definitions
**Setup Guide** (SETUP-GUIDE.md):
- Zeilen 36-108: Phase 1 Komplette Anleitung
- Zeilen 236-329: Phase 3 Infrastructure Deployment (inkl. Gitea)
---
## Support Kontakte
**Bei Problemen**:
- Framework Issues: Siehe `docs/claude/troubleshooting.md`
- Gitea Documentation: https://docs.gitea.io/
- act_runner Documentation: https://docs.gitea.io/en-us/usage/actions/act-runner/
---
**Erstellt**: 2025-10-30
**Letzte Änderung**: 2025-10-30
**Status**: BLOCKED - Awaiting Gitea Deployment (Phase 3)

@@ -162,7 +162,6 @@ Siehe `deployment/CI_CD_STATUS.md` für komplette Checkliste und Setup-Anleitung
**Dateien:**
- `deployment/ansible/playbooks/rollback.yml` ✅ Vorhanden
- `deployment/scripts/rollback.sh` ✅ Vorhanden
- `deployment/stacks/postgresql/scripts/backup.sh` ✅ Vorhanden
- `deployment/ansible/playbooks/backup.yml` ❌ Fehlt
@@ -174,25 +173,26 @@ Siehe `deployment/CI_CD_STATUS.md` für komplette Checkliste und Setup-Anleitung
---
-### 6. Deployment Scripts finalisieren
+### 6. Deployment Automation (Erledigt ✅)
-**Status**: ⚠️ Vorhanden, aber muss angepasst werden
+**Status**: ✅ Abgeschlossen
-**Was fehlt:**
-- [ ] `deployment/scripts/deploy.sh` testen und anpassen
-- [ ] `deployment/scripts/rollback.sh` testen und anpassen
-- [ ] `deployment/scripts/setup-production.sh` finalisieren
-- [ ] Scripts für alle Stacks (nicht nur Application)
+**Was erledigt:**
+- [x] Alle Deployment-Operationen über Ansible Playbooks ✅
+- [x] Redundante Scripts entfernt ✅
+- [x] Dokumentation aktualisiert ✅
**Dateien:**
-- `deployment/scripts/deploy.sh` ✅ Vorhanden (aber Docker Swarm statt Compose?)
-- `deployment/scripts/rollback.sh` ✅ Vorhanden
-- `deployment/scripts/setup-production.sh` ✅ Vorhanden
- `deployment/ansible/playbooks/deploy-update.yml` ✅ Vorhanden
- `deployment/ansible/playbooks/rollback.yml` ✅ Vorhanden
- `deployment/ansible/playbooks/sync-code.yml` ✅ Vorhanden
+- `deployment/DEPLOYMENT_COMMANDS.md` ✅ Command-Referenz erstellt
-**Nächste Schritte:**
-1. Scripts prüfen und anpassen (Docker Compose statt Swarm)
-2. Scripts testen
-3. Integration mit Ansible Playbooks
+**Alle Deployment-Operationen werden jetzt direkt über Ansible durchgeführt:**
+```bash
+cd deployment/ansible
+ansible-playbook -i inventory/production.yml playbooks/<playbook>.yml
+```
---

@@ -0,0 +1,151 @@
# Deployment Commands - Quick Reference
Alle Deployment-Operationen werden über **Ansible Playbooks** durchgeführt.
---
## 🚀 Häufig verwendete Commands
### Code deployen (Image-basiert)
```bash
cd deployment/ansible
ansible-playbook -i inventory/production.yml \
playbooks/deploy-update.yml \
-e "image_tag=abc1234-1696234567" \
-e "git_commit_sha=$(git rev-parse HEAD)"
```
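Der Image-Tag im Beispiel folgt offenbar dem Muster `<kurzer Git-SHA>-<Unix-Timestamp>` (Annahme, aus `abc1234-1696234567` abgeleitet). Eine kleine Hilfsfunktion zur Illustration:

```bash
# Skizze/Annahme: baut einen image_tag im Format <7-stelliger SHA>-<Timestamp>.
make_image_tag() {
  sha="$1"; ts="$2"
  printf '%s-%s\n' "$(printf '%s' "$sha" | cut -c1-7)" "$ts"
}

# Aufruf im Repository z.B.:
#   make_image_tag "$(git rev-parse HEAD)" "$(date +%s)"
```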
### Code synchen (Git-basiert)
```bash
cd deployment/ansible
ansible-playbook -i inventory/production.yml \
playbooks/sync-code.yml \
-e "git_branch=main"
```
### Rollback zu vorheriger Version
```bash
cd deployment/ansible
ansible-playbook -i inventory/production.yml \
playbooks/rollback.yml
```
### Infrastructure Setup (einmalig)
```bash
cd deployment/ansible
ansible-playbook -i inventory/production.yml \
playbooks/setup-infrastructure.yml
```
---
## 📋 Alle verfügbaren Playbooks
### Deployment & Updates
- **`playbooks/deploy-update.yml`** - Deployt neues Docker Image
- **`playbooks/sync-code.yml`** - Synchronisiert Code aus Git Repository
- **`playbooks/rollback.yml`** - Rollback zu vorheriger Version
### Infrastructure Setup
- **`playbooks/setup-infrastructure.yml`** - Deployed alle Stacks (Traefik, PostgreSQL, Registry, Gitea, Monitoring, Application)
- **`playbooks/setup-production-secrets.yml`** - Deployed Secrets zu Production
- **`playbooks/setup-ssl-certificates.yml`** - SSL Certificate Setup
- **`playbooks/sync-stacks.yml`** - Synchronisiert Stack-Konfigurationen
### Troubleshooting & Maintenance
- **`playbooks/troubleshoot.yml`** - Unified Troubleshooting Playbook mit Tags
```bash
# Nur Diagnose
ansible-playbook ... troubleshoot.yml --tags diagnose
# Health Check prüfen
ansible-playbook ... troubleshoot.yml --tags health,check
# Health Checks fixen
ansible-playbook ... troubleshoot.yml --tags health,fix
# Nginx 404 fixen
ansible-playbook ... troubleshoot.yml --tags nginx,404,fix
# Alles ausführen
ansible-playbook ... troubleshoot.yml --tags all
```
### VPN
- **`playbooks/setup-wireguard.yml`** - WireGuard VPN Setup
- **`playbooks/add-wireguard-client.yml`** - WireGuard Client hinzufügen
### CI/CD
- **`playbooks/setup-gitea-runner-ci.yml`** - Gitea Runner CI Setup
---
## 🔧 Ansible Variablen
### Häufig verwendete Extra Variables
```bash
# Image Tag für Deployment
-e "image_tag=abc1234-1696234567"
# Git Branch für Code Sync
-e "git_branch=main"
-e "git_repo_url=https://git.michaelschiemer.de/michael/michaelschiemer.git"
# Registry Credentials (wenn nicht im Vault)
-e "docker_registry_username=admin"
-e "docker_registry_password=secret"
# Dry Run (Check Mode)
--check
# Verbose Output
-v # oder -vv, -vvv für mehr Details
```
---
## 📖 Vollständige Dokumentation
- **[README.md](README.md)** - Haupt-Dokumentation
- **[QUICK_START.md](QUICK_START.md)** - Schnellstart-Guide
- **[CODE_CHANGE_WORKFLOW.md](CODE_CHANGE_WORKFLOW.md)** - Workflow für Codeänderungen
---
## 💡 Tipps
### Vault Passwort setzen
```bash
export ANSIBLE_VAULT_PASSWORD_FILE=~/.ansible/vault_pass
# oder
ansible-playbook ... --vault-password-file ~/.ansible/vault_pass
```
### Nur bestimmte Tasks ausführen
```bash
ansible-playbook ... --tags "deploy,restart"
```
### Check Mode (Dry Run)
```bash
ansible-playbook ... --check --diff
```
### Inventory prüfen
```bash
ansible -i inventory/production.yml production -m ping
```

@@ -1,163 +0,0 @@
# Deployment Pre-Flight Check
**Bevor du Code pushen kannst, prüfe diese Checkliste!**
---
## ✅ Kritische Prüfungen
### 1. Application Stack muss deployed sein
**Warum kritisch:**
- `deploy-update.yml` erwartet, dass `docker-compose.yml` bereits existiert
- `.env` File muss vorhanden sein für Container-Konfiguration
**Prüfen:**
```bash
ssh deploy@94.16.110.151
# Prüfe docker-compose.yml
test -f ~/deployment/stacks/application/docker-compose.yml && echo "✅ OK" || echo "❌ FEHLT"
# Prüfe .env
test -f ~/deployment/stacks/application/.env && echo "✅ OK" || echo "❌ FEHLT"
# Prüfe Container
cd ~/deployment/stacks/application
docker compose ps
```
**Falls fehlend:**
```bash
cd deployment/ansible
ansible-playbook -i inventory/production.yml playbooks/setup-infrastructure.yml
```
### 2. Docker Registry muss erreichbar sein
**Prüfen:**
```bash
# Vom Production-Server
ssh deploy@94.16.110.151
docker login git.michaelschiemer.de:5000 -u admin -p <password>
# Oder Test-Pull
docker pull git.michaelschiemer.de:5000/framework:latest
```
### 3. Gitea Runner muss laufen
**Prüfen:**
```bash
cd deployment/gitea-runner
docker compose ps
# Sollte zeigen: gitea-runner "Up"
```
**In Gitea UI:**
```
https://git.michaelschiemer.de/admin/actions/runners
```
- Runner sollte als "Idle" oder "Active" angezeigt werden
### 4. Secrets müssen konfiguriert sein
**In Gitea:**
```
https://git.michaelschiemer.de/michael/michaelschiemer/settings/secrets/actions
```
**Prüfen:**
- [ ] `REGISTRY_USER` vorhanden
- [ ] `REGISTRY_PASSWORD` vorhanden
- [ ] `SSH_PRIVATE_KEY` vorhanden
### 5. SSH-Zugriff muss funktionieren
**Prüfen:**
```bash
# Test SSH-Verbindung
ssh -i ~/.ssh/production deploy@94.16.110.151 "echo 'SSH OK'"
```
---
## 🧪 Pre-Deployment Test
### Test 1: Ansible-Verbindung
```bash
cd deployment/ansible
ansible -i inventory/production.yml production -m ping
# Sollte: production | SUCCESS
```
### Test 2: Application Stack Status
```bash
cd deployment/ansible
ansible -i inventory/production.yml production -a "test -f ~/deployment/stacks/application/docker-compose.yml && echo 'OK' || echo 'MISSING'"
# Sollte: "OK"
```
### Test 3: Docker Registry Login (vom Runner aus)
```bash
# Vom Development-Machine (wo Runner läuft)
docker login git.michaelschiemer.de:5000 -u <registry-user> -p <registry-password>
# Sollte: Login Succeeded
```
---
## ⚠️ Häufige Probleme
### Problem: "Application Stack nicht deployed"
**Symptom:**
- `docker-compose.yml not found` Fehler
**Lösung:**
```bash
cd deployment/ansible
ansible-playbook -i inventory/production.yml playbooks/setup-infrastructure.yml
```
### Problem: "Registry Login fehlschlägt"
**Symptom:**
- `unauthorized: authentication required`
**Lösung:**
1. Prüfe Secrets in Gitea
2. Prüfe Registry-Credentials
3. Teste manuell: `docker login git.michaelschiemer.de:5000`
### Problem: "SSH-Verbindung fehlschlägt"
**Symptom:**
- Ansible kann nicht zum Server verbinden
**Lösung:**
1. Prüfe SSH Key: `~/.ssh/production`
2. Prüfe SSH Config
3. Teste manuell: `ssh -i ~/.ssh/production deploy@94.16.110.151`
---
## ✅ Alles OK? Dann los!
```bash
git add .
git commit -m "feat: Add feature"
git push origin main
```
**Pipeline-Status:**
```
https://git.michaelschiemer.de/michael/michaelschiemer/actions
```
---
**Viel Erfolg!** 🚀

@@ -1,2 +0,0 @@
# Deployment test 2025-10-31T01:46:51Z
# Fri Oct 31 02:56:51 AM CET 2025

@@ -1,246 +0,0 @@
# Finale Deployment Checklist - Code deployen
**Stand:** 2025-10-31
**Status:** ✅ Bereit für Code-Deployments!
---
## ✅ Was ist bereits fertig?
### Infrastructure (100%)
- ✅ Traefik (Reverse Proxy & SSL)
- ✅ PostgreSQL (Database)
- ✅ Docker Registry (Private Registry)
- ✅ Gitea (Git Server)
- ✅ Monitoring (Portainer, Grafana, Prometheus)
- ✅ WireGuard VPN
### Application Stack (100%)
- ✅ Integration in `setup-infrastructure.yml`
- ✅ `.env` Template (`application.env.j2`)
- ✅ Database-Migration nach Deployment
- ✅ Health-Checks nach Deployment
- ✅ `docker-compose.yml` wird automatisch kopiert
- ✅ Nginx-Konfiguration wird automatisch kopiert
### CI/CD Pipeline (100%)
- ✅ Workflows vorhanden (production-deploy.yml)
- ✅ Gitea Runner läuft und ist registriert
- ✅ Secrets konfiguriert (REGISTRY_USER, REGISTRY_PASSWORD, SSH_PRIVATE_KEY)
- ✅ Ansible Playbooks vorhanden
- ✅ Deployment-Playbook mit Pre-Flight Checks
### Dokumentation (100%)
- ✅ Umfangreiche Guides vorhanden
- ✅ Quick Start Guide
- ✅ Deployment-Dokumentation
- ✅ Troubleshooting-Guides
---
## 🚀 Code deployen - So geht's!
### Einfachste Methode
```bash
# 1. Code ändern
# ... Dateien bearbeiten ...
# 2. Committen
git add .
git commit -m "feat: Add new feature"
# 3. Pushen → Automatisches Deployment!
git push origin main
```
**Pipeline-Status:** `https://git.michaelschiemer.de/michael/michaelschiemer/actions`
**Zeit:** ~8-15 Minuten
---
## ⚠️ Wichtiger Hinweis: Erstmaliges Deployment
**Falls der Application Stack noch nicht deployed ist:**
Das `deploy-update.yml` Playbook prüft automatisch, ob `docker-compose.yml` existiert. Falls nicht, gibt es eine klare Fehlermeldung.
**Vor dem ersten Code-Push (falls Stack noch nicht deployed):**
```bash
cd deployment/ansible
ansible-playbook -i inventory/production.yml playbooks/setup-infrastructure.yml
```
Dieses Playbook:
- ✅ Deployed alle Infrastructure Stacks
- ✅ **Deployed Application Stack** (inkl. docker-compose.yml, .env, nginx config)
- ✅ Führt Database-Migration aus
- ✅ Prüft Health-Checks
**Nach diesem Setup:** Ab jetzt funktioniert `git push origin main` automatisch!
---
## 📋 Pre-Deployment Check
### Automatische Checks
Das `deploy-update.yml` Playbook prüft automatisch:
- ✅ Docker Service läuft
- ✅ Application Stack Verzeichnis existiert
- ✅ `docker-compose.yml` existiert (mit klarer Fehlermeldung falls nicht)
- ✅ Backup-Verzeichnis kann erstellt werden
### Manuelle Checks (Optional)
**Application Stack Status prüfen:**
```bash
ssh deploy@94.16.110.151
# Prüfe docker-compose.yml
test -f ~/deployment/stacks/application/docker-compose.yml && echo "✅ OK" || echo "❌ Fehlt - Führe setup-infrastructure.yml aus"
# Prüfe .env
test -f ~/deployment/stacks/application/.env && echo "✅ OK" || echo "❌ Fehlt - Führe setup-infrastructure.yml aus"
# Prüfe Container
cd ~/deployment/stacks/application
docker compose ps
```
**Gitea Runner Status:**
```bash
cd deployment/gitea-runner
docker compose ps
# Sollte zeigen: gitea-runner "Up"
```
**Secrets prüfen:**
```
https://git.michaelschiemer.de/michael/michaelschiemer/settings/secrets/actions
```
- REGISTRY_USER ✅
- REGISTRY_PASSWORD ✅
- SSH_PRIVATE_KEY ✅
---
## 🔍 Was passiert beim Deployment?
### Automatischer Ablauf
**1. CI/CD Pipeline startet** (bei Push zu `main`)
- Tests (~2-5 Min)
- Build (~3-5 Min)
- Push zur Registry (~1-2 Min)
**2. Ansible Deployment** (~2-4 Min)
- Pre-Flight Checks (Docker läuft, docker-compose.yml existiert)
- Backup erstellen
- Registry Login
- Neues Image pullen
- docker-compose.yml aktualisieren (Image-Tag ersetzen)
- Stack neu starten (`--force-recreate`)
- Health-Checks warten
**3. Health-Check** (~1 Min)
- Application Health-Check
- Bei Fehler: Automatischer Rollback
**Gesamtzeit:** ~8-15 Minuten
---
## ✅ Erfolgreiches Deployment erkennen
### In Gitea Actions
```
https://git.michaelschiemer.de/michael/michaelschiemer/actions
```
**Erfolg:**
- 🟢 Alle Jobs grün
- ✅ "Deploy via Ansible" erfolgreich
- ✅ Health-Check erfolgreich
### Auf Production-Server
```bash
# Container-Status
ssh deploy@94.16.110.151 "cd ~/deployment/stacks/application && docker compose ps"
# Application Health-Check
curl https://michaelschiemer.de/health
```
---
## 🆘 Troubleshooting
### Problem: "docker-compose.yml not found"
**Fehlermeldung:**
```
Application Stack docker-compose.yml not found at /home/deploy/deployment/stacks/application/docker-compose.yml
The Application Stack must be deployed first via:
ansible-playbook -i inventory/production.yml playbooks/setup-infrastructure.yml
```
**Lösung:**
```bash
cd deployment/ansible
ansible-playbook -i inventory/production.yml playbooks/setup-infrastructure.yml
```
### Problem: "Failed to pull image"
**Lösung:**
1. Prüfe Registry-Credentials in Gitea Secrets
2. Teste manuell: `docker login git.michaelschiemer.de:5000`
3. Prüfe ob Image in Registry vorhanden ist
### Problem: "Health-Check failed"
**Lösung:**
- Automatischer Rollback wird ausgeführt
- Logs prüfen: `docker compose logs app`
- Manueller Rollback: `ansible-playbook -i inventory/production.yml playbooks/rollback.yml`
---
## 📚 Weitere Dokumentation
- **[QUICK_START.md](QUICK_START.md)** - Schnellstart
- **[CODE_CHANGE_WORKFLOW.md](CODE_CHANGE_WORKFLOW.md)** - Codeänderungen pushen
- **[APPLICATION_STACK_DEPLOYMENT.md](APPLICATION_STACK_DEPLOYMENT.md)** - Deployment-Details
- **[DEPLOYMENT_PREFLIGHT_CHECK.md](DEPLOYMENT_PREFLIGHT_CHECK.md)** - Pre-Flight Checks
- **[CI_CD_STATUS.md](CI_CD_STATUS.md)** - CI/CD Status
---
## 🎉 Ready to Deploy!
**Alles ist bereit!**
**Nächster Schritt:**
1. **Prüfe ob Application Stack deployed ist** (siehe Pre-Deployment Check oben)
2. **Falls nicht:** `setup-infrastructure.yml` ausführen
3. **Dann:** Code pushen und Deployment genießen! 🚀
```bash
git add .
git commit -m "feat: Add feature"
git push origin main
```
**Pipeline-Status:**
```
https://git.michaelschiemer.de/michael/michaelschiemer/actions
```
---
**Viel Erfolg beim ersten Deployment!** 🎉

@@ -0,0 +1,239 @@
# Git-basiertes Deployment - Test Plan
**Datum:** 2025-10-31
**Status:** ⏳ Ready to Test
---
## 🎯 Ziel
Testen, ob Container automatisch Code aus Git-Repository klont/pullt beim Start.
---
## ✅ Vorbereitung
### 1. Prüfen ob alles korrekt konfiguriert ist
#### Docker Entrypoint
- ✅ `docker/entrypoint.sh` - Git Clone/Pull implementiert
- ✅ `Dockerfile.production` - Git + Composer installiert
#### Docker Compose
- ✅ `deployment/stacks/application/docker-compose.yml` - Git Environment Variables vorhanden
#### Ansible Playbook
- ✅ `deployment/ansible/playbooks/sync-code.yml` - Erstellt
---
## 🧪 Test-Plan
### Test 1: Git-Variablen in .env setzen
**Ziel:** Prüfen ob Git-Konfiguration in .env gesetzt werden kann
```bash
# SSH zum Production-Server
ssh deploy@94.16.110.151
# Prüfen ob .env existiert
cd ~/deployment/stacks/application
test -f .env && echo "✅ OK" || echo "❌ Fehlt"
# Git-Variablen manuell hinzufügen (für Test)
cat >> .env << 'EOF'
# Git Repository Configuration
GIT_REPOSITORY_URL=https://git.michaelschiemer.de/michael/michaelschiemer.git
GIT_BRANCH=main
GIT_TOKEN=
EOF
```
**Erwartetes Ergebnis:**
- ✅ .env existiert
- ✅ Git-Variablen können hinzugefügt werden
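Das `cat >> .env` oben hängt die Variablen bei jedem Lauf erneut an. Eine idempotente Variante als Skizze (Funktionsname frei gewählt):

```bash
# Skizze: hängt eine VAR=WERT-Zeile nur an, wenn die Variable noch nicht
# in der Datei steht - ein wiederholter Testlauf erzeugt so keine Duplikate.
add_env_var_once() {
  file="$1"; line="$2"
  grep -q "^${line%%=*}=" "$file" || printf '%s\n' "$line" >> "$file"
}

# Beispiele:
#   add_env_var_once .env 'GIT_REPOSITORY_URL=https://git.michaelschiemer.de/michael/michaelschiemer.git'
#   add_env_var_once .env 'GIT_BRANCH=main'
```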
---
### Test 2: Container mit Git-Variablen starten
**Ziel:** Prüfen ob Container Git Clone/Pull beim Start ausführt
```bash
# Auf Production-Server
cd ~/deployment/stacks/application
# Container neu starten
docker compose restart app
# Logs prüfen (sollte Git Clone/Pull zeigen)
docker logs app --tail 100 | grep -E "(Git|Clone|Pull|✅|❌)"
```
**Erwartetes Ergebnis:**
- ✅ Logs zeigen "📥 Cloning/Pulling code from Git repository..."
- ✅ Logs zeigen "📥 Cloning repository from ..." oder "🔄 Pulling latest changes..."
- ✅ Logs zeigen "✅ Git sync completed"
**Falls Fehler:**
- ❌ Logs zeigen Fehler → Troubleshooting nötig
- ❌ Keine Git-Logs → Entrypoint nicht korrekt oder Git-Variablen nicht gesetzt
---
### Test 3: Code-Verifikation im Container
**Ziel:** Prüfen ob Code tatsächlich im Container ist
```bash
# Auf Production-Server
docker exec app ls -la /var/www/html/ | head -20
docker exec app test -f /var/www/html/composer.json && echo "✅ composer.json vorhanden" || echo "❌ Fehlt"
docker exec app test -d /var/www/html/src && echo "✅ src/ vorhanden" || echo "❌ Fehlt"
docker exec app test -d /var/www/html/.git && echo "✅ .git vorhanden" || echo "❌ Fehlt"
```
**Erwartetes Ergebnis:**
- ✅ Dateien sind im Container
- ✅ `.git` Verzeichnis existiert (zeigt dass Git Clone funktioniert hat)
---
### Test 4: Code-Update testen (Git Pull)
**Ziel:** Prüfen ob `sync-code.yml` Playbook funktioniert
```bash
# Lokal (auf Dev-Machine)
cd deployment/ansible
# Sync-Code Playbook ausführen
ansible-playbook -i inventory/production.yml \
playbooks/sync-code.yml \
-e "git_branch=main"
# Container-Logs prüfen (auf Production-Server)
ssh deploy@94.16.110.151
docker logs app --tail 50 | grep -E "(Git|Pull|✅)"
```
**Erwartetes Ergebnis:**
- ✅ Playbook führt erfolgreich aus
- ✅ Container wird neu gestartet
- ✅ Logs zeigen "🔄 Pulling latest changes..."
- ✅ Code wird aktualisiert
---
### Test 5: Application Health Check
**Ziel:** Prüfen ob Application nach Git-Sync noch funktioniert
```bash
# Health Check
curl -f https://michaelschiemer.de/health || echo "❌ Health Check fehlgeschlagen"
# Application Test
curl -f https://michaelschiemer.de/ || echo "❌ Application fehlgeschlagen"
```
**Erwartetes Ergebnis:**
- ✅ Health Check erfolgreich
- ✅ Application läuft
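Direkt nach einem Container-Neustart kann der Health Check noch kurz fehlschlagen. Eine Retry-Skizze (der Probe-Befehl wird als Parameter übergeben; Funktionsname und Defaults sind Annahmen):

```bash
# Skizze: wiederholt einen Probe-Befehl, bis er Erfolg meldet oder die
# Versuche aufgebraucht sind. Der Container braucht nach einem Git-Sync
# einige Sekunden zum Hochfahren.
wait_for_health() {
  probe="$1"; retries="${2:-10}"; delay="${3:-5}"
  i=0
  while [ "$i" -lt "$retries" ]; do
    if $probe; then
      echo "✅ Health Check erfolgreich"
      return 0
    fi
    i=$((i + 1))
    sleep "$delay"
  done
  echo "❌ Health Check fehlgeschlagen nach ${retries} Versuchen"
  return 1
}

# Beispiel:
#   wait_for_health "curl -sf -o /dev/null https://michaelschiemer.de/health"
```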
---
## 🔧 Troubleshooting
### Problem: Container shows no Git logs
**Possible causes:**
1. `GIT_REPOSITORY_URL` not set in .env
2. Entrypoint script broken
3. Git not installed in the container
**Solution:**
```bash
# Check .env
cd ~/deployment/stacks/application
grep GIT_REPOSITORY_URL .env
# Check the entrypoint
docker exec app cat /usr/local/bin/entrypoint.sh | grep -A 10 "GIT_REPOSITORY_URL"
# Check the Git installation
docker exec app which git
docker exec app git --version
```
---
### Problem: Git clone failed
**Possible causes:**
1. Repository not reachable from the server
2. Wrong credentials
3. Branch does not exist
**Solution:**
```bash
# Check repository access
docker exec app git clone --branch main --depth 1 https://git.michaelschiemer.de/michael/michaelschiemer.git /tmp/test-clone
# Check the logs
docker logs app --tail 100 | grep -i "error\|fail"
```
---
### Problem: Composer install failed
**Possible causes:**
1. Composer not installed
2. Network problems while downloading dependencies
**Solution:**
```bash
# Check Composer
docker exec app which composer
docker exec app composer --version
# Test manually
docker exec app sh -c "cd /var/www/html && composer install --no-dev --optimize-autoloader --no-interaction"
```
---
## 📋 Test Checklist
### Before the test:
- [ ] Git repository is reachable from the production server
- [ ] Git credentials are available (for a private repository)
- [ ] .env file exists on the production server
- [ ] Container image has been rebuilt (with Git + Composer)
### During the test:
- [ ] Test 1: Set Git variables in .env
- [ ] Test 2: Start the container with Git variables
- [ ] Test 3: Code verification in the container
- [ ] Test 4: Test a code update (Git pull)
- [ ] Test 5: Application health check
### After the test:
- [ ] All tests passed
- [ ] Application runs correctly
- [ ] Code is up to date
---
## 🚀 Next Steps After a Successful Test
1. ✅ Document the Git-based deployment
2. ✅ Integrate it into the CI/CD pipeline (optional)
3. ✅ Update the documentation
---
**Ready to test!** 🧪

View File

@@ -1,155 +0,0 @@
# Native Workflow Without GitHub Actions
## Problem
The current workflow (`production-deploy.yml`) uses GitHub Actions such as:
- `actions/checkout@v4`
- `shivammathur/setup-php@v2`
- `actions/cache@v3`
- `docker/setup-buildx-action@v3`
- `docker/build-push-action@v5`
These actions have to be downloaded from GitHub, which can cause failed runs when:
- GitHub is unreachable
- the actions cannot be downloaded
- timeouts occur
## Solution: Native Workflow
The file `.gitea/workflows/production-deploy-native.yml` uses **only shell commands** and no GitHub Actions:
### Advantages
1. **No GitHub dependency**: Works completely offline
2. **Faster**: No action downloads
3. **Fewer points of failure**: Direct shell commands instead of actions
4. **Easier to debug**: Standard bash scripts
### Changes
#### 1. Checkout
**Before:**
```yaml
- uses: actions/checkout@v4
```
**After:**
```bash
git clone --depth 1 --branch "$REF_NAME" \
"https://git.michaelschiemer.de/${{ github.repository }}.git" \
/workspace/repo
```
#### 2. PHP Setup
**Before:**
```yaml
- uses: shivammathur/setup-php@v2
with:
php-version: '8.3'
```
**After:**
```bash
apt-get update
apt-get install -y php8.3 php8.3-cli php8.3-mbstring ...
```
#### 3. Cache
**Before:**
```yaml
- uses: actions/cache@v3
```
**After:**
```bash
# Simple file-based caching
if [ -d "/tmp/composer-cache/vendor" ]; then
cp -r /tmp/composer-cache/vendor /workspace/repo/vendor
fi
```
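That restore/save pair can be sketched as two runnable helpers (temporary directories stand in for `/tmp/composer-cache` and `/workspace/repo`, and the function names are illustrative, not part of the workflow):

```bash
# Sketch of the file-based cache used instead of actions/cache.
CACHE_DIR=$(mktemp -d)   # stands in for /tmp/composer-cache
WORK_DIR=$(mktemp -d)    # stands in for /workspace/repo

restore_cache() {
  if [ -d "$CACHE_DIR/vendor" ]; then
    cp -r "$CACHE_DIR/vendor" "$WORK_DIR/vendor"
    echo "cache restored"
  else
    echo "cache miss"
  fi
}

save_cache() {
  rm -rf "$CACHE_DIR/vendor"
  cp -r "$WORK_DIR/vendor" "$CACHE_DIR/vendor"
  echo "cache saved"
}

restore_cache                 # first run: nothing cached yet
mkdir -p "$WORK_DIR/vendor"   # pretend `composer install` created vendor/
save_cache
rm -rf "$WORK_DIR/vendor"
restore_cache                 # second run: restored from cache
```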
#### 4. Docker Buildx
**Before:**
```yaml
- uses: docker/setup-buildx-action@v3
```
**After:**
```bash
docker buildx create --name builder --use || docker buildx use builder
docker buildx inspect --bootstrap
```
#### 5. Docker Build/Push
**Before:**
```yaml
- uses: docker/build-push-action@v5
```
**After:**
```bash
docker buildx build \
--file ./Dockerfile.production \
--tag $REGISTRY/$IMAGE_NAME:latest \
--push \
.
```
## Usage
### Option 1: Activate the native workflow
1. **Rename the files:**
```bash
mv .gitea/workflows/production-deploy.yml .gitea/workflows/production-deploy-with-actions.yml.bak
mv .gitea/workflows/production-deploy-native.yml .gitea/workflows/production-deploy.yml
```
2. **Commit and push:**
```bash
git add .gitea/workflows/production-deploy.yml
git commit -m "chore: switch to native workflow without GitHub Actions"
git push origin main
```
### Option 2: Test both in parallel
Run both workflows side by side:
- `production-deploy.yml` - with actions (current)
- `production-deploy-native.yml` - native (new version)
## Gitea Actions Configuration
**Important:** With the native version, `DEFAULT_ACTIONS_URL` is **no longer needed** in the Gitea configuration.
Keeping it does no harm, though, in case future workflows need it.
## Debugging
If the native workflow does not work:
1. **Check the Git clone:**
```bash
# Inside the runner container
git clone --depth 1 https://git.michaelschiemer.de/michael/michaelschiemer.git /tmp/test
```
2. **Check Docker buildx:**
```bash
docker buildx version
docker buildx ls
```
3. **Check the PHP installation:**
```bash
php --version
php -m  # lists installed modules
```
## Recommendation
**For stability:** Use the native version (`production-deploy-native.yml`)
**For compatibility:** Stay with the actions version (`production-deploy.yml`)
The native version should be more stable because it does not depend on external downloads.

deployment/QUICK_TEST.md Normal file
View File

@@ -0,0 +1,45 @@
# Quick Git Deployment Test
## ✅ Verification Without a Container
All files are configured correctly:
### 1. Entrypoint Script (`docker/entrypoint.sh`)
- ✅ `GIT_REPOSITORY_URL` check present (line 30)
- ✅ Git clone/pull functionality implemented
- ✅ Composer install integrated
### 2. Docker Compose (`deployment/stacks/application/docker-compose.yml`)
- ✅ `GIT_REPOSITORY_URL` environment variable present (line 17)
- ✅ `GIT_BRANCH`, `GIT_TOKEN`, `GIT_USERNAME`, `GIT_PASSWORD` present
### 3. Ansible Template (`deployment/ansible/templates/application.env.j2`)
- ✅ Git variables defined in the template
### 4. Dockerfile (`Dockerfile.production`)
- ✅ Git installed
- ✅ Composer installed
- ✅ Entrypoint copied
---
## 🧪 Quick Test (Without Starting a Container)
```bash
# Check that GIT_REPOSITORY_URL is present everywhere
grep -c "GIT_REPOSITORY_URL" docker/entrypoint.sh
grep -c "GIT_REPOSITORY_URL" deployment/stacks/application/docker-compose.yml
grep -c "GIT_REPOSITORY_URL" deployment/ansible/templates/application.env.j2
```
**Expected result:** Each command should print a count > 0
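The three `grep` calls can be folded into one helper. A minimal sketch (`check_var_present` is illustrative; it is demoed against a temporary file here, since the project files only exist in the working tree):

```bash
# Sketch: assert that a variable name occurs in every given file.
# In the project you would pass docker/entrypoint.sh, the compose file
# and the Ansible template as arguments.
check_var_present() {
  local var="$1" f rc=0
  shift
  for f in "$@"; do
    if grep -q "$var" "$f"; then
      echo "OK: $var found in $f"
    else
      echo "MISSING: $var not found in $f"
      rc=1
    fi
  done
  return $rc
}

# Demo against a temporary file:
tmp=$(mktemp)
echo "GIT_REPOSITORY_URL=https://git.michaelschiemer.de/michael/michaelschiemer.git" > "$tmp"
check_var_present GIT_REPOSITORY_URL "$tmp"
rm -f "$tmp"
```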
---
## 🚀 Next Steps for Testing
1. **The image is already built**
2. **Set the Git variables in .env** (on the production server)
3. **Restart the container** and check the logs
See `TEST_GIT_DEPLOYMENT.md` for detailed instructions.

View File

@@ -54,8 +54,6 @@ deployment/
│ ├── playbooks/ # Deployment automation
│ ├── inventory/ # Server inventory
│ └── secrets/ # Ansible Vault secrets
├── runner/ # Gitea Actions runner (dev machine)
├── scripts/ # Helper scripts
└── docs/ # Deployment documentation
```
@@ -120,16 +118,31 @@ Each stack has its own README with detailed configuration:
## Deployment Commands
### Manual Deployment
### Deploy code (image-based)
```bash
./scripts/deploy.sh
cd deployment/ansible
ansible-playbook -i inventory/production.yml \
playbooks/deploy-update.yml \
-e "image_tag=abc1234-1696234567"
```
### Rollback to Previous Version
### Sync code (Git-based)
```bash
./scripts/rollback.sh
cd deployment/ansible
ansible-playbook -i inventory/production.yml \
playbooks/sync-code.yml \
-e "git_branch=main"
```
### Rollback to a previous version
```bash
cd deployment/ansible
ansible-playbook -i inventory/production.yml \
playbooks/rollback.yml
```
**📖 Full command reference:** See [DEPLOYMENT_COMMANDS.md](DEPLOYMENT_COMMANDS.md)
### Update Specific Stack
```bash
cd stacks/<stack-name>

View File

@@ -1,257 +0,0 @@
# ✅ Ready to Deploy - Checklist
**As of:** 2025-10-31
**Status:** ✅ Ready for code deployments!
---
## ✅ Fully Configured
### Infrastructure
- ✅ Traefik (reverse proxy & SSL)
- ✅ PostgreSQL (database)
- ✅ Docker Registry (private registry)
- ✅ Gitea (Git server)
- ✅ Monitoring (Portainer, Grafana, Prometheus)
- ✅ WireGuard VPN
### Application Stack
- ✅ Integrated into `setup-infrastructure.yml`
- ✅ `.env` template (`application.env.j2`)
- ✅ Database migration after deployment
- ✅ Health checks after deployment
### CI/CD Pipeline
- ✅ Workflows in place (production-deploy.yml)
- ✅ Gitea runner is running and registered
- ✅ Secrets configured (REGISTRY_USER, REGISTRY_PASSWORD, SSH_PRIVATE_KEY)
- ✅ Ansible playbooks in place
### Documentation
- ✅ Extensive guides available
- ✅ Quick start guide
- ✅ Deployment documentation
---
## 🚀 Deploying Code - How It Works
### The simplest method
```bash
# 1. Change code
# ... edit files ...
# 2. Commit
git add .
git commit -m "feat: Add new feature"
# 3. Push → automatic deployment!
git push origin main
```
**Pipeline status:** `https://git.michaelschiemer.de/michael/michaelschiemer/actions`
---
## ⚠️ Important Note: First-Time Deployment
**If the application stack has not been deployed yet:**
The `deploy-update.yml` playbook expects the application stack to already exist.
**Before the first code push:**
```bash
cd deployment/ansible
ansible-playbook -i inventory/production.yml playbooks/setup-infrastructure.yml
```
This playbook deploys:
- All infrastructure stacks (Traefik, PostgreSQL, Registry, Gitea, Monitoring)
- **The application stack** (with docker-compose.yml and .env)
**After this setup:** From then on, `git push origin main` deploys automatically!
---
## 📋 Pre-Deployment Checklist
### ✅ Everything should already be done, but to be safe:
- [x] Infrastructure stacks deployed ✅
- [ ] **Application stack deployed** ⚠️ Verify!
- [x] Gitea runner running ✅
- [x] Secrets configured ✅
- [x] Workflows in place ✅
### Verify the application stack deployment
```bash
# SSH to the production server
ssh deploy@94.16.110.151
# Check whether the application stack exists
test -f ~/deployment/stacks/application/docker-compose.yml && echo "✅ Present" || echo "❌ Missing"
# Check whether .env exists
test -f ~/deployment/stacks/application/.env && echo "✅ Present" || echo "❌ Missing"
# Check the container status
cd ~/deployment/stacks/application
docker compose ps
```
**If missing:** See "Important Note: First-Time Deployment" above.
---
## 🎯 First Code Push
### Option 1: Push directly (if the stack is already deployed)
```bash
# Test commit
echo "# Deployment Test $(date)" >> README.md
git add README.md
git commit -m "test: First deployment via CI/CD pipeline"
git push origin main
# Watch the pipeline:
# → https://git.michaelschiemer.de/michael/michaelschiemer/actions
```
### Option 2: Deploy the application stack first
```bash
# Deploy the application stack (including all infrastructure stacks)
cd deployment/ansible
ansible-playbook -i inventory/production.yml playbooks/setup-infrastructure.yml
# Afterwards: first code push
git add .
git commit -m "feat: Initial application deployment"
git push origin main
```
---
## 🔍 What Happens During a Deployment
### Pipeline stages (automatic):
1. **Tests** (~2-5 min)
   - PHP Pest tests
   - PHPStan code quality
   - Code style check
2. **Build** (~3-5 min)
   - Docker image build
   - Image is tagged: `<short-sha>-<timestamp>`
   - Image is pushed to the registry
3. **Deploy** (~2-4 min)
   - SSH to the production server
   - The Ansible playbook runs:
     - Create a backup
     - Registry login
     - Pull the new image
     - Update docker-compose.yml
     - Restart the stack
     - Wait for health checks
4. **Health check** (~1 min)
   - Application health check
   - On failure: automatic rollback
**Total time:** ~8-15 minutes
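The `<short-sha>-<timestamp>` tag from the build step can be derived like this (a sketch with a hard-coded SHA; the real workflow reads the commit SHA from the CI context):

```bash
# Sketch of the image tag scheme used by the build step: <short-sha>-<timestamp>.
# The full SHA below is a placeholder for the CI-provided commit SHA.
full_sha="abc1234def56789000000000000000000000000"
short_sha=$(printf '%s' "$full_sha" | cut -c1-7)
timestamp=$(date +%s)
image_tag="${short_sha}-${timestamp}"
echo "registry.michaelschiemer.de/framework:${image_tag}"
```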
---
## ✅ Recognizing a Successful Deployment
### In Gitea Actions
```
https://git.michaelschiemer.de/michael/michaelschiemer/actions
```
**Success:**
- 🟢 All jobs green
- ✅ "Deploy via Ansible" succeeded
- ✅ Health check succeeded
### On the production server
```bash
# SSH to the server
ssh deploy@94.16.110.151
# Check the container status
cd ~/deployment/stacks/application
docker compose ps
# All containers should be "healthy"
# Check the application
curl https://michaelschiemer.de/health
# Should return "healthy"
```
---
## 🆘 Troubleshooting
### Problem: "docker-compose.yml not found"
**Solution:**
```bash
# Deploy the application stack first
cd deployment/ansible
ansible-playbook -i inventory/production.yml playbooks/setup-infrastructure.yml
```
### Problem: Pipeline fails
**Tests failed:**
- Run the tests locally and fix the errors
- `./vendor/bin/pest`
- `composer cs`
**Build failed:**
- Test the Docker build locally
- `docker build -f Dockerfile.production -t test .`
**Deployment failed:**
- Check the workflow logs in Gitea Actions
- Check the server logs: `ssh deploy@94.16.110.151 "cd ~/deployment/stacks/application && docker compose logs"`
### Problem: Health check failed
**Automatic rollback:**
- The pipeline performs a rollback automatically
- The previous version is restored
**Manual rollback:**
```bash
cd deployment/ansible
ansible-playbook -i inventory/production.yml playbooks/rollback.yml
```
---
## 📚 Further Documentation
- **[QUICK_START.md](QUICK_START.md)** - Quick start guide
- **[CODE_CHANGE_WORKFLOW.md](CODE_CHANGE_WORKFLOW.md)** - Pushing code changes
- **[APPLICATION_STACK_DEPLOYMENT.md](APPLICATION_STACK_DEPLOYMENT.md)** - Deployment details
- **[CI_CD_STATUS.md](CI_CD_STATUS.md)** - CI/CD status
---
## 🎉 Ready!
**Everything is ready for code deployments!**
**Next steps:**
1. Check whether the application stack is deployed (see above)
2. If not: run `setup-infrastructure.yml`
3. Then: push code and enjoy the deployment! 🚀

View File

@@ -17,7 +17,8 @@ Should contain:
```ini
[actions]
ENABLED = true
DEFAULT_ACTIONS_URL = https://github.com
# Do NOT set DEFAULT_ACTIONS_URL - it will automatically use Gitea's own instance
# Setting DEFAULT_ACTIONS_URL to custom URLs is no longer supported by Gitea
```
#### 2. Timeouts on long steps
@@ -43,11 +44,11 @@ docker compose restart gitea-runner
**Solution:** Check whether buildx is running correctly. Alternatively, use a plain `docker build`.
#### 4. GitHub variables in Gitea
#### 4. Gitea variables (formerly GitHub-compatible)
**Symptom:** `${{ github.sha }}` ist leer oder falsch
**Solution:** Gitea Actions should support `github.*` variables, but sometimes `gitea.*` works better.
**Solution:** Gitea Actions supports `github.*` variables for compatibility, but `gitea.*` is the native variant.
**Test:** Check in the workflow logs which variables are available:
```yaml

View File

@@ -13,7 +13,9 @@ docker_registry: "localhost:5000"
docker_registry_url: "localhost:5000"
docker_registry_external: "registry.michaelschiemer.de"
docker_registry_username_default: "admin"
docker_registry_password_default: "registry-secure-password-2025"
# docker_registry_password_default should be set in vault as vault_docker_registry_password
# If not using vault, override via -e docker_registry_password_default="your-password"
registry_auth_path: "{{ stacks_base_path }}/registry/auth"
# Application Configuration
app_name: "framework"
@@ -21,6 +23,18 @@ app_domain: "michaelschiemer.de"
app_image: "{{ docker_registry }}/{{ app_name }}"
app_image_external: "{{ docker_registry_external }}/{{ app_name }}"
# Domain Configuration
gitea_domain: "git.michaelschiemer.de"
# Email Configuration
mail_from_address: "noreply@{{ app_domain }}"
acme_email: "kontakt@{{ app_domain }}"
# SSL Certificate Domains
ssl_domains:
- "{{ gitea_domain }}"
- "{{ app_domain }}"
# Health Check Configuration
health_check_url: "https://{{ app_domain }}/health"
health_check_retries: 10
@@ -34,14 +48,26 @@ rollback_timeout: 300
wait_timeout: 60
# Git Configuration (for sync-code.yml)
git_repository_url_default: "https://git.michaelschiemer.de/michael/michaelschiemer.git"
git_repository_url_default: "https://{{ gitea_domain }}/michael/michaelschiemer.git"
git_branch_default: "main"
git_token: "{{ vault_git_token | default('') }}"
git_username: "{{ vault_git_username | default('') }}"
git_password: "{{ vault_git_password | default('') }}"
# Database Configuration
db_user_default: "postgres"
db_name_default: "michaelschiemer"
# MinIO Object Storage Configuration
minio_root_user: "{{ vault_minio_root_user | default('minioadmin') }}"
minio_root_password: "{{ vault_minio_root_password | default('') }}"
minio_api_domain: "minio-api.michaelschiemer.de"
minio_console_domain: "minio.michaelschiemer.de"
# WireGuard Configuration
wireguard_interface: "wg0"
wireguard_config_path: "/etc/wireguard"
wireguard_port_default: 51820
wireguard_network_default: "10.8.0.0/24"
wireguard_server_ip_default: "10.8.0.1"
wireguard_enable_ip_forwarding: true

View File

@@ -0,0 +1,7 @@
---
# Local inventory for running Ansible playbooks on localhost
all:
hosts:
localhost:
ansible_connection: local
ansible_python_interpreter: "{{ ansible_playbook_python }}"

View File

@@ -1,43 +1,16 @@
---
all:
hosts:
children:
production:
hosts:
server:
ansible_host: 94.16.110.151
ansible_user: deploy
ansible_python_interpreter: /usr/bin/python3
ansible_ssh_private_key_file: ~/.ssh/production
vars:
# Docker Registry
# Use localhost for internal access (registry only binds to 127.0.0.1:5000)
# External access via Traefik: registry.michaelschiemer.de
docker_registry: localhost:5000
docker_registry_url: localhost:5000
# Registry credentials (can be overridden via -e or vault)
# Defaults are set here, can be overridden by extra vars or vault
docker_registry_username_default: 'admin'
docker_registry_password_default: 'registry-secure-password-2025'
# Note: Centralized variables are defined in group_vars/production.yml
# Only override-specific variables should be here
# Application Configuration
app_name: framework
app_domain: michaelschiemer.de
app_image: "{{ docker_registry }}/{{ app_name }}"
# Docker Stack
stack_name: app
compose_file: /home/deploy/docker-compose.prod.yml
# Deployment Paths
deploy_user_home: /home/deploy
app_base_path: "{{ deploy_user_home }}/app"
secrets_path: "{{ deploy_user_home }}/secrets"
backups_path: "{{ deploy_user_home }}/backups"
# Health Check
health_check_url: "https://{{ app_domain }}/health"
health_check_retries: 10
health_check_delay: 10
# Rollback Configuration
max_rollback_versions: 5
rollback_timeout: 300
# Legacy compose_file reference (deprecated - stacks now use deployment/stacks/)
compose_file: "{{ stacks_base_path }}/application/docker-compose.yml"

View File

@@ -0,0 +1,68 @@
---
- name: Check Git Deployment Logs
hosts: production
gather_facts: yes
become: no
tasks:
- name: Get full container logs
shell: |
docker logs app --tail 100
args:
executable: /bin/bash
register: container_logs
changed_when: false
- name: Get Git-related logs
shell: |
docker logs app --tail 100 | grep -E "(Git|Clone|Pull|✅|❌|📥|📦|🔄|🗑️)" || echo "No Git-related logs found"
args:
executable: /bin/bash
register: git_logs
changed_when: false
- name: Check GIT_REPOSITORY_URL environment variable
shell: |
docker exec app env | grep GIT_REPOSITORY_URL || echo "GIT_REPOSITORY_URL not set"
args:
executable: /bin/bash
register: git_env
changed_when: false
ignore_errors: yes
- name: Check if .git directory exists
shell: |
docker exec app test -d /var/www/html/.git && echo "✅ Git repo present" || echo "❌ Git repo missing"
args:
executable: /bin/bash
register: git_repo_check
changed_when: false
ignore_errors: yes
- name: Check entrypoint script for Git functionality
shell: |
docker exec app cat /usr/local/bin/entrypoint.sh | grep -A 5 "GIT_REPOSITORY_URL" | head -10 || echo "Entrypoint script not found or no Git functionality"
args:
executable: /bin/bash
register: entrypoint_check
changed_when: false
ignore_errors: yes
- name: Display Git-related logs
debug:
msg:
- "=== Git-Related Logs ==="
- "{{ git_logs.stdout }}"
- ""
- "=== Git Environment Variable ==="
- "{{ git_env.stdout }}"
- ""
- "=== Git Repository Check ==="
- "{{ git_repo_check.stdout }}"
- ""
- "=== Entrypoint Git Check ==="
- "{{ entrypoint_check.stdout }}"
- name: Display full logs (last 50 lines)
debug:
msg: "{{ container_logs.stdout_lines[-50:] | join('\n') }}"

View File

@@ -22,8 +22,8 @@
- name: Derive docker registry credentials from vault when not provided
set_fact:
docker_registry_username: "{{ docker_registry_username | default(vault_docker_registry_username | default(docker_registry_username_default | default('admin'))) }}"
docker_registry_password: "{{ docker_registry_password | default(vault_docker_registry_password | default(docker_registry_password_default | default('registry-secure-password-2025'))) }}"
docker_registry_username: "{{ docker_registry_username | default(vault_docker_registry_username | default(docker_registry_username_default)) }}"
docker_registry_password: "{{ docker_registry_password | default(vault_docker_registry_password | default(docker_registry_password_default)) }}"
- name: Verify Docker is running
systemd:

View File

@@ -0,0 +1,81 @@
---
- name: Fix Gitea Actions Configuration (non-destructive)
hosts: production
become: no
gather_facts: yes
tasks:
- name: Check current Gitea Actions configuration
shell: |
docker exec gitea cat /data/gitea/conf/app.ini 2>/dev/null | grep -A 5 "\[actions\]" || echo "No actions section found"
register: current_config
changed_when: false
ignore_errors: yes
- name: Backup existing app.ini
shell: |
docker exec gitea cp /data/gitea/conf/app.ini /data/gitea/conf/app.ini.backup.$(date +%Y%m%d_%H%M%S)
changed_when: false
ignore_errors: yes
- name: Copy app.ini from container for editing
shell: |
docker cp gitea:/data/gitea/conf/app.ini /tmp/gitea_app_ini_$$
register: copy_result
- name: Update app.ini Actions section
shell: |
# Remove DEFAULT_ACTIONS_URL line if it exists in [actions] section
sed -i '/^\[actions\]/,/^\[/{ /^DEFAULT_ACTIONS_URL/d; }' /tmp/gitea_app_ini_$$
# Ensure ENABLED = true in [actions] section
if grep -q "^\[actions\]" /tmp/gitea_app_ini_$$; then
# Section exists - ensure ENABLED = true
sed -i '/^\[actions\]/,/^\[/{ s/^ENABLED.*/ENABLED = true/; }' /tmp/gitea_app_ini_$$
# If ENABLED line doesn't exist, add it
if ! grep -A 10 "^\[actions\]" /tmp/gitea_app_ini_$$ | grep -q "^ENABLED"; then
sed -i '/^\[actions\]/a ENABLED = true' /tmp/gitea_app_ini_$$
fi
else
# Section doesn't exist - add it
echo "" >> /tmp/gitea_app_ini_$$
echo "[actions]" >> /tmp/gitea_app_ini_$$
echo "ENABLED = true" >> /tmp/gitea_app_ini_$$
fi
args:
executable: /bin/bash
register: config_updated
- name: Copy updated app.ini back to container
shell: |
docker cp /tmp/gitea_app_ini_$$ gitea:/data/gitea/conf/app.ini
rm -f /tmp/gitea_app_ini_$$
when: config_updated.changed | default(false)
- name: Verify Actions configuration after update
shell: |
docker exec gitea cat /data/gitea/conf/app.ini | grep -A 5 "\[actions\]"
register: updated_config
changed_when: false
- name: Restart Gitea to apply configuration
shell: |
cd {{ stacks_base_path }}/gitea
docker compose restart gitea
when: config_updated.changed | default(false)
- name: Wait for Gitea to be ready
wait_for:
timeout: 60
when: config_updated.changed | default(false)
- name: Display configuration result
debug:
msg:
- "=== Gitea Actions Configuration Fixed ==="
- ""
- "Current [actions] configuration:"
- "{{ updated_config.stdout }}"
- ""
- "Configuration updated: {{ 'Yes' if config_updated.changed else 'No changes needed' }}"
- "Gitea restarted: {{ 'Yes' if config_updated.changed else 'No' }}"

View File

@@ -0,0 +1,49 @@
---
- name: Remove DEFAULT_ACTIONS_URL from Gitea configuration
hosts: production
become: no
gather_facts: yes
tasks:
- name: Check if DEFAULT_ACTIONS_URL exists in app.ini
shell: |
docker exec gitea cat /data/gitea/conf/app.ini 2>/dev/null | grep -q "DEFAULT_ACTIONS_URL" && echo "exists" || echo "not_found"
register: url_check
changed_when: false
ignore_errors: yes
- name: Remove DEFAULT_ACTIONS_URL from app.ini
shell: |
docker exec gitea sh -c 'sed -i "/^DEFAULT_ACTIONS_URL/d" /data/gitea/conf/app.ini'
when: url_check.stdout == "exists"
register: url_removed
- name: Restart Gitea to apply configuration changes
shell: |
cd {{ stacks_base_path }}/gitea
docker compose restart gitea
when: url_removed.changed | default(false)
- name: Wait for Gitea to be ready
wait_for:
timeout: 60
when: url_removed.changed | default(false)
- name: Verify Gitea Actions configuration
shell: |
docker exec gitea cat /data/gitea/conf/app.ini 2>/dev/null | grep -A 3 "\[actions\]" || echo "Config not accessible"
register: gitea_config
changed_when: false
ignore_errors: yes
- name: Display Gitea Actions configuration
debug:
msg:
- "=== Gitea Configuration Fix Complete ==="
- "DEFAULT_ACTIONS_URL removed: {{ 'Yes' if url_removed.changed else 'No (not found or already removed)' }}"
- "Container restarted: {{ 'Yes' if url_removed.changed else 'No' }}"
- ""
- "Current Actions configuration:"
- "{{ gitea_config.stdout if gitea_config.stdout else 'Could not read config' }}"
- ""
- "Gitea will now use its own instance for actions by default (no GitHub fallback)."

View File

@@ -0,0 +1,165 @@
---
- name: Remove framework-production Stack from Production Server
hosts: production
become: no
gather_facts: yes
vars:
stack_name: framework-production
stack_path: "~/framework-production"
tasks:
- name: Check if Docker is running
systemd:
name: docker
state: started
register: docker_service
become: yes
- name: Fail if Docker is not running
fail:
msg: "Docker service is not running"
when: docker_service.status.ActiveState != 'active'
- name: Check if framework-production stack directory exists
stat:
path: "{{ stack_path }}"
register: stack_dir
- name: Check if framework-production containers exist (all states)
shell: |
docker ps -a --filter "name={{ stack_name }}" --format "{{ '{{' }}.Names{{ '}}' }}"
args:
executable: /bin/bash
register: all_containers
changed_when: false
failed_when: false
- name: Display all containers found
debug:
msg: "Found containers: {{ all_containers.stdout_lines if all_containers.stdout_lines | length > 0 else 'None' }}"
- name: List all containers to find framework-production related ones
shell: |
docker ps -a --format "{{ '{{' }}.Names{{ '}}' }}\t{{ '{{' }}.Image{{ '}}' }}\t{{ '{{' }}.Status{{ '}}' }}"
args:
executable: /bin/bash
register: all_containers_list
changed_when: false
failed_when: false
- name: Display all containers
debug:
msg: "{{ all_containers_list.stdout_lines }}"
- name: Check for containers with framework-production in name or image
shell: |
docker ps -a --format "{{ '{{' }}.Names{{ '}}' }}" | grep -iE "(framework-production|^db$|^php$|^web$)" || echo ""
args:
executable: /bin/bash
register: matching_containers
changed_when: false
failed_when: false
- name: Check for containers with framework-production images
shell: |
docker ps -a --format "{{ '{{' }}.Names{{ '}}' }}\t{{ '{{' }}.Image{{ '}}' }}" | grep -i "framework-production" | cut -f1 || echo ""
args:
executable: /bin/bash
register: image_based_containers
changed_when: false
failed_when: false
- name: Display found containers
debug:
msg:
- "Containers by name pattern: {{ matching_containers.stdout_lines if matching_containers.stdout_lines | length > 0 else 'None' }}"
- "Containers by image: {{ image_based_containers.stdout_lines if image_based_containers.stdout_lines | length > 0 else 'None' }}"
- name: Stop and remove containers using docker-compose if stack directory exists
shell: |
cd {{ stack_path }}
docker-compose down -v
args:
executable: /bin/bash
when: stack_dir.stat.exists
register: compose_down_result
changed_when: true
ignore_errors: yes
- name: Stop and remove containers by name pattern and image
shell: |
REMOVED_CONTAINERS=""
# Method 1: Remove containers with framework-production in image name
while IFS=$'\t' read -r container image; do
if [[ "$image" == *"framework-production"* ]]; then
echo "Stopping and removing container '$container' (image: $image)"
docker stop "$container" 2>/dev/null || true
docker rm "$container" 2>/dev/null || true
REMOVED_CONTAINERS="$REMOVED_CONTAINERS $container"
fi
done < <(docker ps -a --format "{{ '{{' }}.Names{{ '}}' }}\t{{ '{{' }}.Image{{ '}}' }}")
# Method 2: Remove containers with specific names that match the pattern
for container_name in "db" "php" "web"; do
# Check if container exists and has framework-production image
container_info=$(docker ps -a --filter "name=^${container_name}$" --format "{{ '{{' }}.Names{{ '}}' }}\t{{ '{{' }}.Image{{ '}}' }}" 2>/dev/null || echo "")
if [[ -n "$container_info" ]]; then
image=$(echo "$container_info" | cut -f2)
if [[ "$image" == *"framework-production"* ]] || [[ "$image" == *"mariadb"* ]]; then
echo "Stopping and removing container '$container_name' (image: $image)"
docker stop "$container_name" 2>/dev/null || true
docker rm "$container_name" 2>/dev/null || true
REMOVED_CONTAINERS="$REMOVED_CONTAINERS $container_name"
fi
fi
done
# Method 3: Remove containers with framework-production in name
docker ps -a --format "{{ '{{' }}.Names{{ '}}' }}" | grep -i framework-production | while read container; do
if [ ! -z "$container" ]; then
echo "Stopping and removing container '$container'"
docker stop "$container" 2>/dev/null || true
docker rm "$container" 2>/dev/null || true
REMOVED_CONTAINERS="$REMOVED_CONTAINERS $container"
fi
done
# Output removed containers
if [[ -n "$REMOVED_CONTAINERS" ]]; then
echo "Removed containers:$REMOVED_CONTAINERS"
else
echo "No containers were removed"
fi
args:
executable: /bin/bash
register: direct_remove_result
changed_when: "'Removed containers' in direct_remove_result.stdout"
failed_when: false
- name: Remove stack directory if it exists
file:
path: "{{ stack_path }}"
state: absent
when: stack_dir.stat.exists
register: dir_removed
- name: Verify all framework-production containers are removed
shell: |
docker ps -a --format "{{ '{{' }}.Names{{ '}}' }}" | grep -i framework-production || echo ""
args:
executable: /bin/bash
register: remaining_containers
changed_when: false
failed_when: false
- name: Display removal status
debug:
msg:
- "=== framework-production Stack Removal Complete ==="
- "Stack directory removed: {{ 'Yes' if dir_removed.changed else 'No (did not exist)' }}"
- "Containers removed: {{ 'Yes' if (compose_down_result.changed or direct_remove_result.changed) else 'No (none found)' }}"
- "Remaining containers: {{ remaining_containers.stdout if remaining_containers.stdout else 'None' }}"
- ""
- "Stack '{{ stack_name }}' has been successfully removed."

View File

@@ -6,7 +6,7 @@
vars:
rollback_to_version: "{{ rollback_to_version | default('previous') }}"
app_stack_path: "{{ deploy_user_home }}/deployment/stacks/application"
# app_stack_path is now defined in group_vars/production.yml
pre_tasks:
- name: Optionally load registry credentials from encrypted vault
@@ -19,8 +19,8 @@
- name: Derive docker registry credentials from vault when not provided
set_fact:
docker_registry_username: "{{ docker_registry_username | default(vault_docker_registry_username | default(docker_registry_username_default | default('admin'))) }}"
docker_registry_password: "{{ docker_registry_password | default(vault_docker_registry_password | default(docker_registry_password_default | default('registry-secure-password-2025'))) }}"
docker_registry_username: "{{ docker_registry_username | default(vault_docker_registry_username | default(docker_registry_username_default)) }}"
docker_registry_password: "{{ docker_registry_password | default(vault_docker_registry_password | default(docker_registry_password_default)) }}"
- name: Check Docker service
systemd:

View File

@@ -11,7 +11,7 @@
vars:
project_root: "{{ lookup('env', 'PWD') | default(playbook_dir + '/../..', true) }}"
ci_image_name: "php-ci:latest"
ci_image_registry: "{{ ci_registry | default('registry.michaelschiemer.de') }}"
ci_image_registry: "{{ ci_registry | default(docker_registry_external) }}"
ci_image_registry_path: "{{ ci_registry }}/ci/php-ci:latest"
gitea_runner_dir: "{{ project_root }}/deployment/gitea-runner"
docker_dind_container: "gitea-runner-dind"

View File

@@ -5,10 +5,17 @@
gather_facts: yes
vars:
stacks_base_path: "~/deployment/stacks"
wait_timeout: 60
# All deployment variables are now defined in group_vars/production.yml
# Variables can be overridden via -e flag if needed
tasks:
- name: Debug - Show variables
debug:
msg:
- "stacks_base_path: {{ stacks_base_path | default('NOT SET') }}"
- "deploy_user_home: {{ deploy_user_home | default('NOT SET') }}"
when: false # Only enable for debugging
- name: Check if deployment stacks directory exists
stat:
path: "{{ stacks_base_path }}"
@@ -83,22 +90,42 @@
# 3. Deploy Docker Registry (Private Registry)
- name: Ensure Registry auth directory exists
file:
path: "{{ stacks_base_path }}/registry/auth"
path: "{{ registry_auth_path }}"
state: directory
mode: '0755'
become: yes
- name: Optionally load registry credentials from vault
include_vars:
file: "{{ playbook_dir }}/../secrets/production.vault.yml"
no_log: yes
ignore_errors: yes
delegate_to: localhost
become: no
- name: Set registry credentials from vault or defaults
set_fact:
registry_username: "{{ vault_docker_registry_username | default(docker_registry_username_default) }}"
registry_password: "{{ vault_docker_registry_password | default(docker_registry_password_default) }}"
no_log: true
- name: Fail if registry password is not set
fail:
msg: "Registry password must be set in vault or docker_registry_password_default"
when: registry_password is not defined or registry_password == ""
- name: Create Registry htpasswd file if missing
shell: |
if [ ! -f {{ stacks_base_path }}/registry/auth/htpasswd ]; then
docker run --rm --entrypoint htpasswd httpd:2 -Bbn admin registry-secure-password-2025 > {{ stacks_base_path }}/registry/auth/htpasswd
chmod 644 {{ stacks_base_path }}/registry/auth/htpasswd
if [ ! -f {{ registry_auth_path }}/htpasswd ]; then
docker run --rm --entrypoint htpasswd httpd:2 -Bbn {{ registry_username }} {{ registry_password }} > {{ registry_auth_path }}/htpasswd
chmod 644 {{ registry_auth_path }}/htpasswd
fi
args:
executable: /bin/bash
become: yes
changed_when: true
register: registry_auth_created
no_log: true
- name: Deploy Docker Registry stack
community.docker.docker_compose_v2:
@@ -126,19 +153,95 @@
- name: Verify Registry is accessible
uri:
url: "http://127.0.0.1:5000/v2/_catalog"
user: admin
password: registry-secure-password-2025
user: "{{ registry_username }}"
password: "{{ registry_password }}"
status_code: 200
timeout: 5
register: registry_check
ignore_errors: yes
changed_when: false
no_log: true
- name: Display Registry status
debug:
msg: "Registry accessibility: {{ 'SUCCESS' if registry_check.status == 200 else 'FAILED - may need manual check' }}"
# 4. Deploy Gitea (CRITICAL - Git Server + MySQL + Redis)
# 4. Deploy MinIO (Object Storage)
- name: Optionally load MinIO secrets from vault
include_vars:
file: "{{ playbook_dir }}/../secrets/production.vault.yml"
no_log: yes
ignore_errors: yes
delegate_to: localhost
become: no
- name: Set MinIO root password from vault or generate
set_fact:
minio_password: "{{ vault_minio_root_password | default(lookup('password', '/dev/null length=32 chars=ascii_letters,digits,punctuation')) }}"
no_log: yes
- name: Set MinIO root user from vault or use default
set_fact:
minio_user: "{{ vault_minio_root_user | default('minioadmin') }}"
- name: Ensure MinIO stack directory exists
file:
path: "{{ stacks_base_path }}/minio"
state: directory
mode: '0755'
- name: Create MinIO stack .env file
template:
src: "{{ playbook_dir }}/../templates/minio.env.j2"
dest: "{{ stacks_base_path }}/minio/.env"
owner: "{{ ansible_user }}"
group: "{{ ansible_user }}"
mode: '0600'
vars:
minio_root_user: "{{ minio_user }}"
minio_root_password: "{{ minio_password }}"
minio_api_domain: "{{ minio_api_domain }}"
minio_console_domain: "{{ minio_console_domain }}"
no_log: yes
- name: Deploy MinIO stack
community.docker.docker_compose_v2:
project_src: "{{ stacks_base_path }}/minio"
state: present
pull: always
register: minio_output
- name: Wait for MinIO to be ready
wait_for:
timeout: "{{ wait_timeout }}"
when: minio_output.changed
- name: Check MinIO logs for readiness
shell: docker compose logs minio 2>&1 | grep -Ei "(API:|WebUI:|MinIO Object Storage Server)" || true
args:
chdir: "{{ stacks_base_path }}/minio"
register: minio_logs
until: minio_logs.stdout != ""
retries: 6
delay: 10
changed_when: false
ignore_errors: yes
- name: Verify MinIO health endpoint
uri:
url: "http://127.0.0.1:9000/minio/health/live"
method: GET
status_code: [200, 404, 502, 503]
timeout: 5
register: minio_health_check
ignore_errors: yes
changed_when: false
- name: Display MinIO status
debug:
msg: "MinIO health check: {{ 'SUCCESS' if minio_health_check.status == 200 else 'FAILED - Status: ' + (minio_health_check.status|string) }}"
# 5. Deploy Gitea (CRITICAL - Git Server + MySQL + Redis)
- name: Deploy Gitea stack
community.docker.docker_compose_v2:
project_src: "{{ stacks_base_path }}/gitea"
@@ -162,7 +265,7 @@
changed_when: false
ignore_errors: yes
# 5. Deploy Monitoring (Portainer + Grafana + Prometheus)
# 6. Deploy Monitoring (Portainer + Grafana + Prometheus)
- name: Optionally load monitoring secrets from vault
include_vars:
file: "{{ playbook_dir }}/../secrets/production.vault.yml"
@@ -229,7 +332,7 @@
- name: Verify Gitea accessibility via HTTPS
uri:
url: https://git.michaelschiemer.de
url: "https://{{ gitea_domain }}"
method: GET
validate_certs: no
status_code: 200
@@ -241,7 +344,7 @@
debug:
msg: "Gitea HTTPS check: {{ 'SUCCESS' if gitea_http_check.status == 200 else 'FAILED - Status: ' + (gitea_http_check.status|string) }}"
# 6. Deploy Application Stack
# 7. Deploy Application Stack
- name: Optionally load application secrets from vault
include_vars:
file: "{{ playbook_dir }}/../secrets/production.vault.yml"
@@ -320,10 +423,10 @@
mode: '0600'
vars:
db_password: "{{ app_db_password }}"
db_user: "{{ db_user | default('postgres') }}"
db_name: "{{ db_name | default('michaelschiemer') }}"
db_user: "{{ db_user | default(db_user_default) }}"
db_name: "{{ db_name | default(db_name_default) }}"
redis_password: "{{ app_redis_password }}"
app_domain: "{{ app_domain | default('michaelschiemer.de') }}"
app_domain: "{{ app_domain }}"
no_log: yes
- name: Deploy Application stack
@@ -391,7 +494,7 @@
- name: Verify application accessibility via HTTPS
uri:
url: "https://{{ app_domain | default('michaelschiemer.de') }}/health"
url: "{{ health_check_url }}"
method: GET
validate_certs: no
status_code: [200, 404, 502, 503]
@@ -412,13 +515,14 @@
- "Traefik: {{ 'Deployed' if traefik_output.changed else 'Already running' }}"
- "PostgreSQL: {{ 'Deployed' if postgres_output.changed else 'Already running' }}"
- "Docker Registry: {{ 'Deployed' if registry_output.changed else 'Already running' }}"
- "MinIO: {{ 'Deployed' if minio_output.changed else 'Already running' }}"
- "Gitea: {{ 'Deployed' if gitea_output.changed else 'Already running' }}"
- "Monitoring: {{ 'Deployed' if monitoring_output.changed else 'Already running' }}"
- "Application: {{ 'Deployed' if application_output.changed else 'Already running' }}"
- ""
- "Next Steps:"
- "1. Access Gitea at: https://git.michaelschiemer.de"
- "1. Access Gitea at: https://{{ gitea_domain }}"
- "2. Complete Gitea setup wizard if first-time deployment"
- "3. Navigate to Admin > Actions > Runners to get registration token"
- "4. Continue with Phase 1 - Gitea Runner Setup"
- "5. Access Application at: https://{{ app_domain | default('michaelschiemer.de') }}"
- "5. Access Application at: https://{{ app_domain }}"

View File

@@ -5,10 +5,9 @@
gather_facts: yes
vars:
domains:
- git.michaelschiemer.de
- michaelschiemer.de
acme_email: kontakt@michaelschiemer.de
# ssl_domains and acme_email are defined in group_vars/production.yml
# Can be overridden via -e flag if needed
domains: "{{ ssl_domains | default([gitea_domain, app_domain]) }}"
tasks:
- name: Check if acme.json exists and is a file
@@ -70,7 +69,7 @@
- name: Check if acme.json contains certificates
stat:
path: "{{ deploy_user_home }}/deployment/stacks/traefik/acme.json"
path: "{{ stacks_base_path }}/traefik/acme.json"
register: acme_file
- name: Display certificate status
@@ -79,8 +78,9 @@
Certificate setup triggered.
Traefik will request Let's Encrypt certificates for:
{{ domains | join(', ') }}
ACME Email: {{ acme_email }}
Check Traefik logs to see certificate generation progress:
docker compose -f {{ deploy_user_home }}/deployment/stacks/traefik/docker-compose.yml logs traefik | grep -i acme
docker compose -f {{ stacks_base_path }}/traefik/docker-compose.yml logs traefik | grep -i acme
Certificates should be ready within 1-2 minutes.

View File

@@ -5,20 +5,13 @@
gather_facts: yes
vars:
wireguard_interface: "wg0"
wireguard_config_path: "/etc/wireguard"
wireguard_config_file: "{{ wireguard_config_path }}/{{ wireguard_interface }}.conf"
wireguard_private_key_file: "{{ wireguard_config_path }}/{{ wireguard_interface }}_private.key"
wireguard_public_key_file: "{{ wireguard_config_path }}/{{ wireguard_interface }}_public.key"
wireguard_client_configs_path: "{{ wireguard_config_path }}/clients"
wireguard_enable_ip_forwarding: true
# WireGuard variables are defined in group_vars/production.yml
# Can be overridden via -e flag if needed
wireguard_port: "{{ wireguard_port | default(wireguard_port_default) }}"
wireguard_network: "{{ wireguard_network | default(wireguard_network_default) }}"
wireguard_server_ip: "{{ wireguard_server_ip | default(wireguard_server_ip_default) }}"
pre_tasks:
- name: Set WireGuard variables with defaults
set_fact:
wireguard_port: "{{ wireguard_port | default(51820) }}"
wireguard_network: "{{ wireguard_network | default('10.8.0.0/24') }}"
wireguard_server_ip: "{{ wireguard_server_ip | default('10.8.0.1') }}"
- name: Optionally load wireguard secrets from vault
include_vars:

View File

@@ -0,0 +1,102 @@
---
- name: Sync Code from Git Repository to Application Container
hosts: production
gather_facts: yes
become: no
vars:
# git_repository_url and git_branch are defined in group_vars/production.yml
# Can be overridden via -e flag if needed
git_repository_url: "{{ git_repo_url | default(git_repository_url_default) }}"
git_branch: "{{ git_branch | default(git_branch_default) }}"
pre_tasks:
- name: Optionally load secrets from vault
include_vars:
file: "{{ playbook_dir }}/../secrets/production.vault.yml"
no_log: yes
ignore_errors: yes
delegate_to: localhost
become: no
tasks:
- name: Verify application stack directory exists
stat:
path: "{{ app_stack_path }}"
register: app_stack_dir
- name: Fail if application stack directory doesn't exist
fail:
msg: "Application stack directory not found at {{ app_stack_path }}"
when: not app_stack_dir.stat.exists
- name: Check if docker-compose.yml exists
stat:
path: "{{ app_stack_path }}/docker-compose.yml"
register: compose_file_exists
- name: Fail if docker-compose.yml doesn't exist
fail:
msg: "docker-compose.yml not found. Run setup-infrastructure.yml first."
when: not compose_file_exists.stat.exists
- name: Read current .env file
slurp:
src: "{{ app_stack_path }}/.env"
register: env_file_content
failed_when: false
changed_when: false
- name: Check if Git configuration exists in .env
set_fact:
has_git_config: "{{ env_file_content.content | b64decode | regex_search('GIT_REPOSITORY_URL=') is not none }}"
when: env_file_content.content is defined
- name: Update .env with Git configuration
lineinfile:
path: "{{ app_stack_path }}/.env"
regexp: "{{ item.regex }}"
line: "{{ item.line }}"
state: present
loop:
- { regex: '^GIT_REPOSITORY_URL=', line: 'GIT_REPOSITORY_URL={{ git_repository_url }}' }
- { regex: '^GIT_BRANCH=', line: 'GIT_BRANCH={{ git_branch }}' }
- { regex: '^GIT_TOKEN=', line: 'GIT_TOKEN={{ git_token | default("") }}' }
- { regex: '^GIT_USERNAME=', line: 'GIT_USERNAME={{ git_username | default("") }}' }
- { regex: '^GIT_PASSWORD=', line: 'GIT_PASSWORD={{ git_password | default("") }}' }
when: not has_git_config | default(true)
- name: Restart application container to trigger Git pull
shell: |
cd {{ app_stack_path }}
docker compose restart app
args:
executable: /bin/bash
register: container_restart
- name: Wait for container to be ready
wait_for:
timeout: 60
when: container_restart.changed
- name: Check container logs for Git operations
shell: |
cd {{ app_stack_path }}
docker compose logs app --tail 50 | grep -E "(Git|Clone|Pull|✅|❌)" || echo "No Git-related logs found"
args:
executable: /bin/bash
register: git_logs
changed_when: false
- name: Display Git sync result
debug:
msg:
- "=== Code Sync Summary ==="
- "Repository: {{ git_repository_url }}"
- "Branch: {{ git_branch }}"
- "Container restarted: {{ 'Yes' if container_restart.changed else 'No' }}"
- ""
- "Git Logs:"
- "{{ git_logs.stdout }}"
- ""
- "Next: Check application logs to verify code was synced"
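The `lineinfile` loop above does an idempotent replace-or-append per key. A minimal local sketch of that pattern in plain shell (the temp file and `set_kv` helper are illustrative, not part of the playbook):

```shell
#!/bin/sh
# Replace-or-append a KEY=VALUE line, mirroring Ansible's lineinfile behavior
env_file=$(mktemp)
printf 'GIT_BRANCH=old\n' > "$env_file"

set_kv() {
  if grep -q "^$1=" "$env_file"; then
    sed -i "s|^$1=.*|$1=$2|" "$env_file"   # key exists: replace the line
  else
    echo "$1=$2" >> "$env_file"            # key missing: append it
  fi
}

set_kv GIT_BRANCH main
set_kv GIT_REPOSITORY_URL https://git.example.com/app.git
cat "$env_file"
```

Like `lineinfile` with `state: present`, running it twice with the same values leaves the file unchanged.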

View File

@@ -0,0 +1,62 @@
---
# Check Container Health Status
- name: Check nginx container logs
shell: |
docker logs nginx --tail 50 2>&1
args:
executable: /bin/bash
register: nginx_logs
failed_when: false
- name: Display nginx logs
debug:
msg: "{{ nginx_logs.stdout_lines }}"
- name: Test nginx health check manually
shell: |
docker exec nginx wget --spider -q http://localhost/health 2>&1 || echo "Health check failed"
args:
executable: /bin/bash
register: nginx_health_test
failed_when: false
- name: Display nginx health check result
debug:
msg: "{{ nginx_health_test.stdout }}"
- name: Check queue-worker container logs
shell: |
docker logs queue-worker --tail 50 2>&1
args:
executable: /bin/bash
register: queue_worker_logs
failed_when: false
- name: Display queue-worker logs
debug:
msg: "{{ queue_worker_logs.stdout_lines }}"
- name: Check scheduler container logs
shell: |
docker logs scheduler --tail 50 2>&1
args:
executable: /bin/bash
register: scheduler_logs
failed_when: false
- name: Display scheduler logs
debug:
msg: "{{ scheduler_logs.stdout_lines }}"
- name: Check container status
shell: |
cd {{ app_stack_path }}
docker compose ps
args:
executable: /bin/bash
register: container_status
failed_when: false
- name: Display container status
debug:
msg: "{{ container_status.stdout_lines }}"

View File

@@ -0,0 +1,73 @@
---
# Diagnose 404 Errors
- name: Check nginx logs for errors
shell: |
docker logs nginx --tail 100 2>&1
args:
executable: /bin/bash
register: nginx_logs
failed_when: false
- name: Display nginx logs
debug:
msg: "{{ nginx_logs.stdout_lines }}"
- name: Check app container logs
shell: |
docker logs app --tail 100 2>&1
args:
executable: /bin/bash
register: app_logs
failed_when: false
- name: Display app container logs
debug:
msg: "{{ app_logs.stdout_lines }}"
- name: Test nginx health endpoint directly
shell: |
docker exec nginx wget -q -O - http://127.0.0.1/health 2>&1 || echo "Health check failed"
args:
executable: /bin/bash
register: nginx_health_test
failed_when: false
- name: Display nginx health check result
debug:
msg: "{{ nginx_health_test.stdout }}"
- name: Check nginx configuration
shell: |
docker exec nginx cat /etc/nginx/conf.d/default.conf 2>&1
args:
executable: /bin/bash
register: nginx_config
failed_when: false
- name: Display nginx configuration
debug:
msg: "{{ nginx_config.stdout_lines }}"
- name: Check if app container has files in /var/www/html
shell: |
docker exec app ls -la /var/www/html/ 2>&1 | head -20
args:
executable: /bin/bash
register: app_files
failed_when: false
- name: Display app container files
debug:
msg: "{{ app_files.stdout_lines }}"
- name: Check container network connectivity
shell: |
docker exec nginx ping -c 1 app 2>&1 | head -5
args:
executable: /bin/bash
register: network_check
failed_when: false
- name: Display network connectivity
debug:
msg: "{{ network_check.stdout }}"

View File

@@ -0,0 +1,71 @@
---
# Fix Container Health Checks
- name: Check if application stack directory exists
stat:
path: "{{ app_stack_path }}"
register: app_stack_dir
- name: Fail if application stack directory doesn't exist
fail:
msg: "Application stack directory not found at {{ app_stack_path }}"
when: not app_stack_dir.stat.exists
- name: Copy updated docker-compose.yml to production
copy:
src: "{{ playbook_dir }}/../../stacks/application/docker-compose.yml"
dest: "{{ app_stack_path }}/docker-compose.yml"
owner: "{{ ansible_user }}"
group: "{{ ansible_user }}"
mode: '0644'
register: compose_updated
- name: Recreate containers with new health checks
shell: |
cd {{ app_stack_path }}
docker compose up -d --force-recreate nginx queue-worker scheduler
args:
executable: /bin/bash
when: compose_updated.changed
register: containers_recreated
- name: Wait for containers to be healthy
shell: |
cd {{ app_stack_path }}
timeout=120
elapsed=0
while [ $elapsed -lt $timeout ]; do
healthy=$(docker compose ps --format json | jq -r '[.[] | select(.Name=="nginx" or .Name=="queue-worker" or .Name=="scheduler") | .Health] | all(.=="healthy" or .=="")')
if [ "$healthy" = "true" ]; then
echo "All containers are healthy"
exit 0
fi
sleep 5
elapsed=$((elapsed + 5))
done
echo "Timeout waiting for containers to become healthy"
docker compose ps
exit 1
args:
executable: /bin/bash
register: health_wait
failed_when: false
changed_when: false
- name: Check final container status
shell: |
cd {{ app_stack_path }}
docker compose ps
args:
executable: /bin/bash
register: final_status
- name: Display final container status
debug:
msg: "{{ final_status.stdout_lines }}"
- name: Display summary
debug:
msg:
- "=== Health Check Fix Complete ==="
- "Containers recreated: {{ 'Yes' if containers_recreated.changed else 'No (no changes)' }}"
- "Health wait result: {{ 'SUCCESS' if health_wait.rc == 0 else 'TIMEOUT or ERROR - check logs' }}"
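The jq filter in the wait loop above can be exercised locally against sample JSON. Note this sketch assumes `docker compose ps --format json` emits a JSON array; newer Compose releases emit one object per line, in which case the stream would need to be wrapped with `jq -s` first.

```shell
#!/bin/sh
# Feed a sample container listing through the same jq filter used by the playbook
sample='[{"Name":"nginx","Health":"healthy"},{"Name":"scheduler","Health":""},{"Name":"postgres","Health":"starting"}]'
echo "$sample" | jq -r '[.[] | select(.Name=="nginx" or .Name=="queue-worker" or .Name=="scheduler") | .Health] | all(.=="healthy" or .=="")'
# -> true (postgres is filtered out; nginx is healthy, scheduler reports no health check)
```

An empty `Health` string counts as passing here, which is what lets containers without a healthcheck (like the scheduler) satisfy the wait.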

View File

@@ -0,0 +1,71 @@
---
# Fix Nginx 404 by setting up shared app-code volume
- name: Check if application stack directory exists
stat:
path: "{{ app_stack_path }}"
register: app_stack_dir
- name: Fail if application stack directory doesn't exist
fail:
msg: "Application stack directory not found at {{ app_stack_path }}"
when: not app_stack_dir.stat.exists
- name: Copy updated docker-compose.yml to production
copy:
src: "{{ playbook_dir }}/../../stacks/application/docker-compose.yml"
dest: "{{ app_stack_path }}/docker-compose.yml"
owner: "{{ ansible_user }}"
group: "{{ ansible_user }}"
mode: '0644'
register: compose_updated
- name: Initialize app-code volume with files from app image
shell: |
# Stop containers first
cd {{ app_stack_path }}
docker compose down nginx || true
# Create and initialize app-code volume
docker volume create app-code || true
# Copy files from app image to volume using temporary container
docker run --rm \
-v app-code:/target \
{{ app_image_external }}:latest \
sh -c "cp -r /var/www/html/* /target/ 2>/dev/null || true"
args:
executable: /bin/bash
register: volume_init
changed_when: true
failed_when: false
- name: Start containers
shell: |
cd {{ app_stack_path }}
docker compose up -d
args:
executable: /bin/bash
register: containers_started
- name: Wait for containers to be healthy
pause:
seconds: 15
- name: Check container status
shell: |
cd {{ app_stack_path }}
docker compose ps
args:
executable: /bin/bash
register: final_status
- name: Display container status
debug:
msg: "{{ final_status.stdout_lines }}"
- name: Display summary
debug:
msg:
- "=== Nginx 404 Fix Complete ==="
- "Volume initialized: {{ 'Yes' if volume_init.changed else 'No' }}"
- "Containers restarted: {{ 'Yes' if containers_started.changed else 'No' }}"

View File

@@ -0,0 +1,47 @@
---
- name: Application Troubleshooting
hosts: production
gather_facts: yes
become: no
# All variables are now defined in group_vars/production.yml
tasks:
- name: Check container health
include_tasks: tasks/check-health.yml
tags: ['health', 'check', 'all']
- name: Diagnose 404 errors
include_tasks: tasks/diagnose-404.yml
tags: ['404', 'diagnose', 'all']
- name: Fix container health checks
include_tasks: tasks/fix-health-checks.yml
tags: ['health', 'fix', 'all']
- name: Fix nginx 404
include_tasks: tasks/fix-nginx-404.yml
tags: ['nginx', '404', 'fix', 'all']
- name: Display usage information
debug:
msg:
- "=== Troubleshooting Playbook ==="
- ""
- "Usage examples:"
- " # Check health only:"
- " ansible-playbook troubleshoot.yml --tags health,check"
- ""
- " # Diagnose 404 only:"
- " ansible-playbook troubleshoot.yml --tags 404,diagnose"
- ""
- " # Fix health checks:"
- " ansible-playbook troubleshoot.yml --tags health,fix"
- ""
- " # Fix nginx 404:"
- " ansible-playbook troubleshoot.yml --tags nginx,404,fix"
- ""
- " # Run all checks:"
- " ansible-playbook troubleshoot.yml --tags all"
when: true
tags: ['never']

View File

@@ -0,0 +1,49 @@
---
- name: Update Gitea Configuration and Restart
hosts: production
become: no
gather_facts: yes
vars:
gitea_stack_path: "{{ stacks_base_path }}/gitea"
tasks:
- name: Copy updated docker-compose.yml to production server
copy:
src: "{{ playbook_dir }}/../../stacks/gitea/docker-compose.yml"
dest: "{{ gitea_stack_path }}/docker-compose.yml"
owner: "{{ ansible_user }}"
group: "{{ ansible_user }}"
mode: '0644'
- name: Restart Gitea stack with updated configuration
community.docker.docker_compose_v2:
project_src: "{{ gitea_stack_path }}"
state: present
pull: never
recreate: always
remove_orphans: no
register: gitea_restart
- name: Wait for Gitea to be ready
wait_for:
timeout: 60
when: gitea_restart.changed
- name: Verify Gitea Actions configuration
shell: |
docker exec gitea cat /data/gitea/conf/app.ini 2>/dev/null | grep -A 3 "\[actions\]" || echo "Config not accessible"
register: gitea_config
changed_when: false
ignore_errors: yes
- name: Display Gitea Actions configuration
debug:
msg:
- "=== Gitea Configuration Update Complete ==="
- "Container restarted: {{ 'Yes' if gitea_restart.changed else 'No' }}"
- ""
- "Current Actions configuration:"
- "{{ gitea_config.stdout if gitea_config.stdout else 'Could not read config (container may still be starting)' }}"
- ""
- "The DEFAULT_ACTIONS_URL should now point to your Gitea instance instead of GitHub."

View File

@@ -0,0 +1,41 @@
#!/bin/bash
# Script to set Git credentials in production .env file
set -e
cd "$(dirname "$0")/.."
echo "Setting Git credentials in production .env file..."
echo ""
echo "Choose authentication method:"
echo "1) Personal Access Token (recommended)"
echo "2) Username/Password"
read -p "Enter choice (1 or 2): " choice
case $choice in
1)
read -p "Enter Gitea Personal Access Token: " token
ansible production -i inventory/production.yml -m lineinfile \
-a "path=~/deployment/stacks/application/.env regexp='^GIT_TOKEN=' line='GIT_TOKEN=$token' state=present" 2>&1
echo "✅ GIT_TOKEN set successfully"
;;
2)
read -p "Enter Gitea Username: " username
read -s -p "Enter Gitea Password: " password
echo ""
ansible production -i inventory/production.yml -m lineinfile \
-a "path=~/deployment/stacks/application/.env regexp='^GIT_USERNAME=' line='GIT_USERNAME=$username' state=present" 2>&1
ansible production -i inventory/production.yml -m lineinfile \
-a "path=~/deployment/stacks/application/.env regexp='^GIT_PASSWORD=' line='GIT_PASSWORD=$password' state=present" 2>&1
echo "✅ GIT_USERNAME and GIT_PASSWORD set successfully"
;;
*)
echo "❌ Invalid choice"
exit 1
;;
esac
echo ""
echo "Next steps:"
echo "1. Restart the nginx container: docker compose restart nginx"
echo "2. Check logs: docker compose logs nginx"

View File

@@ -21,6 +21,13 @@ vault_mail_password: "change-me-mail-password"
vault_docker_registry_username: "gitea-user"
vault_docker_registry_password: "change-me-registry-password"
# Git Repository Credentials (for code cloning in containers)
# Option 1: Use Personal Access Token (recommended)
vault_git_token: "change-me-gitea-personal-access-token"
# Option 2: Use username/password (less secure)
# vault_git_username: "your-gitea-username"
# vault_git_password: "your-gitea-password"
# Optional: Additional Secrets
vault_encryption_key: "change-me-encryption-key"
vault_session_secret: "change-me-session-secret"
@@ -28,3 +35,7 @@ vault_session_secret: "change-me-session-secret"
# Monitoring Stack Credentials
vault_grafana_admin_password: "change-me-secure-grafana-password"
vault_prometheus_password: "change-me-secure-prometheus-password"
# MinIO Object Storage Credentials
vault_minio_root_user: "minioadmin"
vault_minio_root_password: "change-me-secure-minio-password"

View File

@@ -0,0 +1,20 @@
#!/bin/bash
# Script to show nginx volumes configuration from docker compose
set -e
cd "$(dirname "$0")"
echo "Fetching nginx volumes configuration..."
echo ""
# Solution 1: Use sed to extract the full nginx block, then grep for volumes
timeout 90 ansible production -i inventory/production.yml -m shell -a \
"cd ~/deployment/stacks/application && docker compose config 2>&1 | sed -n '/^ nginx:/,/^ [a-z]/p' | grep -A 10 'volumes:'" \
2>&1 || {
echo ""
echo "⚠️ Alternative method: Using larger grep context..."
timeout 90 ansible production -i inventory/production.yml -m shell -a \
"cd ~/deployment/stacks/application && docker compose config 2>&1 | grep -A 50 'nginx:' | grep -A 10 'volumes:'" \
2>&1
}
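The `sed -n '/^  nginx:/,/^  [a-z]/p'` range used above prints everything from the `nginx:` service key up to the next two-space-indented key. A local reproduction against canned compose-style output (sample content only):

```shell
#!/bin/sh
# Extract one service block from compose-style YAML using a sed address range
printf 'services:\n  nginx:\n    image: nginx:alpine\n    volumes:\n      - app-code:/var/www/html:ro\n  app:\n    image: php:8-fpm\n' \
  | sed -n '/^  nginx:/,/^  [a-z]/p'
# Prints from "  nginx:" through the next service key ("  app:")
```

Deeper-indented lines (four or more spaces) never match `^  [a-z]`, so the range only closes at the next sibling service, which is why the playbook then pipes through `grep -A 10 'volumes:'` to trim it down.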

View File

@@ -43,7 +43,7 @@ RATE_LIMIT_WINDOW={{ rate_limit_window | default('60') }}
ADMIN_ALLOWED_IPS={{ admin_allowed_ips | default('127.0.0.1,::1') }}
# App domain
APP_DOMAIN={{ app_domain | default('michaelschiemer.de') }}
APP_DOMAIN={{ app_domain }}
# Production Environment Configuration
# Generated by Ansible - DO NOT EDIT MANUALLY
# Last Updated: {{ ansible_date_time.iso8601 }}

View File

@@ -5,19 +5,19 @@
TZ={{ timezone | default('Europe/Berlin') }}
# Application Domain
APP_DOMAIN={{ app_domain | default('michaelschiemer.de') }}
APP_DOMAIN={{ app_domain }}
# Application Settings
APP_ENV={{ app_env | default('production') }}
APP_DEBUG={{ app_debug | default('false') }}
APP_URL=https://{{ app_domain | default('michaelschiemer.de') }}
APP_URL=https://{{ app_domain }}
# Database Configuration
# Using PostgreSQL from postgres stack
DB_HOST=postgres
DB_PORT={{ db_port | default('5432') }}
DB_NAME={{ db_name | default('michaelschiemer') }}
DB_USER={{ db_user | default('postgres') }}
DB_NAME={{ db_name | default(db_name_default) }}
DB_USER={{ db_user | default(db_user_default) }}
DB_PASS={{ db_password }}
# Redis Configuration
@@ -38,3 +38,10 @@ QUEUE_CONNECTION={{ queue_connection | default('default') }}
QUEUE_WORKER_SLEEP={{ queue_worker_sleep | default('3') }}
QUEUE_WORKER_TRIES={{ queue_worker_tries | default('3') }}
QUEUE_WORKER_TIMEOUT={{ queue_worker_timeout | default('60') }}
# Git Repository Configuration (optional - if set, container will clone/pull code on start)
GIT_REPOSITORY_URL={{ git_repository_url | default('') }}
GIT_BRANCH={{ git_branch | default('main') }}
GIT_TOKEN={{ git_token | default('') }}
GIT_USERNAME={{ git_username | default('') }}
GIT_PASSWORD={{ git_password | default('') }}

View File

@@ -0,0 +1,81 @@
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; Gitea Configuration File
;; Generated by Ansible - DO NOT EDIT MANUALLY
;; This file is based on the official Gitea example configuration
;; https://github.com/go-gitea/gitea/blob/main/custom/conf/app.example.ini
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; General Settings
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
APP_NAME = Gitea: Git with a cup of tea
RUN_MODE = prod
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; Server Configuration
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
[server]
PROTOCOL = http
DOMAIN = {{ gitea_domain }}
HTTP_ADDR = 0.0.0.0
HTTP_PORT = 3000
ROOT_URL = https://{{ gitea_domain }}/
PUBLIC_URL_DETECTION = auto
;; SSH Configuration
DISABLE_SSH = false
START_SSH_SERVER = true
SSH_DOMAIN = {{ gitea_domain }}
SSH_PORT = 22
SSH_LISTEN_PORT = 22
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; Database Configuration
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
[database]
DB_TYPE = postgres
HOST = postgres:5432
NAME = {{ postgres_db | default('gitea') }}
USER = {{ postgres_user | default('gitea') }}
PASSWD = {{ postgres_password | default('gitea_password') }}
SSL_MODE = disable
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; Cache Configuration
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
[cache]
ENABLED = false
ADAPTER = memory
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; Session Configuration
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
[session]
PROVIDER = file
PROVIDER_CONFIG = data/sessions
COOKIE_SECURE = true
COOKIE_NAME = i_like_gitea
GC_INTERVAL_TIME = 86400
SESSION_LIFE_TIME = 86400
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; Queue Configuration
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
[queue]
TYPE = channel
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; Service Configuration
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
[service]
DISABLE_REGISTRATION = {{ disable_registration | default(true) | lower }}
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; Actions Configuration
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
[actions]
ENABLED = true
;; Use "self" to use the current Gitea instance for actions (not GitHub)
;; Do NOT set DEFAULT_ACTIONS_URL to a custom URL - it's not supported
;; Leaving it unset or setting to "self" will use the current instance
;DEFAULT_ACTIONS_URL = self

View File

@@ -0,0 +1,16 @@
# MinIO Object Storage Stack Environment Configuration
# Generated by Ansible - DO NOT EDIT MANUALLY
# Timezone
TZ={{ timezone | default('Europe/Berlin') }}
# MinIO Root Credentials
MINIO_ROOT_USER={{ minio_root_user }}
MINIO_ROOT_PASSWORD={{ minio_root_password }}
# Domain Configuration
# API endpoint (S3-compatible)
MINIO_API_DOMAIN={{ minio_api_domain }}
# Console endpoint (Web UI)
MINIO_CONSOLE_DOMAIN={{ minio_console_domain }}

View File

@@ -2,7 +2,7 @@
# Generated by Ansible - DO NOT EDIT MANUALLY
# Domain Configuration
DOMAIN={{ app_domain | default('michaelschiemer.de') }}
DOMAIN={{ app_domain }}
# Grafana Configuration
GRAFANA_ADMIN_USER={{ grafana_admin_user | default('admin') }}

View File

@@ -0,0 +1,17 @@
#!/bin/bash
# Test script to check nginx volumes configuration with timeout
set -e
cd "$(dirname "$0")"
echo "Testing nginx volumes configuration..."
echo "Timeout: 90 seconds"
echo ""
timeout 90 ansible production -i inventory/production.yml -m shell -a "cd ~/deployment/stacks/application && docker compose config 2>&1 | sed -n '/^ nginx:/,/^ [a-z]/p' | grep -A 10 'volumes:'" 2>&1 || {
echo ""
echo "⚠️ Command timed out or failed!"
echo "Checking if docker compose is working..."
timeout 30 ansible production -i inventory/production.yml -m shell -a "cd ~/deployment/stacks/application && docker compose config 2>&1 | head -5" || true
}

View File

@@ -13,7 +13,9 @@ GITEA_RUNNER_NAME=dev-runner-01
# Runner Labels (comma-separated)
# Format: label:image
# Example: ubuntu-latest:docker://node:16-bullseye
GITEA_RUNNER_LABELS=ubuntu-latest:docker://node:16-bullseye,ubuntu-22.04:docker://node:16-bullseye,debian-latest:docker://debian:bullseye
# php-ci: Uses optimized CI image with PHP 8.5, Composer, Ansible pre-installed
# Build the image first: ./build-ci-image.sh
GITEA_RUNNER_LABELS=ubuntu-latest:docker://node:16-bullseye,ubuntu-22.04:docker://node:16-bullseye,debian-latest:docker://debian:bullseye,php-ci:docker://php-ci:latest
# Optional: Custom Docker registry for job images
# DOCKER_REGISTRY_MIRROR=https://registry.michaelschiemer.de

View File

@@ -28,7 +28,9 @@ services:
networks:
- gitea-runner
- traefik-public # access to the registry and other services
command: ["dockerd", "--host=unix:///var/run/docker.sock", "--host=tcp://0.0.0.0:2375", "--insecure-registry=94.16.110.151:5000", "--insecure-registry=registry.michaelschiemer.de:5000"]
command: ["dockerd", "--host=unix:///var/run/docker.sock", "--host=tcp://0.0.0.0:2375", "--insecure-registry=94.16.110.151:5000", "--insecure-registry=172.25.0.1:5000", "--insecure-registry=registry:5000", "--insecure-registry=host.docker.internal:5000"]
# NOTE: registry.michaelschiemer.de is reached over HTTPS (via Traefik) - NO insecure-registry flag is needed for it!
# The insecure-registry flags are only intended for HTTP fallbacks (port 5000)
networks:
gitea-runner:

View File

@@ -0,0 +1,92 @@
#!/bin/bash
# Complete setup script for PHP CI image and runner registration
# This builds the CI image, loads it into docker-dind, and updates runner labels
set -e
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/../.." && pwd)"
echo "🚀 Setting up PHP CI Image for Gitea Runner"
echo ""
# Step 1: Build CI Image
echo "📦 Step 1: Building CI Docker Image..."
cd "$PROJECT_ROOT"
./deployment/gitea-runner/build-ci-image.sh
# Step 2: Load image into docker-dind
echo ""
echo "📥 Step 2: Loading image into docker-dind..."
IMAGE_NAME="php-ci:latest"
docker save "${IMAGE_NAME}" | docker exec -i gitea-runner-dind docker load
echo ""
echo "✅ Image loaded into docker-dind"
echo ""
# Step 3: Check current .env
echo "📋 Step 3: Checking runner configuration..."
cd "$SCRIPT_DIR"
if [ ! -f .env ]; then
echo "⚠️ .env file not found, copying from .env.example"
cp .env.example .env
echo ""
echo "⚠️ IMPORTANT: Please edit .env and add:"
echo " - GITEA_RUNNER_REGISTRATION_TOKEN"
echo " - Update GITEA_RUNNER_LABELS to include php-ci:docker://php-ci:latest"
echo ""
read -p "Press Enter after updating .env to continue..."
fi
# Check if php-ci label is already in .env
if grep -q "php-ci:docker://php-ci:latest" .env; then
echo "✅ php-ci label already in .env"
else
echo "⚠️ php-ci label not found in .env"
echo ""
read -p "Add php-ci label to GITEA_RUNNER_LABELS? (y/N) " -n 1 -r
echo
if [[ $REPLY =~ ^[Yy]$ ]]; then
# Add php-ci to labels
sed -i 's|GITEA_RUNNER_LABELS=\(.*\)|GITEA_RUNNER_LABELS=\1,php-ci:docker://php-ci:latest|' .env
echo "✅ Added php-ci label to .env"
fi
fi
# Step 4: Re-register runner
echo ""
echo "🔄 Step 4: Re-registering runner with new labels..."
read -p "Re-register runner now? This will unregister the current runner. (y/N) " -n 1 -r
echo
if [[ $REPLY =~ ^[Yy]$ ]]; then
if [ -f ./unregister.sh ]; then
./unregister.sh
fi
if [ -f ./register.sh ]; then
./register.sh
else
echo "⚠️ register.sh not found, please register manually"
fi
else
echo ""
echo " To register manually:"
echo " cd deployment/gitea-runner"
echo " ./unregister.sh"
echo " ./register.sh"
fi
echo ""
echo "✅ Setup complete!"
echo ""
echo "📝 Summary:"
echo " - CI Image built: php-ci:latest"
echo " - Image loaded into docker-dind"
echo " - Runner labels updated (restart runner to apply)"
echo ""
echo "🎯 Next steps:"
echo " 1. Verify runner is registered with php-ci label in Gitea UI"
echo " 2. Test workflows using runs-on: php-ci"
echo ""
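The `sed -i` label update in step 3 is append-once in spirit; the same logic on a plain string, runnable without touching any `.env`:

```shell
#!/bin/sh
# Append the php-ci label exactly once; leave the line alone if it is present.
line='GITEA_RUNNER_LABELS=ubuntu-latest:docker://node:16-bullseye'
new='php-ci:docker://php-ci:latest'
case "$line" in
  *"$new"*) updated="$line" ;;            # already there - keep as-is
  *)        updated="${line},${new}" ;;   # append once
esac
echo "$updated"
# -> GITEA_RUNNER_LABELS=ubuntu-latest:docker://node:16-bullseye,php-ci:docker://php-ci:latest
```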

View File

@@ -1,284 +0,0 @@
#!/bin/bash
set -e
# Deploy Application to Production
# This script deploys application updates using Ansible
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
DEPLOYMENT_DIR="$(dirname "$SCRIPT_DIR")"
ANSIBLE_DIR="$DEPLOYMENT_DIR/ansible"
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color
# Function to print colored messages
print_success() {
echo -e "${GREEN}$1${NC}"
}
print_error() {
echo -e "${RED}$1${NC}"
}
print_warning() {
echo -e "${YELLOW}⚠️ $1${NC}"
}
print_info() {
echo -e "${BLUE} $1${NC}"
}
# Show usage
usage() {
echo "Usage: $0 <image-tag> [options]"
echo ""
echo "Arguments:"
echo " image-tag Docker image tag to deploy (e.g., sha-abc123, v1.2.3, latest)"
echo ""
echo "Options:"
echo " --registry-user USER Docker registry username (default: from vault)"
echo " --registry-pass PASS Docker registry password (default: from vault)"
echo " --commit SHA Git commit SHA (default: auto-detect)"
echo " --dry-run Run in check mode without making changes"
echo " --help Show this help message"
echo ""
echo "Examples:"
echo " $0 sha-abc123"
echo " $0 v1.2.3 --commit abc123def"
echo " $0 latest --dry-run"
exit 1
}
# Parse arguments
IMAGE_TAG=""
REGISTRY_USER=""
REGISTRY_PASS=""
GIT_COMMIT=""
DRY_RUN=""
while [[ $# -gt 0 ]]; do
case $1 in
--registry-user)
REGISTRY_USER="$2"
shift 2
;;
--registry-pass)
REGISTRY_PASS="$2"
shift 2
;;
--commit)
GIT_COMMIT="$2"
shift 2
;;
--dry-run)
DRY_RUN="--check"
shift
;;
--help)
usage
;;
*)
if [ -z "$IMAGE_TAG" ]; then
IMAGE_TAG="$1"
shift
else
print_error "Unknown argument: $1"
usage
fi
;;
esac
done
# Validate image tag
if [ -z "$IMAGE_TAG" ]; then
print_error "Image tag is required"
usage
fi
echo ""
echo "🚀 Deploy Application to Production"
echo "===================================="
echo ""
# Check if running from correct directory
if [ ! -f "$ANSIBLE_DIR/ansible.cfg" ]; then
print_error "Error: Must run from deployment/scripts directory"
exit 1
fi
cd "$ANSIBLE_DIR"
# Auto-detect git commit if not provided
if [ -z "$GIT_COMMIT" ]; then
if command -v git &> /dev/null && [ -d "$(git rev-parse --git-dir 2>/dev/null)" ]; then
GIT_COMMIT=$(git rev-parse --short HEAD)
print_info "Auto-detected git commit: $GIT_COMMIT"
else
GIT_COMMIT="unknown"
print_warning "Could not auto-detect git commit"
fi
fi
# Generate deployment timestamp
DEPLOYMENT_TIMESTAMP=$(date -u +%Y-%m-%dT%H:%M:%SZ)
# Check Prerequisites
echo "Checking Prerequisites..."
echo "------------------------"
# Check Ansible
if ! command -v ansible &> /dev/null; then
print_error "Ansible is not installed"
exit 1
fi
print_success "Ansible installed"
# Check vault password
if [ ! -f "$ANSIBLE_DIR/secrets/.vault_pass" ]; then
print_error "Vault password file not found"
exit 1
fi
print_success "Vault password file found"
# Check playbook
if [ ! -f "$ANSIBLE_DIR/playbooks/deploy-update.yml" ]; then
print_error "Deploy playbook not found"
exit 1
fi
print_success "Deploy playbook found"
# Test connection
print_info "Testing connection to production..."
if ansible production -m ping > /dev/null 2>&1; then
print_success "Connection successful"
else
print_error "Connection to production failed"
exit 1
fi
echo ""
# Deployment Summary
echo "Deployment Summary"
echo "-----------------"
echo " Image Tag: $IMAGE_TAG"
echo " Git Commit: $GIT_COMMIT"
echo " Timestamp: $DEPLOYMENT_TIMESTAMP"
echo " Registry: git.michaelschiemer.de:5000"
if [ -n "$DRY_RUN" ]; then
echo " Mode: DRY RUN (no changes will be made)"
else
echo " Mode: PRODUCTION DEPLOYMENT"
fi
echo ""
# Confirmation
if [ -z "$DRY_RUN" ]; then
read -p "Proceed with deployment? (y/N): " -n 1 -r
echo
if [[ ! $REPLY =~ ^[Yy]$ ]]; then
print_warning "Deployment cancelled"
exit 0
fi
fi
echo ""
# Build ansible-playbook command
ANSIBLE_CMD="ansible-playbook $ANSIBLE_DIR/playbooks/deploy-update.yml"
ANSIBLE_CMD="$ANSIBLE_CMD --vault-password-file $ANSIBLE_DIR/secrets/.vault_pass"
ANSIBLE_CMD="$ANSIBLE_CMD -e image_tag=$IMAGE_TAG"
ANSIBLE_CMD="$ANSIBLE_CMD -e git_commit_sha=$GIT_COMMIT"
ANSIBLE_CMD="$ANSIBLE_CMD -e deployment_timestamp=$DEPLOYMENT_TIMESTAMP"
# Add registry credentials if provided
if [ -n "$REGISTRY_USER" ]; then
ANSIBLE_CMD="$ANSIBLE_CMD -e docker_registry_username=$REGISTRY_USER"
fi
if [ -n "$REGISTRY_PASS" ]; then
ANSIBLE_CMD="$ANSIBLE_CMD -e docker_registry_password=$REGISTRY_PASS"
fi
# Add dry-run flag if set
if [ -n "$DRY_RUN" ]; then
ANSIBLE_CMD="$ANSIBLE_CMD $DRY_RUN"
fi
# Execute deployment
print_info "Starting deployment..."
echo ""
if eval "$ANSIBLE_CMD"; then
echo ""
if [ -z "$DRY_RUN" ]; then
print_success "Deployment completed successfully!"
else
print_success "Dry run completed successfully!"
fi
else
echo ""
print_error "Deployment failed!"
exit 1
fi
echo ""
# Post-deployment checks
if [ -z "$DRY_RUN" ]; then
echo "Post-Deployment Checks"
echo "---------------------"
SSH_KEY="$HOME/.ssh/production"
# Check service status
print_info "Checking service status..."
ssh -i "$SSH_KEY" deploy@94.16.110.151 "docker service ls --filter name=app_" || true
echo ""
# Show recent logs
print_info "Recent application logs:"
ssh -i "$SSH_KEY" deploy@94.16.110.151 "docker service logs --tail 20 app_app" || true
echo ""
# Health check URL
HEALTH_URL="https://michaelschiemer.de/health"
print_info "Health check URL: $HEALTH_URL"
echo ""
# Summary
echo "✅ Deployment Complete!"
echo "======================"
echo ""
echo "Deployed:"
echo " Image: git.michaelschiemer.de:5000/framework:$IMAGE_TAG"
echo " Commit: $GIT_COMMIT"
echo " Timestamp: $DEPLOYMENT_TIMESTAMP"
echo ""
echo "Next Steps:"
echo ""
echo "1. Monitor application:"
echo " ssh -i $SSH_KEY deploy@94.16.110.151 'docker service logs -f app_app'"
echo ""
echo "2. Check service status:"
echo " ssh -i $SSH_KEY deploy@94.16.110.151 'docker service ps app_app'"
echo ""
echo "3. Test application:"
echo " curl $HEALTH_URL"
echo ""
echo "4. Rollback if needed:"
echo " $SCRIPT_DIR/rollback.sh"
echo ""
else
echo "Dry Run Summary"
echo "==============="
echo ""
echo "This was a dry run. No changes were made to production."
echo ""
echo "To deploy for real, run:"
echo " $0 $IMAGE_TAG"
echo ""
fi
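One design note on the script above: `ANSIBLE_CMD` is assembled as a string and run through `eval`, which falls apart as soon as a value contains a space. A sketch of the same invocation as a bash array (flags mirror the script; the command is printed here instead of executed):

```shell
#!/bin/bash
# Build the argv as an array - no eval, no word-splitting surprises.
IMAGE_TAG="sha-abc123"
GIT_COMMIT="abc123d"
DRY_RUN="--check"
cmd=(ansible-playbook playbooks/deploy-update.yml
     --vault-password-file secrets/.vault_pass
     -e "image_tag=$IMAGE_TAG"
     -e "git_commit_sha=$GIT_COMMIT")
[ -n "$DRY_RUN" ] && cmd+=("$DRY_RUN")
printf '%s\n' "${cmd[@]}"   # swap for "${cmd[@]}" to actually run it
```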

View File

@@ -1,330 +0,0 @@
#!/bin/bash
set -e
# Rollback Application Deployment
# This script rolls back to a previous deployment using Ansible
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
DEPLOYMENT_DIR="$(dirname "$SCRIPT_DIR")"
ANSIBLE_DIR="$DEPLOYMENT_DIR/ansible"
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color
# Function to print colored messages
print_success() {
echo -e "${GREEN}$1${NC}"
}
print_error() {
echo -e "${RED}$1${NC}"
}
print_warning() {
echo -e "${YELLOW}⚠️ $1${NC}"
}
print_info() {
echo -e "${BLUE} $1${NC}"
}
# Show usage
usage() {
echo "Usage: $0 [version] [options]"
echo ""
echo "Arguments:"
echo " version Backup version to rollback to (e.g., 2025-01-28T15-30-00)"
echo " If not provided, will rollback to previous version"
echo ""
echo "Options:"
echo " --list List available backup versions"
echo " --dry-run Run in check mode without making changes"
echo " --help Show this help message"
echo ""
echo "Examples:"
echo " $0 # Rollback to previous version"
echo " $0 2025-01-28T15-30-00 # Rollback to specific version"
echo " $0 --list # List available backups"
echo " $0 2025-01-28T15-30-00 --dry-run # Test rollback"
exit 1
}
# Parse arguments
ROLLBACK_VERSION=""
LIST_BACKUPS=false
DRY_RUN=""
while [[ $# -gt 0 ]]; do
case $1 in
--list)
LIST_BACKUPS=true
shift
;;
--dry-run)
DRY_RUN="--check"
shift
;;
--help)
usage
;;
*)
if [ -z "$ROLLBACK_VERSION" ]; then
ROLLBACK_VERSION="$1"
shift
else
print_error "Unknown argument: $1"
usage
fi
;;
esac
done
echo ""
echo "🔄 Rollback Application Deployment"
echo "==================================="
echo ""
# Check if running from correct directory
if [ ! -f "$ANSIBLE_DIR/ansible.cfg" ]; then
print_error "Error: Must run from deployment/scripts directory"
exit 1
fi
cd "$ANSIBLE_DIR"
SSH_KEY="$HOME/.ssh/production"
DEPLOY_USER="deploy"
DEPLOY_HOST="94.16.110.151"
# List available backups
list_backups() {
print_info "Fetching available backups from production server..."
echo ""
if ! ssh -i "$SSH_KEY" "$DEPLOY_USER@$DEPLOY_HOST" \
"ls -lt /home/deploy/backups/ 2>/dev/null | tail -n +2"; then
print_error "Failed to list backups"
print_info "Make sure backups exist on production server: /home/deploy/backups/"
exit 1
fi
echo ""
print_info "To rollback to a specific version, run:"
echo " $0 <version>"
echo ""
print_info "To rollback to previous version, run:"
echo " $0"
exit 0
}
# Check Prerequisites
check_prerequisites() {
echo "Checking Prerequisites..."
echo "------------------------"
# Check Ansible
if ! command -v ansible &> /dev/null; then
print_error "Ansible is not installed"
exit 1
fi
print_success "Ansible installed"
# Check vault password
if [ ! -f "$ANSIBLE_DIR/secrets/.vault_pass" ]; then
print_error "Vault password file not found"
exit 1
fi
print_success "Vault password file found"
# Check playbook
if [ ! -f "$ANSIBLE_DIR/playbooks/rollback.yml" ]; then
print_error "Rollback playbook not found"
exit 1
fi
print_success "Rollback playbook found"
# Test connection
print_info "Testing connection to production..."
if ansible production -m ping > /dev/null 2>&1; then
print_success "Connection successful"
else
print_error "Connection to production failed"
exit 1
fi
echo ""
}
# Get current deployment info
get_current_deployment() {
print_info "Fetching current deployment information..."
CURRENT_IMAGE=$(ssh -i "$SSH_KEY" "$DEPLOY_USER@$DEPLOY_HOST" \
"docker service inspect app_app --format '{{.Spec.TaskTemplate.ContainerSpec.Image}}' 2>/dev/null" || echo "unknown")
if [ "$CURRENT_IMAGE" != "unknown" ]; then
print_info "Current deployment: $CURRENT_IMAGE"
else
print_warning "Could not determine current deployment"
fi
echo ""
}
# Main logic
if [ "$LIST_BACKUPS" = true ]; then
list_backups
fi
check_prerequisites
get_current_deployment
# Show available backups
echo "Available Backups"
echo "----------------"
ssh -i "$SSH_KEY" "$DEPLOY_USER@$DEPLOY_HOST" \
"ls -lt /home/deploy/backups/ 2>/dev/null | tail -n +2 | head -10 | grep ." || {
print_warning "No backups found on production server"
echo ""
print_info "Backups are created automatically during deployments"
print_info "You need at least one previous deployment to rollback"
exit 1
}
echo ""
# Rollback Summary
echo "Rollback Summary"
echo "---------------"
if [ -n "$ROLLBACK_VERSION" ]; then
echo " Target Version: $ROLLBACK_VERSION"
echo " Current Image: $CURRENT_IMAGE"
else
echo " Target Version: Previous deployment (most recent backup)"
echo " Current Image: $CURRENT_IMAGE"
fi
if [ -n "$DRY_RUN" ]; then
echo " Mode: DRY RUN (no changes will be made)"
else
echo " Mode: PRODUCTION ROLLBACK"
fi
echo ""
# Confirmation
if [ -z "$DRY_RUN" ]; then
print_warning "⚠️ WARNING: This will rollback your production deployment!"
echo ""
read -p "Are you sure you want to proceed with rollback? (y/N): " -n 1 -r
echo
if [[ ! $REPLY =~ ^[Yy]$ ]]; then
print_warning "Rollback cancelled"
exit 0
fi
fi
echo ""
# Build ansible-playbook command
ANSIBLE_CMD="ansible-playbook $ANSIBLE_DIR/playbooks/rollback.yml"
ANSIBLE_CMD="$ANSIBLE_CMD --vault-password-file $ANSIBLE_DIR/secrets/.vault_pass"
# Add version if specified
if [ -n "$ROLLBACK_VERSION" ]; then
ANSIBLE_CMD="$ANSIBLE_CMD -e rollback_to_version=$ROLLBACK_VERSION"
fi
# Add dry-run flag if set
if [ -n "$DRY_RUN" ]; then
ANSIBLE_CMD="$ANSIBLE_CMD $DRY_RUN"
fi
# Execute rollback
print_info "Starting rollback..."
echo ""
if eval "$ANSIBLE_CMD"; then
echo ""
if [ -z "$DRY_RUN" ]; then
print_success "Rollback completed successfully!"
else
print_success "Dry run completed successfully!"
fi
else
echo ""
print_error "Rollback failed!"
echo ""
print_info "Check the Ansible output above for error details"
exit 1
fi
echo ""
# Post-rollback checks
if [ -z "$DRY_RUN" ]; then
echo "Post-Rollback Checks"
echo "-------------------"
# Check service status
print_info "Checking service status..."
ssh -i "$SSH_KEY" "$DEPLOY_USER@$DEPLOY_HOST" "docker service ls --filter name=app_" || true
echo ""
# Get new image
NEW_IMAGE=$(ssh -i "$SSH_KEY" "$DEPLOY_USER@$DEPLOY_HOST" \
"docker service inspect app_app --format '{{.Spec.TaskTemplate.ContainerSpec.Image}}' 2>/dev/null" || echo "unknown")
if [ "$NEW_IMAGE" != "unknown" ]; then
print_info "Rolled back to: $NEW_IMAGE"
fi
echo ""
# Show recent logs
print_info "Recent application logs:"
ssh -i "$SSH_KEY" "$DEPLOY_USER@$DEPLOY_HOST" "docker service logs --tail 20 app_app" || true
echo ""
# Summary
echo "✅ Rollback Complete!"
echo "===================="
echo ""
echo "Rollback Details:"
echo " From: $CURRENT_IMAGE"
echo " To: $NEW_IMAGE"
if [ -n "$ROLLBACK_VERSION" ]; then
echo " Version: $ROLLBACK_VERSION"
else
echo " Version: Previous deployment"
fi
echo ""
echo "Next Steps:"
echo ""
echo "1. Monitor application:"
echo " ssh -i $SSH_KEY $DEPLOY_USER@$DEPLOY_HOST 'docker service logs -f app_app'"
echo ""
echo "2. Check service status:"
echo " ssh -i $SSH_KEY $DEPLOY_USER@$DEPLOY_HOST 'docker service ps app_app'"
echo ""
echo "3. Test application:"
echo " curl https://michaelschiemer.de/health"
echo ""
echo "4. If rollback didn't fix the issue, check available backups:"
echo " $0 --list"
echo ""
else
echo "Dry Run Summary"
echo "==============="
echo ""
echo "This was a dry run. No changes were made to production."
echo ""
if [ -n "$ROLLBACK_VERSION" ]; then
echo "To rollback for real, run:"
echo " $0 $ROLLBACK_VERSION"
else
echo "To rollback for real, run:"
echo " $0"
fi
echo ""
fi
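The backup versions the script expects (e.g. `2025-01-28T15-30-00`) look like ISO-8601 timestamps with colons replaced by dashes so they are filesystem-safe. A sketch for generating and sanity-checking one (the exact naming scheme is an assumption based on the usage text above):

```shell
#!/bin/sh
# Generate a filesystem-safe backup version and validate its shape.
version="$(date -u +%Y-%m-%dT%H-%M-%S)"
echo "$version"
echo "$version" | grep -Eq '^[0-9]{4}-[0-9]{2}-[0-9]{2}T[0-9]{2}-[0-9]{2}-[0-9]{2}$' \
  && echo "valid backup version"
```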

View File

@@ -1,211 +0,0 @@
#!/bin/bash
set -e
# Setup Production Server
# This script performs initial production server setup with Ansible
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
DEPLOYMENT_DIR="$(dirname "$SCRIPT_DIR")"
ANSIBLE_DIR="$DEPLOYMENT_DIR/ansible"
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color
echo ""
echo "🚀 Production Server Setup"
echo "=========================="
echo ""
# Function to print colored messages
print_success() {
echo -e "${GREEN}$1${NC}"
}
print_error() {
echo -e "${RED}$1${NC}"
}
print_warning() {
echo -e "${YELLOW}⚠️ $1${NC}"
}
print_info() {
echo -e "${BLUE} $1${NC}"
}
# Check if running from correct directory
if [ ! -f "$ANSIBLE_DIR/ansible.cfg" ]; then
print_error "Error: Must run from deployment/scripts directory"
exit 1
fi
cd "$ANSIBLE_DIR"
# Step 1: Check Prerequisites
echo "Step 1: Checking Prerequisites"
echo "------------------------------"
# Check Ansible installed
if ! command -v ansible &> /dev/null; then
print_error "Ansible is not installed"
echo ""
echo "Install Ansible:"
echo " pip install ansible"
exit 1
fi
print_success "Ansible is installed: $(ansible --version | head -n1)"
# Check Ansible playbooks exist
if [ ! -f "$ANSIBLE_DIR/playbooks/setup-production-secrets.yml" ]; then
print_error "Ansible playbooks not found"
exit 1
fi
print_success "Ansible playbooks found"
# Check SSH key
SSH_KEY="$HOME/.ssh/production"
if [ ! -f "$SSH_KEY" ]; then
print_warning "SSH key not found: $SSH_KEY"
echo ""
read -p "Do you want to create SSH key now? (y/N): " -n 1 -r
echo
if [[ $REPLY =~ ^[Yy]$ ]]; then
ssh-keygen -t ed25519 -f "$SSH_KEY" -C "ansible-deploy"
chmod 600 "$SSH_KEY"
chmod 644 "$SSH_KEY.pub"
print_success "SSH key created"
echo ""
echo "📋 Public key:"
cat "$SSH_KEY.pub"
echo ""
print_warning "You must add this public key to the production server:"
echo " ssh-copy-id -i $SSH_KEY.pub deploy@94.16.110.151"
echo ""
read -p "Press ENTER after adding SSH key to server..."
else
print_error "SSH key is required for Ansible"
exit 1
fi
else
print_success "SSH key found: $SSH_KEY"
fi
echo ""
# Step 2: Setup Ansible Secrets
echo "Step 2: Setup Ansible Secrets"
echo "-----------------------------"
# Check if vault file exists
if [ ! -f "$ANSIBLE_DIR/secrets/production.vault.yml" ]; then
print_warning "Vault file not found"
echo ""
read -p "Do you want to run init-secrets.sh now? (Y/n): " -n 1 -r
echo
if [[ ! $REPLY =~ ^[Nn]$ ]]; then
"$ANSIBLE_DIR/scripts/init-secrets.sh"
else
print_error "Vault file is required"
exit 1
fi
else
print_success "Vault file exists"
fi
# Check vault password file
if [ ! -f "$ANSIBLE_DIR/secrets/.vault_pass" ]; then
print_error "Vault password file not found: secrets/.vault_pass"
echo ""
echo "Run init-secrets.sh to create vault password file:"
echo " $ANSIBLE_DIR/scripts/init-secrets.sh"
exit 1
fi
print_success "Vault password file found"
# Verify vault can be decrypted
if ! ansible-vault view "$ANSIBLE_DIR/secrets/production.vault.yml" \
--vault-password-file "$ANSIBLE_DIR/secrets/.vault_pass" > /dev/null 2>&1; then
print_error "Failed to decrypt vault file"
echo "Check your vault password in: secrets/.vault_pass"
exit 1
fi
print_success "Vault file can be decrypted"
echo ""
# Step 3: Test Connection
echo "Step 3: Test Connection to Production"
echo "-------------------------------------"
if ansible production -m ping 2>&1 | grep -q "SUCCESS"; then
print_success "Connection to production server successful"
else
print_error "Connection to production server failed"
echo ""
echo "Troubleshooting steps:"
echo "1. Test SSH manually: ssh -i $SSH_KEY deploy@94.16.110.151"
echo "2. Verify SSH key is added: ssh-copy-id -i $SSH_KEY.pub deploy@94.16.110.151"
echo "3. Check inventory file: cat $ANSIBLE_DIR/inventory/production.yml"
exit 1
fi
echo ""
# Step 4: Deploy Secrets to Production
echo "Step 4: Deploy Secrets to Production"
echo "------------------------------------"
read -p "Deploy secrets to production server? (Y/n): " -n 1 -r
echo
if [[ ! $REPLY =~ ^[Nn]$ ]]; then
print_info "Deploying secrets to production..."
echo ""
if ansible-playbook "$ANSIBLE_DIR/playbooks/setup-production-secrets.yml" \
--vault-password-file "$ANSIBLE_DIR/secrets/.vault_pass"; then
print_success "Secrets deployed successfully"
else
print_error "Failed to deploy secrets"
exit 1
fi
else
print_warning "Skipped secrets deployment"
fi
echo ""
# Step 5: Verify Docker Services
echo "Step 5: Verify Docker Services"
echo "------------------------------"
print_info "Checking Docker services on production..."
echo ""
ssh -i "$SSH_KEY" deploy@94.16.110.151 "docker node ls" || true
echo ""
ssh -i "$SSH_KEY" deploy@94.16.110.151 "docker service ls" || true
echo ""
# Summary
echo ""
echo "✅ Production Server Setup Complete!"
echo "===================================="
echo ""
echo "Next Steps:"
echo ""
echo "1. Verify secrets are deployed:"
echo " ssh -i $SSH_KEY deploy@94.16.110.151 'cat /home/deploy/secrets/.env'"
echo ""
echo "2. Deploy your application:"
echo " $SCRIPT_DIR/deploy.sh <image-tag>"
echo ""
echo "3. Monitor deployment:"
echo " ssh -i $SSH_KEY deploy@94.16.110.151 'docker service logs -f app_app'"
echo ""
echo "📖 For more information, see: $ANSIBLE_DIR/README.md"
echo ""

View File

@@ -38,3 +38,11 @@ QUEUE_CONNECTION=default
QUEUE_WORKER_SLEEP=3
QUEUE_WORKER_TRIES=3
QUEUE_WORKER_TIMEOUT=60
# Git Repository Configuration (optional - if set, container will clone/pull code on start)
# Uncomment to enable Git-based deployment:
# GIT_REPOSITORY_URL=https://git.michaelschiemer.de/michael/michaelschiemer.git
# GIT_BRANCH=main
# GIT_TOKEN=
# GIT_USERNAME=
# GIT_PASSWORD=
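When `GIT_TOKEN` is set, the compose entrypoints splice it into the clone URL with `sed`, exactly like this sketch (dummy values; note that a token embedded this way shows up in process listings and logs):

```shell
#!/bin/sh
GIT_REPOSITORY_URL="https://git.michaelschiemer.de/michael/michaelschiemer.git"
GIT_TOKEN="dummy-token"   # placeholder, never a real credential
url_with_auth=$(echo "$GIT_REPOSITORY_URL" | sed "s|https://|https://${GIT_TOKEN}@|")
echo "$url_with_auth"
# -> https://dummy-token@git.michaelschiemer.de/michael/michaelschiemer.git
```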

View File

@@ -1,10 +1,9 @@
version: '3.8'
# Docker Registry: registry.michaelschiemer.de (HTTPS via Traefik)
services:
# PHP-FPM Application Runtime
app:
image: registry.michaelschiemer.de/framework:latest
image: git.michaelschiemer.de:5000/framework:latest
container_name: app
restart: unless-stopped
networks:
@@ -55,8 +54,9 @@ services:
condition: service_started
# Nginx Web Server
# Uses same image as app - clones code from Git if GIT_REPOSITORY_URL is set, then runs nginx
nginx:
image: nginx:1.25-alpine
image: git.michaelschiemer.de:5000/framework:latest
container_name: nginx
restart: unless-stopped
networks:
@@ -64,12 +64,89 @@ services:
- app-internal
environment:
- TZ=Europe/Berlin
- APP_ENV=${APP_ENV:-production}
- APP_DEBUG=${APP_DEBUG:-false}
# Git Repository (same as app - will clone code on start)
- GIT_REPOSITORY_URL=${GIT_REPOSITORY_URL:-}
- GIT_BRANCH=${GIT_BRANCH:-main}
- GIT_TOKEN=${GIT_TOKEN:-}
- GIT_USERNAME=${GIT_USERNAME:-}
- GIT_PASSWORD=${GIT_PASSWORD:-}
volumes:
- ./nginx/conf.d:/etc/nginx/conf.d:ro
- app-code:/var/www/html:ro
- app-storage:/var/www/html/storage:ro
- /etc/timezone:/etc/timezone:ro
- /etc/localtime:/etc/localtime:ro
# Use custom entrypoint that ensures code is available then starts nginx only (no PHP-FPM)
entrypoint: ["/bin/sh", "-c"]
command:
- |
# Ensure code is available in /var/www/html (from image or Git)
GIT_TARGET_DIR="/var/www/html"
# If storage is mounted but code is missing, copy from image's original location
if [ ! -d "$$GIT_TARGET_DIR/public" ] && [ -d "/var/www/html.orig" ]; then
echo "📋 [nginx] Copying code from image..."
# Copy everything except storage (which is a volume mount)
find /var/www/html.orig -mindepth 1 -maxdepth 1 ! -name "storage" -exec cp -r {} "$$GIT_TARGET_DIR/" \; 2>/dev/null || true
fi
if [ -n "$$GIT_REPOSITORY_URL" ]; then
# Configure Git to be non-interactive
export GIT_TERMINAL_PROMPT=0
export GIT_ASKPASS=echo
# Determine authentication method
if [ -n "$$GIT_TOKEN" ]; then
GIT_URL_WITH_AUTH=$$(echo "$$GIT_REPOSITORY_URL" | sed "s|https://|https://$${GIT_TOKEN}@|")
elif [ -n "$$GIT_USERNAME" ] && [ -n "$$GIT_PASSWORD" ]; then
GIT_URL_WITH_AUTH=$$(echo "$$GIT_REPOSITORY_URL" | sed "s|https://|https://$${GIT_USERNAME}:$${GIT_PASSWORD}@|")
else
echo "⚠️ [nginx] No Git credentials provided (GIT_TOKEN or GIT_USERNAME/GIT_PASSWORD). Using image contents."
GIT_URL_WITH_AUTH=""
fi
if [ -n "$$GIT_URL_WITH_AUTH" ] && [ ! -d "$$GIT_TARGET_DIR/.git" ]; then
echo "📥 [nginx] Cloning repository from $$GIT_REPOSITORY_URL (branch: $${GIT_BRANCH:-main})..."
# Remove only files/dirs that are not storage (which is a volume mount)
# Clone into a temporary directory first, then move contents
TEMP_CLONE="$${GIT_TARGET_DIR}.tmp"
rm -rf "$$TEMP_CLONE" 2>/dev/null || true
if git clone --branch "$${GIT_BRANCH:-main}" --depth 1 "$$GIT_URL_WITH_AUTH" "$$TEMP_CLONE"; then
# Remove only files/dirs that are not storage (which is a volume mount)
find "$$GIT_TARGET_DIR" -mindepth 1 -maxdepth 1 ! -name "storage" -exec rm -rf {} \; 2>/dev/null || true
# Move contents from temp directory to target (preserving storage)
find "$$TEMP_CLONE" -mindepth 1 -maxdepth 1 -exec mv {} "$$GIT_TARGET_DIR/" \; 2>/dev/null || true
rm -rf "$$TEMP_CLONE" 2>/dev/null || true
echo "✅ [nginx] Repository cloned successfully"
else
echo "❌ Git clone failed. Using image contents."
rm -rf "$$TEMP_CLONE" 2>/dev/null || true
fi
else
echo "🔄 [nginx] Pulling latest changes..."
cd "$$GIT_TARGET_DIR"
git fetch origin "$${GIT_BRANCH:-main}" || true
git reset --hard "origin/$${GIT_BRANCH:-main}" || true
git clean -fd || true
fi
if [ -f "$$GIT_TARGET_DIR/composer.json" ]; then
echo "📦 [nginx] Installing dependencies..."
cd "$$GIT_TARGET_DIR"
composer install --no-dev --optimize-autoloader --no-interaction --no-scripts || true
composer dump-autoload --optimize --classmap-authoritative || true
fi
echo "✅ [nginx] Git sync completed"
else
echo "ℹ️ [nginx] GIT_REPOSITORY_URL not set, using code from image"
fi
# Start nginx only (no PHP-FPM)
echo "🚀 [nginx] Starting nginx..."
exec nginx -g "daemon off;"
labels:
- "traefik.enable=true"
# HTTP Router
@@ -84,7 +161,7 @@ services:
# Network
- "traefik.docker.network=traefik-public"
healthcheck:
test: ["CMD-SHELL", "wget --spider -q http://127.0.0.1/health || exit 1"]
test: ["CMD-SHELL", "curl -f http://127.0.0.1/health || exit 1"]
interval: 30s
timeout: 10s
retries: 3
@@ -125,7 +202,7 @@ services:
# Queue Worker (Background Jobs)
queue-worker:
image: registry.michaelschiemer.de/framework:latest
image: git.michaelschiemer.de:5000/framework:latest
container_name: queue-worker
restart: unless-stopped
networks:
@@ -170,7 +247,7 @@ services:
# Scheduler (Cron Jobs)
scheduler:
image: registry.michaelschiemer.de/framework:latest
image: git.michaelschiemer.de:5000/framework:latest
container_name: scheduler
restart: unless-stopped
networks:
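The switch from `wget --spider` to `curl -f` in the healthcheck above works because `--fail` maps HTTP status >= 400 to a non-zero exit code (22), which Docker counts as unhealthy. A stub reproduction that needs no network:

```shell
#!/bin/sh
probe() { return "$1"; }   # stand-in for: curl -f http://127.0.0.1/health
if probe 0; then echo healthy; fi        # exit 0 -> healthy
if ! probe 22; then echo unhealthy; fi   # curl --fail uses 22 for HTTP errors
```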

View File

@@ -0,0 +1,286 @@
version: '3.8'
# Docker Registry: registry.michaelschiemer.de (HTTPS via Traefik)
services:
# PHP-FPM Application Runtime
app:
image: git.michaelschiemer.de:5000/framework:latest
container_name: app
restart: unless-stopped
networks:
- app-internal
environment:
- TZ=Europe/Berlin
- APP_ENV=${APP_ENV:-production}
- APP_DEBUG=${APP_DEBUG:-false}
- APP_URL=${APP_URL:-https://michaelschiemer.de}
# Git Repository (optional - if set, container will clone/pull code on start)
- GIT_REPOSITORY_URL=${GIT_REPOSITORY_URL:-}
- GIT_BRANCH=${GIT_BRANCH:-main}
- GIT_TOKEN=${GIT_TOKEN:-}
- GIT_USERNAME=${GIT_USERNAME:-}
- GIT_PASSWORD=${GIT_PASSWORD:-}
# Database
- DB_HOST=${DB_HOST:-postgres}
- DB_PORT=${DB_PORT:-5432}
- DB_DATABASE=${DB_DATABASE}
- DB_USERNAME=${DB_USERNAME}
- DB_PASSWORD=${DB_PASSWORD}
# Redis
- REDIS_HOST=redis
- REDIS_PORT=6379
- REDIS_PASSWORD=${REDIS_PASSWORD}
# Cache
- CACHE_DRIVER=redis
- CACHE_PREFIX=${CACHE_PREFIX:-app}
# Session
- SESSION_DRIVER=redis
- SESSION_LIFETIME=${SESSION_LIFETIME:-120}
# Queue
- QUEUE_DRIVER=redis
- QUEUE_CONNECTION=default
volumes:
- app-storage:/var/www/html/storage
- app-logs:/var/www/html/storage/logs
- /etc/timezone:/etc/timezone:ro
- /etc/localtime:/etc/localtime:ro
healthcheck:
test: ["CMD-SHELL", "true"]
interval: 30s
timeout: 10s
retries: 3
start_period: 40s
depends_on:
redis:
condition: service_started
# Nginx Web Server
# Uses same image as app - clones code from Git if GIT_REPOSITORY_URL is set, then runs nginx
nginx:
image: git.michaelschiemer.de:5000/framework:latest
container_name: nginx
restart: unless-stopped
networks:
- traefik-public
- app-internal
environment:
- TZ=Europe/Berlin
- APP_ENV=${APP_ENV:-production}
- APP_DEBUG=${APP_DEBUG:-false}
# Git Repository (same as app - will clone code on start)
- GIT_REPOSITORY_URL=${GIT_REPOSITORY_URL:-}
- GIT_BRANCH=${GIT_BRANCH:-main}
- GIT_TOKEN=${GIT_TOKEN:-}
- GIT_USERNAME=${GIT_USERNAME:-}
- GIT_PASSWORD=${GIT_PASSWORD:-}
volumes:
- ./nginx/conf.d:/etc/nginx/conf.d:ro
- app-storage:/var/www/html/storage:ro
- /etc/timezone:/etc/timezone:ro
- /etc/localtime:/etc/localtime:ro
# Use custom entrypoint that ensures code is available then starts nginx only (no PHP-FPM)
entrypoint: ["/bin/sh", "-c"]
command:
- |
# Ensure code is available in /var/www/html (from image or Git)
GIT_TARGET_DIR="/var/www/html"
# If storage is mounted but code is missing, copy from image's original location
if [ ! -d "$$GIT_TARGET_DIR/public" ] && [ -d "/var/www/html.orig" ]; then
echo "📋 [nginx] Copying code from image..."
# Copy everything except storage (which is a volume mount)
find /var/www/html.orig -mindepth 1 -maxdepth 1 ! -name "storage" -exec cp -r {} "$$GIT_TARGET_DIR/" \; 2>/dev/null || true
fi
if [ -n "$$GIT_REPOSITORY_URL" ]; then
# Determine authentication method
if [ -n "$$GIT_TOKEN" ]; then
GIT_URL_WITH_AUTH=$$(echo "$$GIT_REPOSITORY_URL" | sed "s|https://|https://$${GIT_TOKEN}@|")
elif [ -n "$$GIT_USERNAME" ] && [ -n "$$GIT_PASSWORD" ]; then
GIT_URL_WITH_AUTH=$$(echo "$$GIT_REPOSITORY_URL" | sed "s|https://|https://$${GIT_USERNAME}:$${GIT_PASSWORD}@|")
else
GIT_URL_WITH_AUTH="$$GIT_REPOSITORY_URL"
fi
if [ ! -d "$$GIT_TARGET_DIR/.git" ]; then
echo "📥 [nginx] Cloning repository from $$GIT_REPOSITORY_URL (branch: $${GIT_BRANCH:-main})..."
# Clone into a temp dir first: git refuses to clone into a non-empty directory, and storage (a volume mount) always remains in the target
TEMP_CLONE="$${GIT_TARGET_DIR}.tmp"
rm -rf "$$TEMP_CLONE" 2>/dev/null || true
if git clone --branch "$${GIT_BRANCH:-main}" --depth 1 "$$GIT_URL_WITH_AUTH" "$$TEMP_CLONE"; then
find "$$GIT_TARGET_DIR" -mindepth 1 -maxdepth 1 ! -name "storage" -exec rm -rf {} \; 2>/dev/null || true
find "$$TEMP_CLONE" -mindepth 1 -maxdepth 1 -exec mv {} "$$GIT_TARGET_DIR/" \; 2>/dev/null || true
echo "✅ [nginx] Repository cloned successfully"
else
echo "❌ Git clone failed. Using image contents."
fi
rm -rf "$$TEMP_CLONE" 2>/dev/null || true
else
echo "🔄 [nginx] Pulling latest changes..."
cd "$$GIT_TARGET_DIR"
git fetch origin "$${GIT_BRANCH:-main}" || true
git reset --hard "origin/$${GIT_BRANCH:-main}" || true
git clean -fd || true
fi
if [ -f "$$GIT_TARGET_DIR/composer.json" ]; then
echo "📦 [nginx] Installing dependencies..."
cd "$$GIT_TARGET_DIR"
composer install --no-dev --optimize-autoloader --no-interaction --no-scripts || true
composer dump-autoload --optimize --classmap-authoritative || true
fi
echo "✅ [nginx] Git sync completed"
else
echo "ℹ️ [nginx] GIT_REPOSITORY_URL not set, using code from image"
fi
# Start nginx only (no PHP-FPM)
echo "🚀 [nginx] Starting nginx..."
exec nginx -g "daemon off;"
labels:
- "traefik.enable=true"
# HTTP Router
- "traefik.http.routers.app.rule=Host(`${APP_DOMAIN:-michaelschiemer.de}`)"
- "traefik.http.routers.app.entrypoints=websecure"
- "traefik.http.routers.app.tls=true"
- "traefik.http.routers.app.tls.certresolver=letsencrypt"
# Service
- "traefik.http.services.app.loadbalancer.server.port=80"
# Middleware
- "traefik.http.routers.app.middlewares=default-chain@file"
# Network
- "traefik.docker.network=traefik-public"
healthcheck:
test: ["CMD-SHELL", "curl -f http://127.0.0.1/health || exit 1"]
interval: 30s
timeout: 10s
retries: 3
start_period: 10s
depends_on:
app:
condition: service_started
# Redis Cache/Session/Queue Backend
redis:
image: redis:7-alpine
container_name: redis
restart: unless-stopped
networks:
- app-internal
environment:
- TZ=Europe/Berlin
command: >
redis-server
--requirepass ${REDIS_PASSWORD}
--maxmemory 512mb
--maxmemory-policy allkeys-lru
--save 900 1
--save 300 10
--save 60 10000
--appendonly yes
--appendfsync everysec
volumes:
- redis-data:/data
- /etc/timezone:/etc/timezone:ro
- /etc/localtime:/etc/localtime:ro
healthcheck:
test: ["CMD-SHELL", "redis-cli -a \"${REDIS_PASSWORD}\" --no-auth-warning ping | grep -q PONG"]
interval: 30s
timeout: 10s
retries: 3
start_period: 10s
# Queue Worker (Background Jobs)
queue-worker:
image: git.michaelschiemer.de:5000/framework:latest
container_name: queue-worker
restart: unless-stopped
networks:
- app-internal
environment:
- TZ=Europe/Berlin
- APP_ENV=${APP_ENV:-production}
- APP_DEBUG=${APP_DEBUG:-false}
# Database
- DB_HOST=${DB_HOST:-postgres}
- DB_PORT=${DB_PORT:-5432}
- DB_DATABASE=${DB_DATABASE}
- DB_USERNAME=${DB_USERNAME}
- DB_PASSWORD=${DB_PASSWORD}
# Redis
- REDIS_HOST=redis
- REDIS_PORT=6379
- REDIS_PASSWORD=${REDIS_PASSWORD}
# Queue
- QUEUE_DRIVER=redis
- QUEUE_CONNECTION=default
- QUEUE_WORKER_SLEEP=${QUEUE_WORKER_SLEEP:-3}
- QUEUE_WORKER_TRIES=${QUEUE_WORKER_TRIES:-3}
- QUEUE_WORKER_TIMEOUT=${QUEUE_WORKER_TIMEOUT:-60}
volumes:
- app-storage:/var/www/html/storage
- app-logs:/var/www/html/storage/logs
- /etc/timezone:/etc/timezone:ro
- /etc/localtime:/etc/localtime:ro
command: php console.php queue:work --queue=default --timeout=${QUEUE_WORKER_TIMEOUT:-60}
healthcheck:
test: ["CMD-SHELL", "php -r 'exit(0);' && test -f /var/www/html/console.php || exit 1"]
interval: 60s
timeout: 10s
retries: 3
start_period: 30s
depends_on:
app:
condition: service_started
redis:
condition: service_started
# Scheduler (Cron Jobs)
scheduler:
image: git.michaelschiemer.de:5000/framework:latest
container_name: scheduler
restart: unless-stopped
networks:
- app-internal
environment:
- TZ=Europe/Berlin
- APP_ENV=${APP_ENV:-production}
- APP_DEBUG=${APP_DEBUG:-false}
# Database
- DB_HOST=${DB_HOST:-postgres}
- DB_PORT=${DB_PORT:-5432}
- DB_DATABASE=${DB_DATABASE}
- DB_USERNAME=${DB_USERNAME}
- DB_PASSWORD=${DB_PASSWORD}
# Redis
- REDIS_HOST=redis
- REDIS_PORT=6379
- REDIS_PASSWORD=${REDIS_PASSWORD}
volumes:
- app-storage:/var/www/html/storage
- app-logs:/var/www/html/storage/logs
- /etc/timezone:/etc/timezone:ro
- /etc/localtime:/etc/localtime:ro
command: php console.php scheduler:run
healthcheck:
test: ["CMD-SHELL", "php -r 'exit(0);' && test -f /var/www/html/console.php || exit 1"]
interval: 60s
timeout: 10s
retries: 3
start_period: 30s
depends_on:
app:
condition: service_started
redis:
condition: service_started
volumes:
app-code:
name: app-code
app-storage:
name: app-storage
app-logs:
name: app-logs
redis-data:
name: redis-data
networks:
traefik-public:
external: true
app-internal:
external: true
name: app-internal

View File

@@ -119,6 +119,21 @@ docker compose ps
git clone ssh://git@git.michaelschiemer.de:2222/username/repo.git
```
### Configuration File
Gitea configuration is managed via `app.ini` file:
- **Local file**: `deployment/stacks/gitea/app.ini` (for local development)
- **Production**: Generated from Ansible template `deployment/ansible/templates/gitea-app.ini.j2`
- The `app.ini` is mounted read-only into the container at `/data/gitea/conf/app.ini`
- Configuration is based on the official Gitea example: https://github.com/go-gitea/gitea/blob/main/custom/conf/app.example.ini
**Key Configuration Sections:**
- `[server]`: Domain, ports, SSH settings
- `[database]`: PostgreSQL connection
- `[actions]`: Actions enabled, no GitHub dependency
- `[service]`: Registration settings
- `[cache]` / `[session]` / `[queue]`: Storage configuration
### Gitea Actions
Gitea Actions (GitHub Actions compatible) are enabled by default. To use them:

View File

@@ -0,0 +1,80 @@
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; Gitea Configuration File
;; This file is based on the official Gitea example configuration
;; https://github.com/go-gitea/gitea/blob/main/custom/conf/app.example.ini
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; General Settings
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
APP_NAME = Gitea: Git with a cup of tea
RUN_MODE = prod
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; Server Configuration
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
[server]
PROTOCOL = http
DOMAIN = git.michaelschiemer.de
HTTP_ADDR = 0.0.0.0
HTTP_PORT = 3000
ROOT_URL = https://git.michaelschiemer.de/
PUBLIC_URL_DETECTION = auto
;; SSH Configuration
DISABLE_SSH = false
START_SSH_SERVER = true
SSH_DOMAIN = git.michaelschiemer.de
;; Port advertised in SSH clone URLs (host-side mapping, see docs); the container listens on SSH_LISTEN_PORT
SSH_PORT = 2222
SSH_LISTEN_PORT = 22
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; Database Configuration
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
[database]
DB_TYPE = postgres
HOST = postgres:5432
NAME = gitea
USER = gitea
PASSWD = gitea_password
SSL_MODE = disable
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; Cache Configuration
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
[cache]
ENABLED = false
ADAPTER = memory
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; Session Configuration
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
[session]
PROVIDER = file
PROVIDER_CONFIG = data/sessions
COOKIE_SECURE = true
COOKIE_NAME = i_like_gitea
GC_INTERVAL_TIME = 86400
SESSION_LIFE_TIME = 86400
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; Queue Configuration
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
[queue]
TYPE = channel
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; Service Configuration
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
[service]
DISABLE_REGISTRATION = true
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; Actions Configuration
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
[actions]
ENABLED = true
;; Use "self" to use the current Gitea instance for actions (not GitHub)
;; Do NOT set DEFAULT_ACTIONS_URL to a custom URL - it's not supported
;; Leaving it unset or setting to "self" will use the current instance
;DEFAULT_ACTIONS_URL = self

View File

@@ -1,5 +1,3 @@
version: '3.8'
services:
gitea:
image: gitea/gitea:1.25

View File

@@ -0,0 +1,17 @@
# MinIO Object Storage Stack Configuration
# Copy this file to .env and adjust values
# Timezone
TZ=Europe/Berlin
# MinIO Root Credentials
# Generate secure password with: openssl rand -base64 32
MINIO_ROOT_USER=minioadmin
MINIO_ROOT_PASSWORD=<generate-with-openssl-rand-base64-32>
# Domain Configuration
# API endpoint (S3-compatible)
MINIO_API_DOMAIN=minio-api.michaelschiemer.de
# Console endpoint (Web UI)
MINIO_CONSOLE_DOMAIN=minio.michaelschiemer.de

View File

@@ -0,0 +1,657 @@
# MinIO Object Storage Stack - S3-compatible Object Storage
## Overview
MinIO is a high-performance, S3-compatible object storage service for private cloud and edge computing environments.
**Features**:
- S3-kompatible API (Port 9000)
- Web-basierte Console für Management (Port 9001)
- SSL via Traefik
- Persistent storage
- Health checks und Monitoring
- Multi-Tenant Bucket Management
## Services
- **minio-api.michaelschiemer.de** - S3-kompatible API Endpoint
- **minio.michaelschiemer.de** - Web Console (Management UI)
## Prerequisites
1. **Traefik Stack Running**
```bash
cd ../traefik
docker compose up -d
```
2. **DNS Configuration**
Point these domains to your server IP (94.16.110.151):
- `minio-api.michaelschiemer.de`
- `minio.michaelschiemer.de`
## Configuration
### 1. Create Environment File
```bash
cp .env.example .env
```
### 2. Generate MinIO Root Password
```bash
openssl rand -base64 32
```
Update `.env`:
```env
MINIO_ROOT_PASSWORD=<generated-password>
```
**Important**: Change default `MINIO_ROOT_USER` in production!
### 3. Adjust Domains (Optional)
Edit `.env` to customize domains:
```env
MINIO_API_DOMAIN=storage-api.example.com
MINIO_CONSOLE_DOMAIN=storage.example.com
```
## Deployment
### Initial Setup
```bash
# Ensure Traefik is running
docker network inspect traefik-public
# Start MinIO
docker compose up -d
# Check logs
docker compose logs -f
# Verify health
docker compose ps
```
### Verify Deployment
```bash
# Test API endpoint
curl -I https://minio-api.michaelschiemer.de/minio/health/live
# Expected: HTTP/2 200
# Access Console
open https://minio.michaelschiemer.de
```
## Usage
### Web Console Access
1. Navigate to: https://minio.michaelschiemer.de
2. Login with:
- **Access Key**: Value of `MINIO_ROOT_USER`
- **Secret Key**: Value of `MINIO_ROOT_PASSWORD`
### Create Bucket via Console
1. Login to Console
2. Click "Create Bucket"
3. Enter bucket name (e.g., `my-bucket`)
4. Configure bucket settings:
- Versioning
- Object Locking
- Quota
- Retention policies
### S3 API Access
#### Using AWS CLI
```bash
# Install AWS CLI (if not installed)
pip install awscli
# Configure MinIO endpoint (path-style addressing, matching the SDK examples below)
aws configure set default.s3.signature_version s3v4
aws configure set default.s3.addressing_style path
# Set credentials
export AWS_ACCESS_KEY_ID=minioadmin
export AWS_SECRET_ACCESS_KEY=<MINIO_ROOT_PASSWORD>
# Test connection
aws --endpoint-url https://minio-api.michaelschiemer.de s3 ls
# Create bucket
aws --endpoint-url https://minio-api.michaelschiemer.de s3 mb s3://my-bucket
# Upload file
aws --endpoint-url https://minio-api.michaelschiemer.de s3 cp file.txt s3://my-bucket/
# Download file
aws --endpoint-url https://minio-api.michaelschiemer.de s3 cp s3://my-bucket/file.txt ./
# List objects
aws --endpoint-url https://minio-api.michaelschiemer.de s3 ls s3://my-bucket/
# Delete object
aws --endpoint-url https://minio-api.michaelschiemer.de s3 rm s3://my-bucket/file.txt
# Delete bucket
aws --endpoint-url https://minio-api.michaelschiemer.de s3 rb s3://my-bucket
```
#### Using MinIO Client (mc)
```bash
# Install MinIO Client
wget https://dl.min.io/client/mc/release/linux-amd64/mc
chmod +x mc
sudo mv mc /usr/local/bin/
# Configure alias
mc alias set minio https://minio-api.michaelschiemer.de minioadmin <MINIO_ROOT_PASSWORD>
# Test connection
mc admin info minio
# List buckets
mc ls minio
# Create bucket
mc mb minio/my-bucket
# Upload file
mc cp file.txt minio/my-bucket/
# Download file
mc cp minio/my-bucket/file.txt ./
# List objects
mc ls minio/my-bucket/
# Remove object
mc rm minio/my-bucket/file.txt
# Remove bucket
mc rb minio/my-bucket
```
#### Using cURL
The S3 API requires AWS Signature v4 request signing, so plain basic auth (`curl -u`) will not authenticate bucket operations. Use the AWS CLI or `mc` for authenticated requests; only unauthenticated endpoints can be queried directly:
```bash
# Health check (no authentication required)
curl -I https://minio-api.michaelschiemer.de/minio/health/live
# Authenticated operations: generate a presigned URL via the Console or an SDK,
# then upload/download with curl against that URL
```
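To see why basic auth cannot work here: every S3 request is authenticated with AWS Signature v4, where a signing key is derived from the secret key, date, region, and service. A minimal stdlib-only sketch of that derivation (credential values are placeholders, not real keys):

```python
import hashlib
import hmac


def signing_key(secret_key: str, date: str, region: str, service: str = "s3") -> bytes:
    """Derive the SigV4 signing key via the HMAC chain over date, region, service."""
    k_date = hmac.new(("AWS4" + secret_key).encode(), date.encode(), hashlib.sha256).digest()
    k_region = hmac.new(k_date, region.encode(), hashlib.sha256).digest()
    k_service = hmac.new(k_region, service.encode(), hashlib.sha256).digest()
    return hmac.new(k_service, b"aws4_request", hashlib.sha256).digest()


def sign(secret_key: str, date: str, region: str, string_to_sign: str) -> str:
    """Sign the canonical string-to-sign and return the hex signature."""
    key = signing_key(secret_key, date, region)
    return hmac.new(key, string_to_sign.encode(), hashlib.sha256).hexdigest()
```

The resulting 64-character hex signature goes into the `Authorization` header; SDKs and `mc` do this per request, which is why they are the practical choice over raw curl.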
### Programmatic Access
#### PHP (Using AWS SDK)
```php
use Aws\S3\S3Client;
use Aws\Exception\AwsException;
$s3Client = new S3Client([
'version' => 'latest',
'region' => 'us-east-1',
'endpoint' => 'https://minio-api.michaelschiemer.de',
'use_path_style_endpoint' => true,
'credentials' => [
'key' => 'minioadmin',
'secret' => '<MINIO_ROOT_PASSWORD>',
],
]);
// Create bucket
$s3Client->createBucket(['Bucket' => 'my-bucket']);
// Upload file
$s3Client->putObject([
'Bucket' => 'my-bucket',
'Key' => 'file.txt',
'Body' => fopen('/path/to/file.txt', 'r'),
]);
// Download file
$result = $s3Client->getObject([
'Bucket' => 'my-bucket',
'Key' => 'file.txt',
]);
echo $result['Body'];
```
#### JavaScript/Node.js
```javascript
const AWS = require('aws-sdk');
const s3 = new AWS.S3({
endpoint: 'https://minio-api.michaelschiemer.de',
accessKeyId: 'minioadmin',
secretAccessKey: '<MINIO_ROOT_PASSWORD>',
s3ForcePathStyle: true,
signatureVersion: 'v4',
});
// Create bucket
s3.createBucket({ Bucket: 'my-bucket' }, (err, data) => {
if (err) console.error(err);
else console.log('Bucket created');
});
// Upload file
const params = {
Bucket: 'my-bucket',
Key: 'file.txt',
Body: require('fs').createReadStream('/path/to/file.txt'),
};
s3.upload(params, (err, data) => {
if (err) console.error(err);
else console.log('File uploaded:', data.Location);
});
```
#### Python (boto3)
```python
import boto3
from botocore.client import Config
s3_client = boto3.client(
's3',
endpoint_url='https://minio-api.michaelschiemer.de',
aws_access_key_id='minioadmin',
aws_secret_access_key='<MINIO_ROOT_PASSWORD>',
config=Config(signature_version='s3v4'),
region_name='us-east-1'
)
# Create bucket
s3_client.create_bucket(Bucket='my-bucket')
# Upload file
s3_client.upload_file('/path/to/file.txt', 'my-bucket', 'file.txt')
# Download file
s3_client.download_file('my-bucket', 'file.txt', '/path/to/downloaded.txt')
# List objects
response = s3_client.list_objects_v2(Bucket='my-bucket')
for obj in response.get('Contents', []):
print(obj['Key'])
```
## User Management
### Create Access Keys via Console
1. Login to Console: https://minio.michaelschiemer.de
2. Navigate to "Access Keys" → "Create Access Key"
3. Assign policies (read-only, read-write, admin)
4. Save Access Key and Secret Key (only shown once!)
### Create Access Keys via mc CLI
```bash
# Create new user
mc admin user add minio myuser mypassword
# Create access key for user
mc admin user svcacct add minio myuser --name my-access-key
# Output shows Access Key and Secret Key
```
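If you prefer to choose the credentials yourself rather than let Gitea/MinIO generate them, `mc admin user svcacct add` accepts `--access-key`/`--secret-key` (flag availability depends on your mc version). A sketch for generating a pair locally; the 20/40-character lengths mirror AWS conventions and are an assumption, not a MinIO requirement:

```python
import secrets
import string


def generate_keypair() -> tuple[str, str]:
    """Generate an access key (20 chars, A-Z0-9) and secret key (40 chars)."""
    access_alphabet = string.ascii_uppercase + string.digits
    secret_alphabet = string.ascii_letters + string.digits + "+/"
    access_key = "".join(secrets.choice(access_alphabet) for _ in range(20))
    secret_key = "".join(secrets.choice(secret_alphabet) for _ in range(40))
    return access_key, secret_key
```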
### Policy Management
```bash
# List policies
mc admin policy list minio
# Create custom policy (newer mc releases: `mc admin policy create`)
mc admin policy add minio readwrite-policy /path/to/policy.json
# Assign policy to user (newer mc releases: `mc admin policy attach ... --user`)
mc admin policy set minio readwrite-policy user=myuser
# Remove policy from user (newer mc releases: `mc admin policy detach ... --user`)
mc admin policy remove minio readwrite-policy user=myuser
```
**Example Policy** (`policy.json`):
```json
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:GetObject",
"s3:PutObject",
"s3:DeleteObject"
],
"Resource": ["arn:aws:s3:::my-bucket/*"]
},
{
"Effect": "Allow",
"Action": ["s3:ListBucket"],
"Resource": ["arn:aws:s3:::my-bucket"]
}
]
}
```
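The semantics of a policy document like the one above can be illustrated with a tiny evaluator. This is a sketch, not a full IAM engine: it handles only `Allow` statements and glob-style `*` wildcards, and ignores `Deny`, conditions, and principals:

```python
import fnmatch
import json


def policy_allows(policy: dict, action: str, resource: str) -> bool:
    """Return True if any Allow statement matches both the action and resource."""
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        actions = stmt.get("Action", [])
        resources = stmt.get("Resource", [])
        if any(fnmatch.fnmatch(action, a) for a in actions) and \
           any(fnmatch.fnmatch(resource, r) for r in resources):
            return True
    return False


# The example policy from above
policy = json.loads("""{
  "Version": "2012-10-17",
  "Statement": [
    {"Effect": "Allow",
     "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
     "Resource": ["arn:aws:s3:::my-bucket/*"]},
    {"Effect": "Allow",
     "Action": ["s3:ListBucket"],
     "Resource": ["arn:aws:s3:::my-bucket"]}
  ]
}""")
```

Note how object-level actions match the `my-bucket/*` resource, while `s3:ListBucket` must target the bucket ARN itself, which is why the policy needs both statements.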
## Integration with Application Stack
### Environment Variables
Add to `application/.env`:
```env
# MinIO Object Storage
MINIO_ENDPOINT=https://minio-api.michaelschiemer.de
MINIO_ACCESS_KEY=<access-key>
MINIO_SECRET_KEY=<secret-key>
MINIO_BUCKET=<bucket-name>
MINIO_USE_SSL=true
MINIO_REGION=us-east-1
```
### PHP Integration
```php
// Use AWS SDK or MinIO PHP SDK
use Aws\S3\S3Client;
$s3Client = new S3Client([
'version' => 'latest',
'region' => $_ENV['MINIO_REGION'],
'endpoint' => $_ENV['MINIO_ENDPOINT'],
'use_path_style_endpoint' => true,
'credentials' => [
'key' => $_ENV['MINIO_ACCESS_KEY'],
'secret' => $_ENV['MINIO_SECRET_KEY'],
],
]);
```
## Backup & Recovery
### Manual Backup
```bash
#!/bin/bash
# backup-minio.sh
BACKUP_DIR="/backups/minio"
DATE=$(date +%Y%m%d_%H%M%S)
mkdir -p $BACKUP_DIR
# Backup MinIO data
docker run --rm \
-v minio-data:/data \
-v $BACKUP_DIR:/backup \
alpine tar czf /backup/minio-data-$DATE.tar.gz -C /data .
echo "Backup completed: $BACKUP_DIR/minio-data-$DATE.tar.gz"
```
### Restore from Backup
```bash
# Stop MinIO
docker compose down
# Restore data
docker run --rm \
-v minio-data:/data \
-v /backups/minio:/backup \
alpine tar xzf /backup/minio-data-YYYYMMDD_HHMMSS.tar.gz -C /data
# Start MinIO
docker compose up -d
```
### Automated Backups
Add to crontab:
```bash
# Daily backup at 3 AM
0 3 * * * /path/to/backup-minio.sh
# Keep only last 30 days
0 4 * * * find /backups/minio -type f -mtime +30 -delete
```
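The `find -mtime +30 -delete` retention step above can also be expressed in Python, which makes it easier to log what was pruned. A sketch under the assumption that the backup directory contains only archive files:

```python
import os
import time


def prune_backups(directory: str, max_age_days: int = 30) -> list[str]:
    """Delete files older than max_age_days and return the removed paths."""
    cutoff = time.time() - max_age_days * 86400
    deleted = []
    for name in os.listdir(directory):
        path = os.path.join(directory, name)
        if os.path.isfile(path) and os.path.getmtime(path) < cutoff:
            os.remove(path)
            deleted.append(path)
    return deleted
```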
### Bucket-Level Backup (Recommended)
Use MinIO's built-in replication or external tools:
```bash
# Sync bucket to external storage
mc mirror minio/my-bucket /backup/my-bucket/
# Or sync to another MinIO instance
mc mirror minio/my-bucket remote-minio/my-bucket/
```
## Monitoring
### Health Checks
```bash
# Check MinIO health
docker compose ps
# API health endpoint
curl -f https://minio-api.michaelschiemer.de/minio/health/live
# Check storage usage
docker exec minio du -sh /data
```
### Logs
```bash
# View logs
docker compose logs -f
# Check for errors
docker compose logs minio | grep -i error
# Monitor API access
docker compose logs -f minio | grep "GET /"
```
### Storage Statistics
```bash
# Check volume size
docker volume inspect minio-data
# Check disk usage
docker system df -v | grep minio
# List buckets via mc
mc ls minio
```
### Metrics (via mc)
```bash
# Server info
mc admin info minio
# Service status
mc admin service status minio
# Trace operations
mc admin trace minio
```
## Performance Tuning
### MinIO Configuration
For high-traffic scenarios, edit `docker-compose.yml`. Note that the disk-cache variables below only apply to older MinIO releases; the cache subsystem was removed from current versions, so verify against your deployed release:
```yaml
environment:
# Increase concurrent operations
- MINIO_CACHE_DRIVES=/mnt/cache1,/mnt/cache2
- MINIO_CACHE_QUOTA=80
- MINIO_CACHE_AFTER=0
- MINIO_CACHE_WATERMARK_LOW=70
- MINIO_CACHE_WATERMARK_HIGH=90
```
### Storage Optimization
```bash
# Monitor storage growth
du -sh /var/lib/docker/volumes/minio-data/
# Enable compression (handled automatically by MinIO)
# Set bucket quotas via Console or mc
mc admin quota set minio/my-bucket --hard 100GB
```
## Troubleshooting
### Cannot Access Console
```bash
# Check service is running
docker compose ps
# Check Traefik routing
docker exec traefik cat /etc/traefik/traefik.yml
# Check network
docker network inspect traefik-public | grep minio
# Test from inside the container (the console port is not published on the host)
docker exec minio curl -I http://localhost:9001
```
### Authentication Failed
```bash
# Verify environment variables
docker exec minio env | grep MINIO_ROOT
# Check logs
docker compose logs minio | grep -i auth
# Reset root credentials: update MINIO_ROOT_USER/MINIO_ROOT_PASSWORD in .env,
# then recreate the container (docker compose up -d --force-recreate).
# Do NOT remove the data volume - that destroys all stored objects.
```
### SSL Certificate Issues
```bash
# Verify Traefik certificate
docker exec traefik cat /acme.json | grep minio
# Test SSL
openssl s_client -connect minio-api.michaelschiemer.de:443 \
-servername minio-api.michaelschiemer.de < /dev/null
```
### Storage Issues
```bash
# Check volume mount
docker exec minio df -h /data
# Check for corrupted data
docker exec minio find /data -type f -name "*.json" | head
# Check disk space
df -h /var/lib/docker/volumes/minio-data/
```
### API Connection Errors
```bash
# Verify endpoint URL
curl -I https://minio-api.michaelschiemer.de/minio/health/live
# Test with credentials (the S3 API needs SigV4 signing, so use mc rather than curl -u)
mc alias set minio https://minio-api.michaelschiemer.de minioadmin <password>
mc ls minio
# Check CORS settings (if needed for web apps)
mc admin config set minio api cors_allow_origin "https://yourdomain.com"
```
## Security
### Security Best Practices
1. **Strong Credentials**: Use strong passwords for root user and access keys
2. **Change Default Root User**: Don't use `minioadmin` in production
3. **SSL Only**: Always use HTTPS (enforced via Traefik)
4. **Access Key Rotation**: Regularly rotate access keys
5. **Policy-Based Access**: Use IAM policies to limit permissions
6. **Bucket Policies**: Configure bucket-level policies
7. **Audit Logging**: Enable audit logging for compliance
8. **Encryption**: Enable encryption at rest and in transit
### Enable Audit Logging
```bash
# Configure audit logging
mc admin config set minio audit_webhook \
endpoint=https://log-service.example.com/webhook \
auth_token=<token>
```
### Enable Encryption
Server-side encryption at rest requires a KES server backed by a KMS; the `mc admin config` syntax for this varies between releases, so configuring via environment variables is more portable (verify the names against your MinIO version):
```bash
# Example KES-backed SSE configuration via environment variables
# MINIO_KMS_KES_ENDPOINT=https://kes.example.com:7373
# MINIO_KMS_KES_KEY_NAME=<key-name>
# MINIO_KMS_KES_CERT_FILE=/certs/client.crt
# MINIO_KMS_KES_KEY_FILE=/certs/client.key
```
### Update Stack
```bash
# Pull latest images
docker compose pull
# Recreate containers
docker compose up -d
# Verify
docker compose ps
```
### Security Headers
Security headers are applied via Traefik's `default-chain@file` middleware:
- HSTS
- Content-Type Nosniff
- XSS Protection
- Frame Deny
## Additional Resources
- **MinIO Documentation**: https://min.io/docs/
- **S3 API Compatibility**: https://min.io/docs/minio/linux/reference/minio-mc/mc.html
- **MinIO Client (mc)**: https://min.io/docs/minio/linux/reference/minio-mc.html
- **MinIO Erasure Coding**: https://min.io/docs/minio/linux/operations/concepts/erasure-coding.html
- **MinIO Security**: https://min.io/docs/minio/linux/operations/security.html

View File

@@ -0,0 +1,50 @@
services:
minio:
image: minio/minio:latest
container_name: minio
restart: unless-stopped
networks:
- traefik-public
environment:
- TZ=Europe/Berlin
- MINIO_ROOT_USER=${MINIO_ROOT_USER:-minioadmin}
- MINIO_ROOT_PASSWORD=${MINIO_ROOT_PASSWORD}
command: server /data --console-address ":9001"
volumes:
- minio-data:/data
- /etc/timezone:/etc/timezone:ro
- /etc/localtime:/etc/localtime:ro
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:9000/minio/health/live"]
interval: 30s
timeout: 10s
retries: 3
start_period: 10s
labels:
- "traefik.enable=true"
# API Router (S3-compatible endpoint)
- "traefik.http.routers.minio-api.rule=Host(`${MINIO_API_DOMAIN:-minio-api.michaelschiemer.de}`)"
- "traefik.http.routers.minio-api.entrypoints=websecure"
- "traefik.http.routers.minio-api.tls=true"
- "traefik.http.routers.minio-api.tls.certresolver=letsencrypt"
- "traefik.http.routers.minio-api.service=minio-api"
- "traefik.http.routers.minio-api.middlewares=default-chain@file"
- "traefik.http.services.minio-api.loadbalancer.server.port=9000"
# Console Router (Web UI)
- "traefik.http.routers.minio-console.rule=Host(`${MINIO_CONSOLE_DOMAIN:-minio.michaelschiemer.de}`)"
- "traefik.http.routers.minio-console.entrypoints=websecure"
- "traefik.http.routers.minio-console.tls=true"
- "traefik.http.routers.minio-console.tls.certresolver=letsencrypt"
- "traefik.http.routers.minio-console.service=minio-console"
- "traefik.http.routers.minio-console.middlewares=default-chain@file"
- "traefik.http.services.minio-console.loadbalancer.server.port=9001"
volumes:
minio-data:
name: minio-data
networks:
traefik-public:
external: true

View File

@@ -1,5 +1,3 @@
version: '3.8'
services:
# PostgreSQL Database
postgres:

View File

@@ -1,5 +1,3 @@
version: '3.8'
services:
registry:
image: registry:2.8
@@ -8,7 +6,7 @@ services:
networks:
- traefik-public
ports:
- "127.0.0.1:5000:5000"
- "0.0.0.0:5000:5000"
environment:
- TZ=Europe/Berlin
- REGISTRY_STORAGE_DELETE_ENABLED=true
@@ -39,7 +37,7 @@ services:
# Middleware
- "traefik.http.routers.registry.middlewares=default-chain@file"
healthcheck:
test: ["CMD", "wget", "--spider", "-q", "http://localhost:5000/v2/"]
test: ["CMD-SHELL", "wget --spider -q --header='Authorization: Basic YWRtaW46cmVnaXN0cnktc2VjdXJlLXBhc3N3b3JkLTIwMjU=' http://localhost:5000/v2/ || exit 1"]
interval: 30s
timeout: 10s
retries: 3

View File

@@ -1,5 +1,3 @@
version: '3.8'
services:
traefik:
image: traefik:v3.0

View File

@@ -1,466 +0,0 @@
# ==============================================================================
# Production Docker Swarm Stack with Traefik Load Balancer
# ==============================================================================
# Usage: docker stack deploy -c docker-compose.prod.yml framework
#
# This is a STANDALONE file - no merging with docker-compose.yml required.
# All services are production-ready with Swarm deployment configurations.
# ==============================================================================
version: '3.8'
# ==============================================================================
# Services
# ==============================================================================
services:
# ----------------------------------------------------------------------------
# Traefik - Reverse Proxy & Load Balancer
# ----------------------------------------------------------------------------
traefik:
image: traefik:v2.10
command:
# API & Dashboard
- "--api.dashboard=true"
- "--api.insecure=true"
# Docker Swarm Provider
- "--providers.docker=true"
- "--providers.docker.swarmMode=true"
- "--providers.docker.exposedByDefault=false"
- "--providers.docker.network=framework_traefik-public"
# Entrypoints
- "--entrypoints.web.address=:80"
- "--entrypoints.websecure.address=:443"
# HTTP → HTTPS Redirect
- "--entrypoints.web.http.redirections.entryPoint.to=websecure"
- "--entrypoints.web.http.redirections.entryPoint.scheme=https"
# Access Logs
- "--accesslog=true"
- "--accesslog.filepath=/var/log/traefik/access.log"
# Metrics
- "--metrics.prometheus=true"
ports:
- target: 80
published: 80
mode: host
- target: 443
published: 443
mode: host
- target: 8080
published: 8080
protocol: tcp
mode: host
volumes:
- /var/run/docker.sock:/var/run/docker.sock:ro
- ./ssl:/ssl:ro
- traefik-logs:/var/log/traefik
networks:
- traefik-public
deploy:
placement:
constraints:
- node.role == manager
labels:
# Dashboard
- "traefik.enable=true"
- "traefik.http.routers.traefik.rule=Host(`traefik.localhost`)"
- "traefik.http.routers.traefik.service=api@internal"
- "traefik.http.routers.traefik.entrypoints=websecure"
- "traefik.http.routers.traefik.tls=true"
- "traefik.http.services.traefik.loadbalancer.server.port=8080"
# ----------------------------------------------------------------------------
# Web - PHP Application (Nginx + PHP-FPM)
# ----------------------------------------------------------------------------
web:
image: 94.16.110.151:5000/framework:latest
environment:
# Application
- APP_ENV=production
- APP_DEBUG=false
- APP_NAME=Michael Schiemer
- APP_TIMEZONE=Europe/Berlin
- APP_LOCALE=de
# Database
- DB_DRIVER=pgsql
- DB_HOST=db
- DB_PORT=5432
- DB_DATABASE=framework_prod
- DB_USERNAME=postgres
- DB_CHARSET=utf8
# Redis (Sessions & Cache)
- REDIS_HOST=redis
- REDIS_PORT=6379
- SESSION_DRIVER=redis
- CACHE_DRIVER=redis
# Security
- SECURITY_ALLOWED_HOSTS=localhost,michaelschiemer.de,www.michaelschiemer.de
- SECURITY_RATE_LIMIT_PER_MINUTE=60
- SESSION_LIFETIME=1800
- FORCE_HTTPS=true
# Performance
- OPCACHE_ENABLED=true
# Analytics
- ANALYTICS_ENABLED=true
- ANALYTICS_TRACK_PAGE_VIEWS=true
secrets:
- db_password
- app_key
- vault_encryption_key
- shopify_webhook_secret
- rapidmail_password
volumes:
- storage-logs:/var/www/html/storage/logs:rw
- storage-uploads:/var/www/html/storage/uploads:rw
networks:
- traefik-public
- backend
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost/health"]
interval: 30s
timeout: 10s
retries: 3
start_period: 40s
deploy:
replicas: 3
update_config:
parallelism: 1
delay: 10s
failure_action: rollback
order: start-first
rollback_config:
parallelism: 1
delay: 5s
restart_policy:
condition: on-failure
delay: 5s
max_attempts: 3
labels:
# Traefik Configuration
- "traefik.enable=true"
# HTTP Router
- "traefik.http.routers.web.rule=Host(`michaelschiemer.de`) || Host(`www.michaelschiemer.de`)"
- "traefik.http.routers.web.entrypoints=websecure"
- "traefik.http.routers.web.tls=true"
# Load Balancer
- "traefik.http.services.web.loadbalancer.server.port=80"
- "traefik.http.services.web.loadbalancer.sticky.cookie=true"
- "traefik.http.services.web.loadbalancer.sticky.cookie.name=PHPSESSID"
- "traefik.http.services.web.loadbalancer.sticky.cookie.secure=true"
- "traefik.http.services.web.loadbalancer.sticky.cookie.httpOnly=true"
# Health Check
- "traefik.http.services.web.loadbalancer.healthcheck.path=/health"
- "traefik.http.services.web.loadbalancer.healthcheck.interval=30s"
- "traefik.http.services.web.loadbalancer.healthcheck.timeout=5s"
# ----------------------------------------------------------------------------
# Database - PostgreSQL
# ----------------------------------------------------------------------------
db:
image: postgres:16-alpine
environment:
- POSTGRES_DB=framework_prod
- POSTGRES_USER=postgres
- POSTGRES_PASSWORD_FILE=/run/secrets/db_password
- POSTGRES_INITDB_ARGS=-E UTF8 --locale=C
- PGDATA=/var/lib/postgresql/data/pgdata
secrets:
- db_password
volumes:
- db-data:/var/lib/postgresql/data
networks:
- backend
healthcheck:
test: ["CMD-SHELL", "pg_isready -U postgres -d framework_prod"]
interval: 10s
timeout: 5s
retries: 5
start_period: 30s
deploy:
placement:
constraints:
- node.role == manager
restart_policy:
condition: on-failure
delay: 5s
max_attempts: 3
resources:
limits:
memory: 1G
cpus: '1.0'
reservations:
memory: 512M
cpus: '0.5'
# ----------------------------------------------------------------------------
# Redis - Cache & Sessions
# ----------------------------------------------------------------------------
redis:
image: redis:7-alpine
command: >
redis-server
--appendonly yes
--appendfsync everysec
--maxmemory 256mb
--maxmemory-policy allkeys-lru
volumes:
- redis-data:/data
networks:
- backend
healthcheck:
test: ["CMD", "redis-cli", "ping"]
interval: 10s
timeout: 3s
retries: 3
start_period: 10s
deploy:
placement:
constraints:
- node.role == manager
restart_policy:
condition: on-failure
delay: 5s
resources:
limits:
memory: 256M
cpus: '0.5'
reservations:
memory: 128M
cpus: '0.25'
# ----------------------------------------------------------------------------
# Queue Worker - Background Jobs
# ----------------------------------------------------------------------------
queue-worker:
image: 94.16.110.151:5000/framework:latest
command: ["php", "/var/www/html/worker.php"]
environment:
- APP_ENV=production
- APP_DEBUG=false
- DB_HOST=db
- DB_DATABASE=framework_prod
- REDIS_HOST=redis
- WORKER_DEBUG=false
- WORKER_SLEEP_TIME=100000
- WORKER_MAX_JOBS=1000
secrets:
- db_password
- app_key
volumes:
- storage-logs:/var/www/html/storage/logs:rw
- storage-queue:/var/www/html/storage/queue:rw
networks:
- backend
deploy:
replicas: 2
restart_policy:
condition: on-failure
delay: 5s
max_attempts: 3
resources:
limits:
memory: 512M
reservations:
memory: 256M
stop_grace_period: 30s
# ----------------------------------------------------------------------------
# Prometheus - Metrics Collection
# ----------------------------------------------------------------------------
prometheus:
image: prom/prometheus:latest
command:
- '--config.file=/etc/prometheus/prometheus.yml'
- '--storage.tsdb.path=/prometheus'
- '--web.console.libraries=/usr/share/prometheus/console_libraries'
- '--web.console.templates=/usr/share/prometheus/consoles'
- '--storage.tsdb.retention.time=30d'
volumes:
- prometheus-data:/prometheus
- ./prometheus/prometheus.yml:/etc/prometheus/prometheus.yml:ro
networks:
- traefik-public
- backend
deploy:
placement:
constraints:
- node.role == manager
restart_policy:
condition: on-failure
resources:
limits:
memory: 512M
cpus: '0.5'
reservations:
memory: 256M
cpus: '0.25'
labels:
- "traefik.enable=true"
- "traefik.http.routers.prometheus.rule=Host(`prometheus.michaelschiemer.de`)"
- "traefik.http.routers.prometheus.entrypoints=websecure"
- "traefik.http.routers.prometheus.tls=true"
- "traefik.http.services.prometheus.loadbalancer.server.port=9090"
# ----------------------------------------------------------------------------
# Grafana - Metrics Visualization
# ----------------------------------------------------------------------------
grafana:
image: grafana/grafana:latest
environment:
- GF_SECURITY_ADMIN_PASSWORD__FILE=/run/secrets/grafana_admin_password
- GF_USERS_ALLOW_SIGN_UP=false
- GF_SERVER_ROOT_URL=https://grafana.michaelschiemer.de
- GF_INSTALL_PLUGINS=
secrets:
- grafana_admin_password
volumes:
- grafana-data:/var/lib/grafana
networks:
- traefik-public
- backend
deploy:
placement:
constraints:
- node.role == manager
restart_policy:
condition: on-failure
resources:
limits:
memory: 512M
cpus: '0.5'
reservations:
memory: 256M
cpus: '0.25'
labels:
- "traefik.enable=true"
- "traefik.http.routers.grafana.rule=Host(`grafana.michaelschiemer.de`)"
- "traefik.http.routers.grafana.entrypoints=websecure"
- "traefik.http.routers.grafana.tls=true"
- "traefik.http.services.grafana.loadbalancer.server.port=3000"
# ----------------------------------------------------------------------------
# Portainer - Container Management
# ----------------------------------------------------------------------------
portainer:
image: portainer/portainer-ce:latest
command: -H unix:///var/run/docker.sock
volumes:
- /var/run/docker.sock:/var/run/docker.sock
- portainer-data:/data
networks:
- traefik-public
deploy:
placement:
constraints:
- node.role == manager
restart_policy:
condition: on-failure
resources:
limits:
memory: 256M
cpus: '0.25'
reservations:
memory: 128M
cpus: '0.1'
labels:
- "traefik.enable=true"
- "traefik.http.routers.portainer.rule=Host(`portainer.michaelschiemer.de`)"
- "traefik.http.routers.portainer.entrypoints=websecure"
- "traefik.http.routers.portainer.tls=true"
- "traefik.http.services.portainer.loadbalancer.server.port=9000"
# ==============================================================================
# Networks
# ==============================================================================
networks:
traefik-public:
driver: overlay
attachable: true
backend:
driver: overlay
internal: true
# ==============================================================================
# Volumes
# ==============================================================================
volumes:
traefik-logs:
storage-logs:
storage-uploads:
storage-queue:
db-data:
redis-data:
prometheus-data:
grafana-data:
portainer-data:
# ==============================================================================
# Secrets (to be created before deployment)
# ==============================================================================
secrets:
db_password:
external: true
app_key:
external: true
vault_encryption_key:
external: true
shopify_webhook_secret:
external: true
rapidmail_password:
external: true
grafana_admin_password:
external: true
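Since every secret above is declared `external: true`, each must exist in the Swarm before `docker stack deploy` can start the stack. A minimal sketch, to be run on a manager node (the generated values are illustrative; keep copies of each in a password manager):

```shell
# Register the external secrets this stack expects (Swarm manager only).
for name in db_password app_key vault_encryption_key \
            shopify_webhook_secret rapidmail_password grafana_admin_password; do
  openssl rand -base64 32 | tr -d '\n' | docker secret create "$name" -
done
docker secret ls   # verify all six secrets exist
```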



@@ -1,183 +0,0 @@
# Production Overrides for docker-compose.yml
# Usage: docker compose -f docker-compose.yml -f docker-compose.prod.yml up -d
services:
# ==============================================================================
# Web - Production Overrides
# ==============================================================================
web:
image: 94.16.110.151:5000/framework:latest
pull_policy: always # Always pull from registry, never build
# entrypoint.sh from image is used (starts PHP-FPM + nginx)
user: root # Run as root for nginx/php-fpm management
ports:
- "8888:80"
- "8443:443"
volumes:
# Production: Only mount persistent data subdirectories
# Source code is embedded in Docker image
- ./storage/logs:/var/www/html/storage/logs:rw
- ./storage/uploads:/var/www/html/storage/uploads:rw
- ./ssl:/var/www/ssl:ro
# Named volumes for cache and other runtime data
- storage-cache:/var/www/html/storage/cache:rw
- storage-queue:/var/www/html/storage/queue:rw
- var-data:/var/www/html/var:rw
networks:
- backend
environment:
- APP_ENV=production
healthcheck:
# Use curl (available in production image)
test: ["CMD", "curl", "-f", "http://localhost/health"]
interval: 30s
timeout: 10s
retries: 3
start_period: 40s
labels:
com.centurylinklabs.watchtower.enable: "true"
# ==============================================================================
# PHP - Production Overrides
# ==============================================================================
php:
image: 94.16.110.151:5000/framework:latest
user: root # Override base user - production image runs supervisor as root
volumes:
# Minimal volumes for production
- ./storage/logs:/var/www/html/storage/logs:rw
- ./storage/uploads:/var/www/html/storage/uploads:rw
- storage-cache:/var/www/html/storage/cache:rw
- storage-queue:/var/www/html/storage/queue:rw
- var-data:/var/www/html/var:rw
environment:
APP_ENV: production
APP_DEBUG: "false"
XDEBUG_MODE: "off" # No Xdebug in production
labels:
com.centurylinklabs.watchtower.enable: "true"
# ==============================================================================
# Database - Production Config
# ==============================================================================
db:
volumes:
# Override volumes completely to exclude config files
- db_data:/var/lib/postgresql/data
# ==============================================================================
# Redis - Production Config
# ==============================================================================
redis:
volumes:
- ./docker/redis/redis-secure.conf:/usr/local/etc/redis/redis.conf:ro
- redis_data:/data
# ==============================================================================
# Queue Worker - Production
# ==============================================================================
queue-worker:
image: 94.16.110.151:5000/framework:latest
user: root # Override base user - production image runs supervisor as root
environment:
- APP_ENV=production
- WORKER_DEBUG=false
labels:
com.centurylinklabs.watchtower.enable: "true"
# ==============================================================================
# Watchtower - Auto-Update
# ==============================================================================
watchtower:
image: containrrr/watchtower:latest
container_name: watchtower
restart: unless-stopped
volumes:
- /var/run/docker.sock:/var/run/docker.sock
environment:
WATCHTOWER_CLEANUP: "true"
WATCHTOWER_POLL_INTERVAL: 300 # Check every 5 minutes
WATCHTOWER_LABEL_ENABLE: "true"
WATCHTOWER_INCLUDE_STOPPED: "false"
WATCHTOWER_REVIVE_STOPPED: "false"
WATCHTOWER_NOTIFICATIONS: "shoutrrr"
WATCHTOWER_NOTIFICATION_URL: "${WATCHTOWER_NOTIFICATION_URL:-}"
command: --interval 300 --cleanup --label-enable
networks:
- backend
# ==============================================================================
# Prometheus - Monitoring (VPN-only)
# ==============================================================================
prometheus:
image: prom/prometheus:latest
container_name: prometheus
restart: unless-stopped
ports:
- "10.8.0.1:9090:9090" # VPN-only
volumes:
- ./monitoring/prometheus/prometheus.yml:/etc/prometheus/prometheus.yml:ro
- prometheus-data:/prometheus
command:
- '--config.file=/etc/prometheus/prometheus.yml'
- '--storage.tsdb.path=/prometheus'
- '--web.console.libraries=/etc/prometheus/console_libraries'
- '--web.console.templates=/etc/prometheus/consoles'
networks:
- backend
# ==============================================================================
# Grafana - Visualization (VPN-only)
# ==============================================================================
grafana:
image: grafana/grafana:latest
container_name: grafana
restart: unless-stopped
ports:
- "10.8.0.1:3000:3000" # VPN-only
environment:
GF_SECURITY_ADMIN_USER: ${GRAFANA_USER:-admin}
GF_SECURITY_ADMIN_PASSWORD: ${GRAFANA_PASSWORD}
GF_INSTALL_PLUGINS: ""
volumes:
- grafana-data:/var/lib/grafana
- ./monitoring/grafana/provisioning:/etc/grafana/provisioning:ro
depends_on:
- prometheus
networks:
- backend
# ==============================================================================
# Portainer - Container Management (VPN-only)
# ==============================================================================
portainer:
image: portainer/portainer-ce:latest
container_name: portainer
restart: unless-stopped
ports:
- "10.8.0.1:9443:9443" # VPN-only HTTPS
volumes:
- /var/run/docker.sock:/var/run/docker.sock
- portainer-data:/data
networks:
- backend
# ==============================================================================
# Production Networks
# ==============================================================================
networks:
backend:
driver: bridge
# ==============================================================================
# Production Volumes
# ==============================================================================
volumes:
storage-cache:
storage-queue:
var-data:
db_data:
redis_data:
prometheus-data:
grafana-data:
portainer-data:


@@ -10,8 +10,6 @@
# - Production PostgreSQL configuration
# - Certbot for SSL certificates
version: '3.8'
services:
web:
# Production restart policy
@@ -211,8 +209,7 @@ services:
- cache
queue-worker:
# Use same image as php service (has application code copied)
image: framework-production-php
# Use same build as php service (has application code copied)
# Production restart policy
restart: always


@@ -248,6 +248,44 @@ services:
reservations:
memory: 512M
minio:
container_name: minio
image: minio/minio:latest
restart: ${RESTART_POLICY:-unless-stopped}
environment:
- TZ=Europe/Berlin
- MINIO_ROOT_USER=${MINIO_ROOT_USER:-minioadmin}
- MINIO_ROOT_PASSWORD=${MINIO_ROOT_PASSWORD:-minioadmin}
command: server /data --console-address ":9001"
ports:
- "${MINIO_API_PORT:-9000}:9000"
- "${MINIO_CONSOLE_PORT:-9001}:9001"
volumes:
- minio_data:/data
- /etc/timezone:/etc/timezone:ro
- /etc/localtime:/etc/localtime:ro
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:9000/minio/health/live"]
interval: 30s
timeout: 10s
retries: 3
start_period: 10s
networks:
- backend
logging:
driver: "${LOG_DRIVER:-local}"
options:
max-size: "${LOG_MAX_SIZE:-5m}"
max-file: "${LOG_MAX_FILE:-2}"
deploy:
resources:
limits:
memory: ${MINIO_MEMORY_LIMIT:-512M}
cpus: ${MINIO_CPU_LIMIT:-0.5}
reservations:
memory: ${MINIO_MEMORY_RESERVATION:-256M}
cpus: ${MINIO_CPU_RESERVATION:-0.25}
# websocket:
# build:
# context: .
@@ -285,3 +323,4 @@ volumes:
worker-logs:
worker-queue:
worker-storage: # Complete separate storage for worker with correct permissions
minio_data: # MinIO object storage data


@@ -1,48 +0,0 @@
# Optimization Suggestions for the Docker Compose Environment
## ToDo List
- [ ] **Secure database passwords & secrets**
  - Do not store database passwords in plain text in the YAML; use the `secrets` mechanism instead.
  - Use `.env` values for databases instead of static entries.
  - Example: set `MYSQL_ROOT_PASSWORD_FILE` and `MYSQL_PASSWORD_FILE` and wire in the secrets.
- [ ] **Improve performance & caching**
  - Set up `cache_from` and `cache_to` in the build process (BuildKit).
  - Use a dedicated volume for the Composer cache in the PHP service.
  - Declare the nginx cache as its own volume.
  - Exclude the vendor directories from mounts (or handle them separately) so local changes do not defeat build optimizations.
- [ ] **Optimize networking and bind mounts**
  - For nginx, mount only the public directory (`public/`), not the whole project directory.
  - Explicitly exclude directories that are not needed (e.g. `vendor/`).
  - Define health checks and startup conditions consistently.
- [ ] **Pin image versions**
  - Avoid `latest` images; specify a fixed version wherever possible (e.g. `mariadb:11.3` instead of `mariadb:latest`).
  - This also applies to Redis, PHP, and the other services.
- [ ] **Set resource limits**
  - Set `deploy.resources` for memory and CPU on all services, not just the worker.
- [ ] **Security best practices**
  - Make non-production ports (e.g. for development) configurable via `.env` and bind them to localhost only.
  - Use fixed network ranges and dedicated networks for sensitive communication (e.g. backend, cache).
- [ ] **Use multi-stage builds in Dockerfiles**
  - Keep the PHP and worker images as small as possible via multi-stage builds (e.g. `FROM php:X-cli AS base`, then a production image).
- [ ] **Separate environment configuration for dev/prod**
  - Create a `docker-compose.override.yml` for development with a full source mount and debug configuration.
  - For production: no source mounts, no debug variables, optimized settings.
- [ ] **Enable log rotation**
  - Set the logging driver to `json-file` and configure size/rotation options.
- [ ] **Monitoring & health checks**
  - Add sensible health checks for all services.
  - (Optional) Add monitoring and/or alerting.
---
**Tip:** The points above can be implemented step by step and checked off per optimized area.
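The first ToDo item can be sketched as a compose fragment. The `_FILE` environment variables are the standard MariaDB/MySQL image mechanism for file-based secrets; the service name and file paths below are illustrative:

```yaml
services:
  db:
    image: mariadb:11.3        # pinned version, per the ToDo above
    environment:
      MYSQL_ROOT_PASSWORD_FILE: /run/secrets/mysql_root_password
      MYSQL_PASSWORD_FILE: /run/secrets/mysql_password
    secrets:
      - mysql_root_password
      - mysql_password

secrets:
  mysql_root_password:
    file: ./secrets/mysql_root_password.txt
  mysql_password:
    file: ./secrets/mysql_password.txt
```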


@@ -22,13 +22,80 @@ load_secret "APP_KEY"
load_secret "VAULT_ENCRYPTION_KEY"
load_secret "SHOPIFY_WEBHOOK_SECRET"
load_secret "RAPIDMAIL_PASSWORD"
load_secret "GIT_TOKEN"
echo "✅ All secrets loaded"
# Git Clone/Pull functionality
if [ -n "$GIT_REPOSITORY_URL" ]; then
echo ""
echo "📥 Cloning/Pulling code from Git repository..."
GIT_BRANCH="${GIT_BRANCH:-main}"
GIT_TARGET_DIR="/var/www/html"
# Setup Git credentials if provided
if [ -n "$GIT_TOKEN" ]; then
# Use token for HTTPS authentication
GIT_URL_WITH_AUTH=$(echo "$GIT_REPOSITORY_URL" | sed "s|https://|https://${GIT_TOKEN}@|")
elif [ -n "$GIT_USERNAME" ] && [ -n "$GIT_PASSWORD" ]; then
GIT_URL_WITH_AUTH=$(echo "$GIT_REPOSITORY_URL" | sed "s|https://|https://${GIT_USERNAME}:${GIT_PASSWORD}@|")
else
GIT_URL_WITH_AUTH="$GIT_REPOSITORY_URL"
fi
# Clone or pull repository
if [ ! -d "$GIT_TARGET_DIR/.git" ]; then
echo "📥 Cloning repository from $GIT_REPOSITORY_URL (branch: $GIT_BRANCH)..."
# Remove existing files if they exist (from image build)
if [ "$(ls -A $GIT_TARGET_DIR 2>/dev/null)" ]; then
echo "🗑️ Cleaning existing files..."
rm -rf "$GIT_TARGET_DIR"/* "$GIT_TARGET_DIR"/.* 2>/dev/null || true
fi
# Clone repository
git clone --branch "$GIT_BRANCH" --depth 1 "$GIT_URL_WITH_AUTH" "$GIT_TARGET_DIR" || {
echo "❌ Git clone failed. Falling back to image contents."
}
else
echo "🔄 Pulling latest changes from $GIT_BRANCH..."
cd "$GIT_TARGET_DIR"
# Fetch and reset to latest
git fetch origin "$GIT_BRANCH" || {
echo "⚠️ Git fetch failed. Using existing code."
}
git reset --hard "origin/$GIT_BRANCH" || {
echo "⚠️ Git reset failed. Using existing code."
}
git clean -fd || true
fi
# Install/update dependencies if composer.json exists
if [ -f "$GIT_TARGET_DIR/composer.json" ]; then
echo "📦 Installing/updating Composer dependencies..."
cd "$GIT_TARGET_DIR"
composer install --no-dev --optimize-autoloader --no-interaction --no-scripts || {
echo "⚠️ Composer install failed. Continuing..."
}
# Run composer scripts if needed
composer dump-autoload --optimize --classmap-authoritative || true
fi
echo "✅ Git sync completed"
else
echo ""
echo "ℹ️  GIT_REPOSITORY_URL not set, using code from image"
fi
echo ""
echo "📊 Environment variables:"
env | grep -E "DB_|APP_" | grep -v "PASSWORD\|KEY\|SECRET" || true
env | grep -E "DB_|APP_" | grep -v "PASSWORD|KEY|SECRET" || true
# Start PHP-FPM in background (inherits all environment variables)
echo ""
echo "🚀 Starting PHP-FPM..."
php-fpm &


@@ -1,5 +1,15 @@
# Ansible-Based Deployment
⚠️ **IMPORTANT:** This documentation is outdated.
**For current Ansible deployment documentation, see:**
- **[deployment/ansible/README.md](../../deployment/ansible/README.md)** - Current Ansible documentation
- **[deployment/DEPLOYMENT_COMMANDS.md](../../deployment/DEPLOYMENT_COMMANDS.md)** - Command reference
---
**Historical documentation (outdated):**
Advanced deployment with Ansible for multi-server orchestration and infrastructure as code.
## Overview


@@ -1,73 +0,0 @@
# 🚀 Production Deployment Guide
## Quick Deployment Workflow
### 1. Environment Setup (CRITICAL)
```bash
# Copy the .env template
cp .env.production .env
# Set ALL CHANGE_ME values:
nano .env
```
**IMPORTANT:** The following values MUST be set:
- `DB_PASSWORD` - Strong database password
- `SHOPIFY_WEBHOOK_SECRET` - Only if Shopify is used
- `RAPIDMAIL_USERNAME/PASSWORD` - Only if RapidMail is used
### 2. Database Setup
```bash
# 1. Create the database
mysql -u root -p
CREATE DATABASE production_db CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;
CREATE USER 'production_user'@'localhost' IDENTIFIED BY 'YOUR_PASSWORD';
GRANT ALL PRIVILEGES ON production_db.* TO 'production_user'@'localhost';
FLUSH PRIVILEGES;
EXIT;
# 2. Run the migration
mysql -u production_user -p production_db < migrations/2024_01_01_create_meta_entries_table.sql
```
### 3. Assets Build (if a frontend is used)
```bash
npm install
npm run build
```
### 4. Basic Health Check
```bash
# Start the server and test
php -S localhost:8000 -t public/
curl http://localhost:8000/
```
## Security Checklist ✅
- [x] No hardcoded secrets in the code
- [x] Strong database passwords
- [x] Production .env template created
- [x] Environment-based configuration
## Next Steps (Optional)
1. **SSL Setup** - Let's Encrypt or your own certificates
2. **Webserver Config** - nginx/Apache configuration
3. **Process Manager** - PM2, systemd, or supervisor
4. **Monitoring** - Log aggregation and error tracking
5. **Backup Strategy** - Automated DB backups
## Rollback Strategy
If problems occur:
```bash
# 1. Activate the previous version
git checkout previous-version
# 2. Rebuild assets (if needed)
npm run build
# 3. Clear cache
# (depends on setup)
```


@@ -1,374 +0,0 @@
# Production Deployment Checklist
**Print this and check off items as you complete them.**
---
## Pre-Deployment Checklist
### Infrastructure
- [ ] Server meets requirements (Ubuntu 22.04+, 4GB RAM, 40GB disk)
- [ ] Domain name configured and pointing to server IP
- [ ] DNS propagation verified (nslookup yourdomain.com)
- [ ] Firewall rules configured (ports 22, 80, 443 open)
- [ ] SSH access to server confirmed
- [ ] Root or sudo access verified
### Security
- [ ] SSH key pair generated
- [ ] SSH key added to server
- [ ] Vault encryption key generated
- [ ] Vault key stored in password manager
- [ ] Database passwords generated (32+ characters)
- [ ] JWT secrets generated (64+ characters)
- [ ] Admin allowed IPs list prepared
- [ ] SSL certificate email address ready
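The password and key items above can be generated locally with OpenSSL; a minimal sketch, where the lengths match the checklist's 32+/64+ character requirements (where and how each key is consumed is framework-specific):

```shell
# Generate the secrets the checklist asks for; store each in a password manager.
DB_PASSWORD=$(openssl rand -base64 24)   # 32-character password
JWT_SECRET=$(openssl rand -hex 32)       # 64-character hex secret
VAULT_KEY=$(openssl rand -base64 32)     # vault encryption key
echo "DB_PASSWORD: ${#DB_PASSWORD} chars"
echo "JWT_SECRET: ${#JWT_SECRET} chars"
```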
### Code
- [ ] Application repository accessible
- [ ] Production branch exists and tested
- [ ] All tests passing locally
- [ ] Database migrations reviewed
- [ ] .env.example up to date
- [ ] Dependencies reviewed (composer.json, package.json)
---
## Deployment Steps Checklist
### Step 1: Server Setup
- [ ] SSH into server
- [ ] System updated (apt update && upgrade)
- [ ] Docker installed
- [ ] Docker Compose installed
- [ ] Certbot installed
- [ ] Application user created
- [ ] Application user added to docker group
- [ ] Directory structure created (/var/www/app, /var/log/app, /opt/vault)
### Step 2: SSL Certificate
- [ ] Webroot directory created (/var/www/certbot)
- [ ] Certbot certificate obtained
- [ ] Certificate files verified (fullchain.pem, privkey.pem)
- [ ] Certificate expiration date checked (>30 days)
- [ ] Auto-renewal tested (certbot renew --dry-run)
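The steps above map directly onto certbot's webroot plugin; domain and email below are placeholders:

```shell
# Obtain a certificate via the webroot plugin
sudo mkdir -p /var/www/certbot
sudo certbot certonly --webroot -w /var/www/certbot \
  -d example.com --email admin@example.com --agree-tos --non-interactive

# Verify the certificate files and test auto-renewal
sudo ls -l /etc/letsencrypt/live/example.com/
sudo certbot renew --dry-run
```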
### Step 3: Application Code
- [ ] Repository cloned to /home/appuser/app
- [ ] Production branch checked out
- [ ] Git configured (user.name, user.email)
- [ ] File permissions set correctly (chown -R appuser:appuser)
### Step 4: Environment Configuration
- [ ] .env.production created from .env.example
- [ ] APP_ENV set to "production"
- [ ] APP_DEBUG set to "false"
- [ ] APP_URL configured with domain
- [ ] Database credentials configured
- [ ] VAULT_ENCRYPTION_KEY added
- [ ] LOG_PATH configured
- [ ] ADMIN_ALLOWED_IPS configured
- [ ] All required environment variables set
- [ ] Sensitive values NOT committed to git
### Step 5: Docker Containers
- [ ] docker-compose.production.yml reviewed
- [ ] Containers built (docker compose build)
- [ ] Containers started (docker compose up -d)
- [ ] All containers running (docker compose ps)
- [ ] Container logs checked for errors
- [ ] Container networking verified
### Step 6: Database
- [ ] Database container healthy
- [ ] Database migrations applied (php console.php db:migrate)
- [ ] Migration status verified (php console.php db:status)
- [ ] Database backup created
- [ ] Database connection tested
### Step 7: Health Checks
- [ ] Health endpoint accessible (curl http://localhost/health/summary)
- [ ] All health checks passing (overall_healthy: true)
- [ ] Database health check: healthy
- [ ] Cache health check: healthy
- [ ] Queue health check: healthy
- [ ] Filesystem health check: healthy
- [ ] SSL health check: healthy
- [ ] Detailed health endpoint tested
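A quick way to tick off the items above from the server (jq is assumed to be installed; the `overall_healthy` field name comes from this checklist):

```shell
# Human-readable overview
curl -fsS http://localhost/health/summary | jq .
# Exits non-zero unless every check passes
curl -fsS http://localhost/health/summary | jq -e '.overall_healthy == true'
# Full report
curl -fsS http://localhost/health/detailed | jq .
```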
### Step 8: Nginx Configuration
- [ ] Nginx installed
- [ ] Site configuration created (/etc/nginx/sites-available/app)
- [ ] SSL certificates paths correct in config
- [ ] Proxy settings configured
- [ ] Site enabled (symlink in sites-enabled)
- [ ] Nginx configuration tested (nginx -t)
- [ ] Nginx restarted
- [ ] HTTPS redirect working (http → https)
### Step 9: Application Verification
- [ ] HTTPS endpoint accessible (https://yourdomain.com)
- [ ] SSL certificate valid (no browser warnings)
- [ ] Homepage loads correctly
- [ ] API endpoints responding
- [ ] Authentication working
- [ ] Admin panel accessible (from allowed IPs)
- [ ] File uploads working
- [ ] Background jobs processing
- [ ] Email sending configured
### Step 10: Monitoring
- [ ] Metrics endpoint accessible (/metrics)
- [ ] Prometheus metrics valid format
- [ ] Health checks integrated with monitoring
- [ ] Log files being created (/var/log/app/)
- [ ] Log rotation configured
- [ ] Disk space monitored
- [ ] Memory usage monitored
- [ ] CPU usage monitored
---
## Post-Deployment Checklist
### Security Hardening
- [ ] UFW firewall enabled
- [ ] Only required ports open (22, 80, 443)
- [ ] SSH password authentication disabled
- [ ] Root login disabled via SSH
- [ ] Fail2Ban installed and configured
- [ ] Security headers verified (X-Frame-Options, CSP, etc.)
- [ ] OWASP security scan performed
- [ ] SSL Labs test passed (A+ rating)
### Backups
- [ ] Database backup script created
- [ ] Vault backup script created
- [ ] Backup directory created (/opt/backups)
- [ ] Backup cron job configured
- [ ] Backup restoration tested
- [ ] Backup retention policy configured (7 days)
- [ ] Off-site backup configured (optional but recommended)
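A minimal nightly backup script matching the items above; the container name, database user/name, and the use of PostgreSQL are assumptions to adjust to your stack:

```shell
#!/bin/sh
# /opt/backups/db-backup.sh - dump the database, keep 7 days of backups
set -eu
BACKUP_DIR=/opt/backups
STAMP=$(date +%Y%m%d-%H%M%S)
docker compose exec -T db pg_dump -U app production_db \
  | gzip > "$BACKUP_DIR/db-$STAMP.sql.gz"
# Enforce the 7-day retention policy
find "$BACKUP_DIR" -name 'db-*.sql.gz' -mtime +7 -delete
```

Wired into cron with e.g. `0 2 * * * /opt/backups/db-backup.sh >> /var/log/app/backup.log 2>&1`.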
### Monitoring & Alerts
- [ ] Grafana installed (optional)
- [ ] Prometheus configured (optional)
- [ ] Alert rules configured
- [ ] Email notifications configured
- [ ] Disk space alerts set (>90% usage)
- [ ] Memory alerts set (>90% usage)
- [ ] Health check alerts set
- [ ] SSL expiration alerts set (30 days)
### Documentation
- [ ] Deployment procedure documented
- [ ] Server credentials documented (in secure location)
- [ ] Vault encryption key documented (in secure location)
- [ ] Database backup location documented
- [ ] Rollback procedure documented
- [ ] Team access granted and documented
- [ ] On-call rotation documented
### Performance
- [ ] Performance baseline established
- [ ] Slow query log enabled
- [ ] Cache hit rate monitored
- [ ] Response time benchmarked
- [ ] Load testing performed
- [ ] Database indexes optimized
- [ ] Asset compression enabled (gzip)
- [ ] CDN configured (optional)
### Compliance & Legal
- [ ] Privacy policy deployed
- [ ] Terms of service deployed
- [ ] Cookie consent implemented (if EU traffic)
- [ ] GDPR compliance verified (if EU traffic)
- [ ] Data retention policies documented
- [ ] Incident response plan documented
---
## Rollback Checklist
**Use this if deployment fails and you need to rollback:**
### Immediate Rollback
- [ ] Stop new containers: `docker compose down`
- [ ] Start old containers: `docker compose -f docker-compose.old.yml up -d`
- [ ] Verify health: `curl http://localhost/health/summary`
- [ ] Rollback database migrations: `php console.php db:rollback`
- [ ] Clear cache: `php console.php cache:clear`
- [ ] Verify application functionality
- [ ] Notify team of rollback
### Post-Rollback
- [ ] Document rollback reason
- [ ] Identify root cause
- [ ] Create fix for issue
- [ ] Test fix in staging
- [ ] Plan next deployment attempt
- [ ] Update deployment procedure if needed
---
## Weekly Maintenance Checklist
**Perform these checks weekly:**
- [ ] Review application logs for errors
- [ ] Check disk space (should be <80%)
- [ ] Review health check status
- [ ] Verify backups running successfully
- [ ] Check SSL certificate expiration (>30 days remaining)
- [ ] Review security logs (fail2ban)
- [ ] Check for system updates
- [ ] Review performance metrics
- [ ] Test backup restoration (monthly)
---
## Monthly Maintenance Checklist
**Perform these checks monthly:**
- [ ] Apply system security updates
- [ ] Review and update dependencies (composer update, npm update)
- [ ] Rotate secrets (API keys, tokens) if required
- [ ] Review and archive old logs
- [ ] Perform security audit
- [ ] Review and update documentation
- [ ] Test disaster recovery procedure
- [ ] Review and optimize database performance
- [ ] Review monitoring alerts effectiveness
- [ ] Update deployment runbook with lessons learned
---
## Quarterly Maintenance Checklist
**Perform these checks quarterly:**
- [ ] Rotate Vault encryption key
- [ ] Rotate database passwords
- [ ] Review and update security policies
- [ ] Conduct penetration testing
- [ ] Review and optimize infrastructure costs
- [ ] Update disaster recovery plan
- [ ] Review team access and permissions
- [ ] Conduct deployment drill with team
- [ ] Review compliance requirements
- [ ] Update technical documentation
---
## Emergency Contacts
**Fill this in and keep it secure:**
```
Server Provider: _______________________
Support Phone: _________________________
Support Email: _________________________
Domain Registrar: ______________________
Support Phone: _________________________
Support Email: _________________________
SSL Provider: __________________________
Support Phone: _________________________
Support Email: _________________________
Database Backup Location: ______________
Vault Key Location: ____________________
SSH Key Location: ______________________
Team Lead: _____________________________
On-Call Phone: _________________________
DevOps Lead: ___________________________
On-Call Phone: _________________________
Security Contact: ______________________
Emergency Phone: _______________________
```
---
## Deployment Sign-Off
**Deployment Details:**
```
Date: _____________________
Deployed By: ______________
Version/Commit: ___________
Environment: Production
Deployment Method: [ ] Manual [ ] Script [ ] Ansible
Health Check Status: [ ] All Passing
SSL Certificate: [ ] Valid
Database Migrations: [ ] Applied
Backups: [ ] Verified
Issues During Deployment:
_____________________________________________
_____________________________________________
Post-Deployment Notes:
_____________________________________________
_____________________________________________
Signed: ___________________ Date: __________
```
---
## Continuous Improvement
After each deployment, answer these questions:
1. **What went well?**
- _______________________________________________
- _______________________________________________
2. **What could be improved?**
- _______________________________________________
- _______________________________________________
3. **What was unexpected?**
- _______________________________________________
- _______________________________________________
4. **Action items for next deployment:**
- _______________________________________________
- _______________________________________________
5. **Documentation updates needed:**
- _______________________________________________
- _______________________________________________
---
**Remember**: This checklist should be updated after each deployment to reflect lessons learned and process improvements.


@@ -1,96 +0,0 @@
# Deployment System Restructuring Complete
## Summary
The deployment system has been successfully restructured from multiple scattered configurations into a modern, organized hybrid deployment system.
## What Was Done
### 1. Backup Creation
Created `.deployment-backup/` directory containing all old deployment files:
- `ansible/` - Multiple Ansible configurations (netcup-simple-deploy, nginx-cdn-germany, wireguard-server)
- `x_ansible/` - Alternative Ansible setup
- `ssl/` - SSL certificates for local development
- `bin/` - Deployment utility scripts
- `BACKUP_SUMMARY.md` - Detailed backup documentation
### 2. New Structure Creation
Created modern `deployment/` directory with four main components:
**Infrastructure** (`deployment/infrastructure/`)
- Ansible playbooks for server setup and management
- Environment-specific inventories
- Reusable roles for common tasks
- Configuration management via group_vars
**Applications** (`deployment/applications/`)
- Docker Compose configurations per environment
- Production, staging, and development setups
- Service orchestration and health monitoring
- Environment-specific optimizations
**Scripts** (`deployment/scripts/`)
- Automated deployment orchestration
- Setup, rollback, and utility scripts
- Health checking and monitoring tools
- Integration with infrastructure and applications
**Configs** (`deployment/configs/`)
- Configuration templates for all services
- Nginx, PHP, MySQL, and SSL configurations
- Environment-specific template variables
- Monitoring and logging configurations
## Key Improvements
**Separation of Concerns**: Clear distinction between infrastructure and application deployment
**Environment Management**: Dedicated configurations for production, staging, development
**Docker Integration**: Full containerization with Docker Compose
**Automation**: Streamlined deployment workflows
**Configuration Management**: Centralized template system
**Documentation**: Comprehensive README files for each component
## Next Steps
1. **Infrastructure Setup** (`deployment/infrastructure/`)
- Create Ansible inventories for environments
- Develop server setup playbooks
- Configure security and Docker roles
2. **Application Configuration** (`deployment/applications/`)
- Create environment-specific Docker Compose files
- Configure service networking and volumes
- Set up health checks and monitoring
3. **Script Development** (`deployment/scripts/`)
- Implement main deployment orchestration script
- Create setup and rollback automation
- Develop health monitoring tools
4. **Configuration Templates** (`deployment/configs/`)
- Create Nginx configuration templates
- Set up SSL certificate management
- Configure environment file templates
## Migration Benefits
- **Maintainability**: Organized, documented structure
- **Scalability**: Easy to add new environments or services
- **Reliability**: Automated testing and rollback capabilities
- **Security**: Modern security practices and SSL management
- **Consistency**: Standardized deployment across environments
## Rollback Option
If needed, the old system can be quickly restored by moving files back from `.deployment-backup/` to their original locations. See `BACKUP_SUMMARY.md` for detailed instructions.
## Framework Integration
The new deployment system is specifically designed for the Custom PHP Framework with:
- HTTPS-first configuration (framework requirement)
- Docker-based architecture (current setup)
- Production server targeting (94.16.110.151 with deploy user)
- SSL certificate automation for framework's HTTPS requirement
- Health check integration with framework's built-in endpoints
This modern deployment system provides a solid foundation for reliable, automated deployment of the Custom PHP Framework across all environments.


@@ -1,568 +0,0 @@
# Production Deployment Infrastructure - Summary
**Project**: Custom PHP Framework
**Status**: ✅ Complete
**Date**: January 2025
---
## Overview
Complete production deployment infrastructure has been implemented for the Custom PHP Framework, providing multiple deployment paths from quick manual setup to fully automated infrastructure as code.
---
## Completed Components
### 1. Health Check & Monitoring System ✅
**Location**: `src/Application/Health/`, `src/Application/Metrics/`
**Features**:
- Multiple health check endpoints for different use cases
- Automatic health check discovery via attributes
- Prometheus-compatible metrics endpoint
- Real-time performance monitoring
- Health check categories (Database, Cache, Security, Infrastructure)
**Endpoints**:
```
GET /health/summary - Quick health overview
GET /health/detailed - Comprehensive health report
GET /health/checks - List all registered checks
GET /health/category/{cat} - Category-specific checks
GET /metrics - Prometheus metrics
GET /metrics/json - JSON metrics
```
**Health Checks Implemented**:
- ✅ Database connectivity and performance
- ✅ Cache system health (Redis/File)
- ✅ Queue system monitoring
- ✅ SSL certificate validity (30-day warning, 7-day critical)
- ✅ Disk space monitoring
- ✅ Memory usage monitoring
- ✅ Vault availability
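Since the `/metrics` endpoint is Prometheus-compatible, scraping it needs only a standard job; the job name and target below are illustrative:

```yaml
scrape_configs:
  - job_name: 'framework'
    metrics_path: /metrics
    scheme: https
    scrape_interval: 30s
    static_configs:
      - targets: ['yourdomain.com']
```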
---
### 2. Production Logging Configuration ✅
**Location**: `src/Framework/Logging/ProductionLogConfig.php`
**Available Configurations**:
| Configuration | Use Case | Performance | Volume Reduction |
|---------------|----------|-------------|------------------|
| **production()** | Standard production | 10K+ logs/sec | Baseline |
| **highPerformance()** | High traffic (>100 req/s) | 50K+ logs/sec | 80-90% |
| **productionWithAggregation()** | Repetitive patterns | 20K+ logs/sec | 70-90% |
| **debug()** | Temporary troubleshooting | 2-3ms latency | N/A (verbose) |
| **staging()** | Pre-production testing | Standard | N/A |
**Features**:
- Resilient logging with automatic fallback
- Buffered writes for performance (100 entries, 5s flush)
- 14-day rotating log files
- Structured JSON logs with request/trace context
- Intelligent sampling and aggregation
- Integration with Prometheus metrics
**Documentation**: [production-logging.md](production-logging.md)
---
### 3. Deployment Documentation Suite ✅
Six comprehensive guides covering all deployment scenarios:
#### 3.1. Quick Start Guide
**File**: [QUICKSTART.md](QUICKSTART.md)
**Purpose**: Get to production in 30 minutes
**Target**: First-time deployment, quick setup
**Contents**:
- 10-step deployment process
- Minimal configuration required
- SSL certificate automation
- Vault key generation
- Database initialization
- Health verification
- Basic troubleshooting
#### 3.2. Deployment Checklist
**File**: [DEPLOYMENT_CHECKLIST.md](DEPLOYMENT_CHECKLIST.md)
**Purpose**: Ensure nothing is missed
**Target**: Compliance verification, team coordination
**Contents**:
- Pre-deployment checklist (Infrastructure, Security, Code)
- Step-by-step deployment verification
- Post-deployment security hardening
- Maintenance schedules (weekly, monthly, quarterly)
- Emergency contacts template
- Deployment sign-off form
- Continuous improvement framework
#### 3.3. Complete Deployment Workflow
**File**: [DEPLOYMENT_WORKFLOW.md](DEPLOYMENT_WORKFLOW.md)
**Purpose**: Detailed deployment lifecycle
**Target**: Understanding complete process
**Contents**:
- **Phase 1**: Initial Server Setup (one-time)
- Server preparation
- SSL certificate with Let's Encrypt
- Vault key generation
- Environment configuration
- **Phase 2**: Initial Deployment
- Docker container setup
- Database migrations
- Health check verification
- Nginx reverse proxy
- **Phase 3**: Ongoing Deployment
- Automated deployment scripts
- Zero-downtime deployment
- Manual deployment steps
- **Phase 4**: Monitoring Setup
- Prometheus and Grafana
- Alerting configuration
#### 3.4. Production Deployment Guide
**File**: [PRODUCTION_DEPLOYMENT.md](PRODUCTION_DEPLOYMENT.md)
**Purpose**: Comprehensive infrastructure reference
**Target**: Deep technical details
**Contents**:
- Complete infrastructure setup
- SSL/TLS configuration
- Secrets management with Vault
- Docker deployment
- Database migration strategy
- All monitoring endpoints documented
- Logging configuration
- Security best practices
- Comprehensive troubleshooting
- Rollback procedures
- Maintenance tasks
#### 3.5. Production Logging Guide
**File**: [production-logging.md](production-logging.md)
**Purpose**: Logging configuration and optimization
**Target**: Production logging setup
**Contents**:
- All ProductionLogConfig options explained
- Environment-based configuration
- Log rotation and retention policies
- Structured JSON format
- Metrics integration
- Performance tuning guidelines
- Troubleshooting common issues
- Best practices
#### 3.6. Ansible Deployment Guide
**File**: [ANSIBLE_DEPLOYMENT.md](ANSIBLE_DEPLOYMENT.md)
**Purpose**: Infrastructure as Code automation
**Target**: Multi-server, enterprise deployments
**Contents**:
- Complete Ansible project structure
- Ansible roles (common, docker, ssl, application)
- Playbooks (site.yml, deploy.yml, rollback.yml, provision.yml)
- Ansible Vault for secrets
- CI/CD integration (GitHub Actions)
- Comparison: Script-Based vs Ansible
- Hybrid approach recommendation
#### 3.7. Deployment README
**File**: [README.md](README.md)
**Purpose**: Navigation and quick reference
**Target**: All deployment scenarios
**Contents**:
- Document overview and navigation
- Which guide for which scenario
- Deployment methods comparison
- Common tasks quick reference
- Troubleshooting quick reference
- Support resources
---
## Deployment Options
### Option 1: Quick Start (Recommended for First Deployment)
**Time**: 30 minutes
**Best For**: Single server, getting started
**Guide**: [QUICKSTART.md](QUICKSTART.md)
**Process**:
1. Server setup (10 min)
2. SSL certificate (5 min)
3. Clone application (2 min)
4. Generate secrets (3 min)
5. Create environment file (5 min)
6. Build and start containers (3 min)
7. Initialize database (2 min)
### Option 2: Script-Based Deployment
**Time**: 2 hours initial, 10 minutes ongoing
**Best For**: Single server, repeatable deployments
**Guide**: [DEPLOYMENT_WORKFLOW.md](DEPLOYMENT_WORKFLOW.md)
**Features**:
- Automated deployment scripts
- Zero-downtime blue-green deployment
- Rollback support
- Health check integration
**Scripts**:
- `scripts/deployment/deploy-production.sh` - Standard deployment
- `scripts/deployment/blue-green-deploy.sh` - Zero-downtime deployment
- `scripts/deployment/blue-green-rollback.sh` - Safe rollback
### Option 3: Ansible Automation
**Time**: 4 hours initial, 5 minutes ongoing
**Best For**: Multiple servers, enterprise deployments
**Guide**: [ANSIBLE_DEPLOYMENT.md](ANSIBLE_DEPLOYMENT.md)
**Features**:
- Infrastructure as Code
- Multi-server orchestration
- Idempotent operations
- Automated rollback
- CI/CD integration
**Roles**:
- **common**: System packages, firewall, directories
- **docker**: Docker installation and configuration
- **ssl**: Certificate management with auto-renewal
- **application**: Git, composer, migrations, health checks
---
## Infrastructure Components
### SSL/TLS Management
- ✅ Let's Encrypt integration
- ✅ Automatic certificate renewal
- ✅ 30-day expiration warning
- ✅ 7-day critical alert
- ✅ Health check integration
### Secrets Management
- ✅ Vault encryption key generation
- ✅ Encrypted secrets storage
- ✅ Environment-based configuration
- ✅ Key rotation procedures
### Docker Infrastructure
- ✅ Production-ready docker-compose configuration
- ✅ Container health checks
- ✅ Resource limits and constraints
- ✅ Logging configuration
- ✅ Network isolation
### Database Management
- ✅ Migration system with safe rollback architecture
- ✅ Forward-only migrations by default
- ✅ Optional SafelyReversible interface
- ✅ Fix-forward strategy for unsafe changes
- ✅ Automated migration execution
### Reverse Proxy
- ✅ Nginx configuration
- ✅ SSL/TLS termination
- ✅ Proxy headers
- ✅ Health check routing
- ✅ Static asset serving
---
## Security Features
### Web Application Firewall (WAF)
- ✅ SQL injection detection
- ✅ XSS protection
- ✅ Path traversal prevention
- ✅ Command injection detection
- ✅ Rate limiting
- ✅ Suspicious user agent blocking
### Security Headers
- ✅ X-Frame-Options: SAMEORIGIN
- ✅ X-Content-Type-Options: nosniff
- ✅ X-XSS-Protection: 1; mode=block
- ✅ Strict-Transport-Security (HSTS)
- ✅ Content-Security-Policy (CSP)
- ✅ Referrer-Policy
- ✅ Permissions-Policy
### Authentication & Authorization
- ✅ IP-based authentication for admin routes
- ✅ Session-based authentication
- ✅ Token-based authentication
- ✅ CSRF protection
- ✅ Rate limiting
### Hardening
- ✅ UFW firewall configuration
- ✅ SSH key-only authentication
- ✅ Fail2Ban integration
- ✅ Regular security updates
- ✅ OWASP security event logging
---
## Monitoring & Observability
### Health Checks
- ✅ Multiple endpoints for different use cases
- ✅ Category-based filtering
- ✅ Automatic service discovery
- ✅ Response time tracking
- ✅ Detailed error reporting
### Metrics
- ✅ Prometheus-compatible metrics
- ✅ Health check metrics
- ✅ Performance metrics
- ✅ Resource utilization metrics
- ✅ Custom business metrics
### Logging
- ✅ Structured JSON logs
- ✅ Request ID tracing
- ✅ Distributed tracing support
- ✅ Performance metrics
- ✅ Error aggregation
### Alerting
- ✅ Prometheus alert rules
- ✅ Health check failure alerts
- ✅ Disk space alerts
- ✅ SSL expiration alerts
- ✅ Custom alert rules
---
## Performance Characteristics
### Health Check Performance
- **Response Time**: <100ms for summary endpoint
- **Detailed Check**: <500ms with all checks
- **Throughput**: 1000+ requests/second
- **Timeout Protection**: Configurable per-check timeouts
### Logging Performance
- **Standard Production**: 10,000+ logs/second
- **High Performance**: 50,000+ logs/second (with sampling)
- **Write Latency**: <1ms (buffered)
- **Disk I/O**: Minimized via buffering and rotation
### Deployment Performance
- **Manual Deployment**: ~15 minutes
- **Automated Deployment**: ~5-10 minutes
- **Zero-Downtime Deployment**: ~10-15 minutes
- **Rollback**: ~5 minutes
---
## Testing & Validation
### Pre-Deployment Testing
- ✅ Unit tests passing
- ✅ Integration tests passing
- ✅ Migration tests
- ✅ Health check tests
- ✅ Security tests
### Deployment Verification
- ✅ Container health checks
- ✅ Application health endpoints
- ✅ SSL certificate validation
- ✅ Database migration verification
- ✅ Performance baseline
### Post-Deployment Monitoring
- ✅ Health check monitoring
- ✅ Metrics collection
- ✅ Log aggregation
- ✅ Alert verification
- ✅ User acceptance testing
---
## Maintenance Procedures
### Weekly Maintenance
- Review application logs
- Check disk space (<80%)
- Verify health check status
- Verify backups
- Check SSL certificate (>30 days)
- Review security logs
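The disk-space item in the weekly checklist above can be automated; a sketch using `df` (the 80% threshold mirrors the checklist, the mount point is an assumption):

```shell
#!/bin/sh
# Print the used-space percentage of a mount point as a bare integer.
disk_usage_pct() {
  df -P "$1" | awk 'NR==2 { sub(/%/, "", $5); print $5 }'
}

usage=$(disk_usage_pct /)
if [ "$usage" -ge 80 ]; then
  echo "WARNING: disk usage at ${usage}%"
else
  echo "disk usage OK (${usage}%)"
fi
```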
### Monthly Maintenance
- Apply system security updates
- Update dependencies
- Rotate secrets if required
- Review and archive logs
- Security audit
- Database optimization
### Quarterly Maintenance
- Rotate Vault encryption key
- Rotate database passwords
- Penetration testing
- Infrastructure cost review
- Disaster recovery drill
- Team training
---
## Rollback & Disaster Recovery
### Rollback Procedures
- ✅ Blue-green deployment rollback
- ✅ Database migration rollback (safe migrations)
- ✅ Fix-forward strategy (unsafe migrations)
- ✅ Container version rollback
- ✅ Configuration rollback
### Disaster Recovery
- ✅ Automated database backups (daily)
- ✅ Vault backup procedures
- ✅ Configuration backups
- ✅ Off-site backup storage
- ✅ Recovery testing procedures
---
## Documentation Highlights
### Comprehensive Coverage
- 6 deployment guides totaling 140+ pages
- Step-by-step instructions for all scenarios
- Troubleshooting guides for common issues
- Best practices and recommendations
- Security considerations
- Performance tuning guidelines
### Accessibility
- Quick start for fast deployment (30 min)
- Detailed guides for deep understanding
- Printable checklists for verification
- Navigation guide for finding information
- Cross-references between documents
### Maintainability
- Continuous improvement framework
- Post-deployment feedback template
- Lessons learned documentation
- Version history tracking
- Regular update procedures
---
## Team Readiness
### Documentation
- ✅ Complete deployment documentation
- ✅ Troubleshooting guides
- ✅ Runbooks for common operations
- ✅ Emergency procedures
- ✅ Contact information templates
### Training Materials
- ✅ Quick start guide for new team members
- ✅ Detailed workflow documentation
- ✅ Video walkthrough opportunities
- ✅ FAQ sections
- ✅ Best practices documentation
### Support
- ✅ Internal documentation references
- ✅ External resource links
- ✅ Community support channels
- ✅ Escalation procedures
- ✅ On-call rotation guidelines
---
## Next Steps
### Recommended Actions
1. **First Deployment**: Follow [QUICKSTART.md](QUICKSTART.md)
2. **Team Review**: Distribute [README.md](README.md) to team
3. **Production Deploy**: Schedule deployment using deployment checklist
4. **Monitoring Setup**: Configure Prometheus/Grafana (Phase 4 in workflow)
5. **Security Hardening**: Complete post-deployment security checklist
6. **Team Training**: Conduct deployment drill with team
7. **Documentation Review**: Schedule quarterly documentation updates
### Future Enhancements
**Potential additions** (not required for production):
- Kubernetes deployment option (for larger scale)
- Multi-region deployment strategies
- Advanced monitoring dashboards
- Automated security scanning integration
- Performance testing automation
- Chaos engineering practices
---
## Success Metrics
### Deployment Success
- ✅ All health checks passing
- ✅ SSL certificate valid
- ✅ Zero errors in logs
- ✅ Metrics collecting correctly
- ✅ Backups running successfully
### Operational Success
- ⏱️ Deployment time: <30 minutes (target)
- 🎯 Uptime: 99.9% (target)
- ⚡ Response time: <200ms (target)
- 🔒 Security: Zero critical vulnerabilities
- 📊 Monitoring: 100% coverage
---
## Conclusion
The Custom PHP Framework now has **production-ready deployment infrastructure** with:
- ✅ **Multiple deployment paths** (Quick, Script-Based, Ansible)
- ✅ **Comprehensive monitoring** (Health checks, Metrics, Logging)
- ✅ **Security hardening** (WAF, SSL, Vault, Headers)
- ✅ **Zero-downtime deployments** (Blue-green strategy)
- ✅ **Safe rollback procedures** (Migration architecture)
- ✅ **Complete documentation** (6 comprehensive guides)
- ✅ **Team readiness** (Checklists, runbooks, procedures)
**The infrastructure is ready for production deployment.**
---
## Quick Reference
| Need | Document | Time |
|------|----------|------|
| Deploy now | [QUICKSTART.md](QUICKSTART.md) | 30 min |
| Understand process | [DEPLOYMENT_WORKFLOW.md](DEPLOYMENT_WORKFLOW.md) | 2 hours |
| Deep technical details | [PRODUCTION_DEPLOYMENT.md](PRODUCTION_DEPLOYMENT.md) | Reference |
| Logging setup | [production-logging.md](production-logging.md) | 30 min |
| Automation | [ANSIBLE_DEPLOYMENT.md](ANSIBLE_DEPLOYMENT.md) | 4 hours |
| Verification | [DEPLOYMENT_CHECKLIST.md](DEPLOYMENT_CHECKLIST.md) | Ongoing |
| Navigation | [README.md](README.md) | Reference |
---
**For questions or support, see [README.md](README.md) → Support and Resources**
**Ready to deploy? → [QUICKSTART.md](QUICKSTART.md)**
@@ -1,720 +0,0 @@
# Concrete Deployment Workflow
Step-by-step guide for production deployment with and without Ansible.
## Deployment-Optionen
The framework offers **two deployment strategies**:
1. **Manual/Script-Based** (simple, for single servers)
2. **Ansible-Based** (automated, for multi-server setups)
Both strategies use Docker Compose for container orchestration.
---
## Option 1: Manual/Script-Based Deployment (Recommended to Start)
### Prerequisites
- Server running Ubuntu 22.04 LTS
- SSH access with sudo privileges
- Domain with DNS configured
- Git repository access
### Phase 1: Initial Server Setup (One-Time)
#### 1.1 Prepare the server
```bash
# SSH into the server
ssh user@your-server.com
# Update the system
sudo apt update && sudo apt upgrade -y
# Install Docker
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
sudo usermod -aG docker $USER
# Install the Docker Compose plugin
sudo apt install -y docker-compose-plugin
# Log in again so the docker group membership takes effect
exit
ssh user@your-server.com
# Verify
docker --version
docker compose version
```
#### 1.2 Create project directories
```bash
# Create the directory structure
sudo mkdir -p /var/www/app
sudo mkdir -p /var/log/app
sudo mkdir -p /opt/vault
sudo mkdir -p /etc/ssl/app
sudo mkdir -p /backups/database
sudo mkdir -p /backups/volumes
# Set permissions
sudo chown -R $USER:$USER /var/www/app
sudo chown -R www-data:www-data /var/log/app
sudo chown -R www-data:www-data /opt/vault
sudo chmod 755 /var/www/app
sudo chmod 755 /var/log/app
sudo chmod 700 /opt/vault
```
#### 1.3 Clone the repository
```bash
cd /var/www
git clone git@github.com:yourusername/app.git
cd app
# Production branch
git checkout production
# Make the scripts executable
chmod +x scripts/deployment/*.sh
```
#### 1.4 Set up the SSL certificate
```bash
# Install Nginx and Certbot
sudo apt install -y nginx certbot python3-certbot-nginx
# Temporary Nginx config for Certbot
sudo tee /etc/nginx/sites-available/temp-certbot > /dev/null <<'EOF'
server {
listen 80;
server_name yourdomain.com www.yourdomain.com;
location /.well-known/acme-challenge/ {
root /var/www/certbot;
}
}
EOF
sudo ln -s /etc/nginx/sites-available/temp-certbot /etc/nginx/sites-enabled/
sudo nginx -t && sudo systemctl reload nginx
# Obtain the certificate
sudo certbot certonly --webroot \
-w /var/www/certbot \
-d yourdomain.com \
-d www.yourdomain.com \
--email your-email@example.com \
--agree-tos \
--no-eff-email
# Make the certificates available to the containers
sudo cp /etc/letsencrypt/live/yourdomain.com/fullchain.pem /etc/ssl/app/cert.pem
sudo cp /etc/letsencrypt/live/yourdomain.com/privkey.pem /etc/ssl/app/key.pem
sudo chmod 644 /etc/ssl/app/cert.pem
sudo chmod 600 /etc/ssl/app/key.pem
# Set up auto-renewal
echo "0 3 * * * root certbot renew --quiet && cp /etc/letsencrypt/live/yourdomain.com/fullchain.pem /etc/ssl/app/cert.pem && cp /etc/letsencrypt/live/yourdomain.com/privkey.pem /etc/ssl/app/key.pem && docker compose -f /var/www/app/docker-compose.production.yml restart nginx" | sudo tee -a /etc/crontab
```
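The single crontab line above is hard to read and copies the certificates even when nothing was renewed. An alternative sketch is a certbot deploy hook that performs the same copy-and-restart only after a successful renewal; the paths mirror those above, and the hook file location is an assumption:

```shell
#!/bin/sh
# Sketch of a certbot deploy hook (e.g. under
# /etc/letsencrypt/renewal-hooks/deploy/ — assumed location).
# Copy a renewed certificate pair into the app's SSL directory.
install_certs() {
  src="$1" # e.g. /etc/letsencrypt/live/yourdomain.com
  dst="$2" # e.g. /etc/ssl/app
  install -m 644 "$src/fullchain.pem" "$dst/cert.pem"
  install -m 600 "$src/privkey.pem" "$dst/key.pem"
}

# In the real hook (assumed usage; certbot exports RENEWED_LINEAGE):
# install_certs "$RENEWED_LINEAGE" /etc/ssl/app
# docker compose -f /var/www/app/docker-compose.production.yml restart nginx
```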
#### 1.5 Generate the Vault encryption key
```bash
cd /var/www/app
# Generate the key
php scripts/deployment/generate-vault-key.php
# Copy the output (store it securely!):
# VAULT_ENCRYPTION_KEY=base64encodedkey...
# Store it in 1Password, Bitwarden, or AWS Secrets Manager
```
#### 1.6 Create the environment file
```bash
cd /var/www/app
# Copy the template
cp .env.example .env.production
# Fill in real production values
nano .env.production
```
**Minimum required values**:
```env
# Application
APP_ENV=production
APP_DEBUG=false
APP_URL=https://yourdomain.com
# Database
DB_CONNECTION=mysql
DB_HOST=db
DB_PORT=3306
DB_DATABASE=app_production
DB_USERNAME=app_user
DB_PASSWORD=<strong-database-password>
# Cache & Queue
CACHE_DRIVER=redis
QUEUE_DRIVER=redis
REDIS_HOST=redis
REDIS_PORT=6379
REDIS_PASSWORD=<strong-redis-password>
# Vault
VAULT_ENCRYPTION_KEY=<from-generate-vault-key>
# Admin Access
ADMIN_ALLOWED_IPS=your.ip.address.here
# Logging
LOG_PATH=/var/log/app
LOG_LEVEL=info
```
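Before continuing, it is worth verifying that no key was forgotten; a small sketch (the key list matches the minimum values above):

```shell
#!/bin/sh
# Fail if any required key is missing from an env file.
check_env() {
  file="$1"; shift
  for key in "$@"; do
    grep -q "^${key}=" "$file" || { echo "missing: $key"; return 1; }
  done
  echo "env ok"
}

# Demo against a throwaway file; in practice run it on .env.production:
# check_env .env.production APP_ENV APP_URL DB_PASSWORD REDIS_PASSWORD VAULT_ENCRYPTION_KEY
tmp=$(mktemp)
printf 'APP_ENV=production\nAPP_URL=https://yourdomain.com\n' > "$tmp"
check_env "$tmp" APP_ENV APP_URL
rm -f "$tmp"
```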
#### 1.7 Store secrets in the Vault
```bash
# Run the secrets setup script
php scripts/deployment/setup-production-secrets.php
# Add secrets manually
docker compose -f docker-compose.production.yml run --rm php php -r "
require 'vendor/autoload.php';
\$vault = new App\Framework\Vault\EncryptedVault(
\$_ENV['VAULT_ENCRYPTION_KEY'],
'/opt/vault/production.vault'
);
// API Keys
\$vault->set(
App\Framework\Vault\SecretKey::from('stripe_secret_key'),
App\Framework\Vault\SecretValue::from('sk_live_...')
);
// Mail Password
\$vault->set(
App\Framework\Vault\SecretKey::from('mail_password'),
App\Framework\Vault\SecretValue::from('your-mail-password')
);
echo 'Secrets stored successfully', PHP_EOL;
"
```
### Phase 2: Initial Deployment
#### 2.1 Install dependencies
```bash
cd /var/www/app
# Composer dependencies
docker compose -f docker-compose.production.yml run --rm php composer install --no-dev --optimize-autoloader
# NPM dependencies and build
docker compose -f docker-compose.production.yml run --rm nodejs npm ci
docker compose -f docker-compose.production.yml run --rm nodejs npm run build
```
#### 2.2 Start the containers
```bash
# Build the Docker images
docker compose -f docker-compose.production.yml build
# Start the containers
docker compose -f docker-compose.production.yml up -d
# Follow the logs
docker compose -f docker-compose.production.yml logs -f
```
#### 2.3 Initialize the database
```bash
# Wait until MySQL is ready
sleep 30
# Run migrations
docker compose -f docker-compose.production.yml exec php php console.php db:migrate
# Verify
docker compose -f docker-compose.production.yml exec php php console.php db:status
```
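The fixed `sleep 30` used above is a guess; a bounded retry loop is more robust. A sketch (the readiness command is an assumption — any command that exits non-zero until the service is up will do):

```shell
#!/bin/sh
# Retry a command once per second, up to a limit, instead of a fixed sleep.
wait_for() {
  max="$1"; shift
  i=0
  while [ "$i" -lt "$max" ]; do
    if "$@" >/dev/null 2>&1; then
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  return 1
}

# Assumed usage before running migrations:
# wait_for 60 docker compose -f docker-compose.production.yml exec -T db mysqladmin ping
```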
#### 2.4 Verify health checks
```bash
# Health check (should return 200)
curl -f http://localhost/health || echo "Health check failed"
# Detailed health report
curl -s http://localhost/health/detailed | jq
# All checks should be "healthy"
curl -s http://localhost/health/summary | jq '.summary'
```
#### 2.5 Configure the Nginx reverse proxy
```bash
# System Nginx as reverse proxy
sudo tee /etc/nginx/sites-available/app > /dev/null <<'EOF'
upstream app_backend {
server localhost:8080;
}
server {
listen 80;
server_name yourdomain.com www.yourdomain.com;
# Redirect HTTP to HTTPS
return 301 https://$server_name$request_uri;
}
server {
listen 443 ssl http2;
server_name yourdomain.com www.yourdomain.com;
ssl_certificate /etc/ssl/app/cert.pem;
ssl_certificate_key /etc/ssl/app/key.pem;
ssl_protocols TLSv1.2 TLSv1.3;
ssl_ciphers HIGH:!aNULL:!MD5;
ssl_prefer_server_ciphers on;
# Security Headers
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
add_header X-Frame-Options "SAMEORIGIN" always;
add_header X-Content-Type-Options "nosniff" always;
add_header X-XSS-Protection "1; mode=block" always;
# Proxy to Docker container
location / {
proxy_pass http://app_backend;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
# WebSocket support
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
}
# Increase timeouts for long-running requests
proxy_connect_timeout 60s;
proxy_send_timeout 60s;
proxy_read_timeout 60s;
}
EOF
# Enable site
sudo ln -sf /etc/nginx/sites-available/app /etc/nginx/sites-enabled/
sudo rm -f /etc/nginx/sites-enabled/default
sudo rm -f /etc/nginx/sites-enabled/temp-certbot
# Test config
sudo nginx -t
# Reload
sudo systemctl reload nginx
```
#### 2.6 Final test
```bash
# HTTPS Health Check
curl -f https://yourdomain.com/health || echo "HTTPS health check failed"
# SSL Test
openssl s_client -connect yourdomain.com:443 -servername yourdomain.com < /dev/null
# Metrics
curl https://yourdomain.com/metrics | head -20
# Homepage
curl -I https://yourdomain.com
```
### Phase 3: Ongoing Deployment (Updates)
#### 3.1 Use the automated deployment script
```bash
cd /var/www/app
# Standard deployment
./scripts/deployment/deploy-production.sh
# With a specific branch
./scripts/deployment/deploy-production.sh --branch production-v2.1.0
# Dry run (no changes)
./scripts/deployment/deploy-production.sh --dry-run
```
The script automatically performs:
1. ✅ Pre-deployment Checks
2. ✅ Backup creation
3. ✅ Git Pull
4. ✅ Composer/NPM Install
5. ✅ Docker Image Build
6. ✅ Database Migrations
7. ✅ Container Restart
8. ✅ Health Checks
9. ✅ Smoke Tests
#### 3.2 Zero-Downtime Deployment (Blue-Green)
```bash
# Blue-green deployment for zero downtime
./scripts/deployment/blue-green-deploy.sh
# In case of problems: rollback
./scripts/deployment/blue-green-rollback.sh
```
#### 3.3 Manual deployment (if the scripts are unavailable)
```bash
cd /var/www/app
# 1. Pre-Deployment Backup
docker compose -f docker-compose.production.yml exec db \
mysqldump -u app_user -p<password> app_production \
> /backups/database/backup_$(date +%Y%m%d_%H%M%S).sql
# 2. Git Pull
git fetch origin production
git checkout production
git pull origin production
# 3. Update dependencies
docker compose -f docker-compose.production.yml run --rm php \
composer install --no-dev --optimize-autoloader
# 4. Frontend build (if changed)
docker compose -f docker-compose.production.yml run --rm nodejs npm ci
docker compose -f docker-compose.production.yml run --rm nodejs npm run build
# 5. Rebuild images
docker compose -f docker-compose.production.yml build
# 6. Run migrations
docker compose -f docker-compose.production.yml exec php php console.php db:migrate
# 7. Restart containers
docker compose -f docker-compose.production.yml up -d --no-deps --build php nginx
# 8. Health check
curl -f https://yourdomain.com/health/summary
# 9. Check logs
docker compose -f docker-compose.production.yml logs -f --tail=100 php
```
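Step 1 above writes a timestamped dump on every deployment; without a retention sweep, `/backups/database` grows unbounded. A pruning sketch (the 14-day window is an assumption, matching the log-rotation policy elsewhere in this document):

```shell
#!/bin/sh
# Delete backup_*.sql files older than a given number of days.
prune_backups() {
  dir="$1"; days="$2"
  find "$dir" -name 'backup_*.sql' -type f -mtime +"$days" -delete
}

# Demo with a throwaway directory; in practice:
# prune_backups /backups/database 14
dir=$(mktemp -d)
touch -t 202001010000 "$dir/backup_old.sql" # mtime far in the past
touch "$dir/backup_new.sql"
prune_backups "$dir" 14
ls "$dir" # only backup_new.sql remains
rm -rf "$dir"
```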
### Phase 4: Monitoring Setup
#### 4.1 Prometheus (Optional)
```yaml
# Create docker-compose.monitoring.yml
version: '3.8'
services:
prometheus:
image: prom/prometheus:latest
ports:
- "9090:9090"
volumes:
- ./prometheus.yml:/etc/prometheus/prometheus.yml
- prometheus-data:/prometheus
command:
- '--config.file=/etc/prometheus/prometheus.yml'
- '--storage.tsdb.path=/prometheus'
restart: unless-stopped
grafana:
image: grafana/grafana:latest
ports:
- "3000:3000"
volumes:
- grafana-data:/var/lib/grafana
environment:
- GF_SECURITY_ADMIN_PASSWORD=<strong-password>
restart: unless-stopped
volumes:
prometheus-data:
grafana-data:
```
```yaml
# prometheus.yml
global:
scrape_interval: 15s
scrape_configs:
- job_name: 'app'
static_configs:
- targets: ['app-nginx:80']
metrics_path: '/metrics'
```
```bash
# Start monitoring
docker compose -f docker-compose.monitoring.yml up -d
# Open Grafana: http://your-server:3000
# Login: admin / <strong-password>
# Import the dashboard: docs/deployment/grafana-dashboard.json
```
#### 4.2 Alerting (Optional)
```bash
# Simple alert script for critical health checks
tee /opt/health-check-alert.sh > /dev/null <<'EOF'
#!/bin/bash
HEALTH=$(curl -s http://localhost/health/summary | jq -r '.overall_healthy')
if [ "$HEALTH" != "true" ]; then
# Send an alert (email, Slack, PagerDuty, etc.)
curl -X POST https://hooks.slack.com/services/YOUR/WEBHOOK/URL \
-H 'Content-Type: application/json' \
-d '{"text":"🚨 Production Health Check FAILED!"}'
fi
EOF
chmod +x /opt/health-check-alert.sh
# Crontab: check every 5 minutes
echo "*/5 * * * * /opt/health-check-alert.sh" | crontab -
```
---
## Option 2: Ansible-Based Deployment (Multi-Server)
### When to Use Ansible?
**Use Ansible when**:
- Multiple production servers (load balancing)
- Staging + production environments
- Infrastructure as Code is desired
- Repeatable, idempotent deployments
- Team-based deployments
**Ansible is NOT necessary when**:
- Single production server
- Simple infrastructure
- Small team size
- Docker Compose scripts suffice
### Ansible Setup
See the separate documentation: [ANSIBLE_DEPLOYMENT.md](ANSIBLE_DEPLOYMENT.md)
**Quick overview**:
```bash
# Install Ansible
pip install ansible
# Run the playbooks
cd ansible
ansible-playbook -i inventory/production site.yml
# Specific playbooks
ansible-playbook -i inventory/production playbooks/deploy.yml
ansible-playbook -i inventory/production playbooks/rollback.yml
```
---
## Deployment Checklist
### Pre-Deployment
- [ ] Server prepared and accessible
- [ ] Domain DNS configured
- [ ] SSL certificate in place
- [ ] Vault encryption key generated and stored securely
- [ ] Environment file `.env.production` created
- [ ] Secrets stored in the Vault
- [ ] Docker and Docker Compose installed
- [ ] Nginx reverse proxy configured
### Initial Deployment
- [ ] Repository cloned
- [ ] Dependencies installed
- [ ] Docker images built
- [ ] Containers started
- [ ] Database migrated
- [ ] Health checks green
- [ ] HTTPS working
- [ ] Monitoring configured (optional)
### Ongoing Deployment
- [ ] Backup created
- [ ] Git pull successful
- [ ] Dependencies updated
- [ ] Frontend built (if needed)
- [ ] Images rebuilt
- [ ] Migrations run
- [ ] Containers restarted
- [ ] Health checks green
- [ ] Smoke tests successful
- [ ] Logs checked
### Post-Deployment
- [ ] Application reachable
- [ ] All features functional
- [ ] Performance acceptable
- [ ] Monitoring active
- [ ] Logs rotating
- [ ] Backups working
- [ ] Rollback plan tested
---
## Rollback Procedure
### Quick Rollback
```bash
cd /var/www/app
# 1. Check out the previous commit
git log --oneline -10 # find the previous commit
git checkout <previous-commit>
# 2. Dependencies (if needed)
docker compose -f docker-compose.production.yml run --rm php \
composer install --no-dev --optimize-autoloader
# 3. Roll back migrations
docker compose -f docker-compose.production.yml exec php \
php console.php db:rollback 3
# 4. Restart containers
docker compose -f docker-compose.production.yml up -d --build
# 5. Health Check
curl -f https://yourdomain.com/health/summary
```
### Database Rollback
```bash
# Restore the database from a backup
docker compose -f docker-compose.production.yml exec -T db \
mysql -u app_user -p<password> app_production \
< /backups/database/backup_20250115_120000.sql
# Verify
docker compose -f docker-compose.production.yml exec php \
php console.php db:status
```
---
## Troubleshooting Deployment
### Containers won't start
```bash
# Check logs
docker compose -f docker-compose.production.yml logs
# Check for port conflicts
sudo netstat -tulpn | grep -E ':(80|443|3306|6379)'
# Container status
docker compose -f docker-compose.production.yml ps
# Restart
docker compose -f docker-compose.production.yml down
docker compose -f docker-compose.production.yml up -d
```
### Health checks fail
```bash
# Detailed health report
curl http://localhost/health/detailed | jq
# Specific checks
curl http://localhost/health/category/database | jq
curl http://localhost/health/category/security | jq
# Container logs
docker compose -f docker-compose.production.yml logs php
docker compose -f docker-compose.production.yml logs nginx
```
### Migrations fail
```bash
# Migration status
docker compose -f docker-compose.production.yml exec php \
php console.php db:status
# Roll back migrations
docker compose -f docker-compose.production.yml exec php \
php console.php db:rollback 1
# Test the database connection
docker compose -f docker-compose.production.yml exec php \
php -r "new PDO('mysql:host=db;dbname=app_production', 'app_user', '<password>');"
```
---
## Recommended Workflow for Your Project
### For initial setup and small deployments:
**Use script-based deployment**:
1. Server setup (one-time): see Phase 1
2. Initial deployment: see Phase 2
3. Updates: `./scripts/deployment/deploy-production.sh`
4. Zero-downtime: `./scripts/deployment/blue-green-deploy.sh`
### For scaling and multiple environments:
**Complement it with Ansible**:
1. Automate server provisioning
2. Orchestrate multi-server deployments
3. Consistent configuration management
4. Infrastructure as Code
See the next document: [ANSIBLE_DEPLOYMENT.md](ANSIBLE_DEPLOYMENT.md)
---
## Next Steps
1. ✅ Prepare the server (Phase 1)
2. ✅ Perform the initial deployment (Phase 2)
3. ✅ Set up monitoring (Phase 4)
4. 📝 Document the deployment
5. 🔄 Evaluate Ansible (optional, for multi-server setups)
For detailed Ansible integration, see the next document!
File diff suppressed because it is too large
@@ -1,599 +0,0 @@
# Production Deployment - Quick Start Guide
**Goal**: Get the application running in production in under 30 minutes.
This is a simplified, essential-steps-only guide. For comprehensive documentation, see:
- [Complete Deployment Workflow](DEPLOYMENT_WORKFLOW.md)
- [Production Deployment Guide](PRODUCTION_DEPLOYMENT.md)
- [Ansible Deployment](ANSIBLE_DEPLOYMENT.md)
---
## Prerequisites
- Ubuntu 22.04+ server with root access
- Domain name pointing to server IP
- Port 80 and 443 open in firewall
---
## Step 1: Initial Server Setup (10 minutes)
```bash
# SSH into server
ssh root@your-server.com
# Update system
apt update && apt upgrade -y
# Install Docker
curl -fsSL https://get.docker.com -o get-docker.sh
sh get-docker.sh
# Install Docker Compose
apt install docker-compose-plugin -y
# Install certbot for SSL
apt install certbot -y
# Create application user
useradd -m -s /bin/bash appuser
usermod -aG docker appuser
```
---
## Step 2: SSL Certificate (5 minutes)
```bash
# Create webroot directory
mkdir -p /var/www/certbot
# Get SSL certificate
certbot certonly --webroot \
-w /var/www/certbot \
-d yourdomain.com \
--email your-email@example.com \
--agree-tos \
--non-interactive
# Verify certificates
ls -la /etc/letsencrypt/live/yourdomain.com/
```
**Expected output**: `fullchain.pem` and `privkey.pem` files
---
## Step 3: Clone Application (2 minutes)
```bash
# Switch to app user
su - appuser
# Clone repository
git clone https://github.com/your-org/your-app.git /home/appuser/app
cd /home/appuser/app
# Check out the branch you deploy from (main here)
git checkout main
```
---
## Step 4: Generate Secrets (3 minutes)
```bash
# Generate Vault encryption key
php scripts/deployment/generate-vault-key.php
# Save output - YOU MUST STORE THIS SECURELY!
# Example output: vault_key_abc123def456...
```
**⚠️ CRITICAL**: Store this key in your password manager. You cannot recover it if lost.
---
## Step 5: Create Environment File (5 minutes)
```bash
# Copy example
cp .env.example .env.production
# Edit configuration
nano .env.production
```
**Minimal required configuration**:
```env
# Application
APP_ENV=production
APP_DEBUG=false
APP_URL=https://yourdomain.com
# Database
DB_HOST=database
DB_PORT=3306
DB_NAME=app_production
DB_USER=app_user
DB_PASS=GENERATE_STRONG_PASSWORD_HERE
# Vault
VAULT_ENCRYPTION_KEY=YOUR_GENERATED_KEY_FROM_STEP_4
# Logging
LOG_PATH=/var/log/app
LOG_LEVEL=INFO
# Admin Access
ADMIN_ALLOWED_IPS=YOUR.SERVER.IP,127.0.0.1
```
**Generate strong passwords**:
```bash
# Generate DB password
openssl rand -base64 32
# Generate JWT secret
openssl rand -base64 64
```
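The generated values still have to be written into `.env.production`. A small helper like the following avoids copy-paste mistakes; the `set_env_secret` name and the sed-based replacement are assumptions, not part of the repository:

```shell
# Hypothetical helper: overwrite KEY's value in FILE with a freshly generated secret.
set_env_secret() {
  local file="$1" key="$2" secret
  # tr strips characters that would confuse sed or dotenv parsers
  secret=$(openssl rand -base64 48 | tr -d '/+=\n' | cut -c1-32)
  sed -i "s|^${key}=.*|${key}=${secret}|" "$file"
}

# Example (placeholder name taken from the template above):
# set_env_secret .env.production DB_PASS
```

Run it once per placeholder, then read the file back to copy the DB password into your password manager alongside the Vault key.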
---
## Step 6: Build and Start (3 minutes)
```bash
# Build containers
docker compose -f docker-compose.production.yml build
# Start containers
docker compose -f docker-compose.production.yml up -d
# Check status
docker compose -f docker-compose.production.yml ps
```
**Expected output**: All containers should be "Up"
---
## Step 7: Initialize Database (2 minutes)
```bash
# Run migrations
docker compose -f docker-compose.production.yml exec php php console.php db:migrate
# Verify migration status
docker compose -f docker-compose.production.yml exec php php console.php db:status
```
---
## Step 8: Verify Health (1 minute)
```bash
# Check health endpoint
curl http://localhost/health/summary
# Expected output (healthy):
{
  "timestamp": "2025-01-15T10:00:00+00:00",
  "overall_status": "healthy",
  "overall_healthy": true,
  "summary": {
    "total_checks": 8,
    "healthy": 8,
    "warning": 0,
    "unhealthy": 0
  }
}
```
If unhealthy, check logs:
```bash
docker compose -f docker-compose.production.yml logs php
```
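For scripted verification it helps to assert on the JSON rather than eyeballing it. A minimal sketch (the `health_ok` helper is an assumption; the grep pattern relies on the `overall_healthy` field shown above):

```shell
# Returns 0 when a /health/summary payload on stdin reports overall_healthy: true.
health_ok() {
  grep -q '"overall_healthy":[[:space:]]*true'
}

# Usage:
# curl -s http://localhost/health/summary | health_ok && echo healthy
```

This makes the check usable from cron or a deployment script, where a non-zero exit code can abort the rollout.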
---
## Step 9: Configure Nginx Reverse Proxy
```bash
# Exit to root user
exit
# Create Nginx config
nano /etc/nginx/sites-available/app
```
**Nginx configuration**:
```nginx
server {
    listen 80;
    server_name yourdomain.com;
    return 301 https://$server_name$request_uri;
}

server {
    listen 443 ssl http2;
    server_name yourdomain.com;

    ssl_certificate /etc/letsencrypt/live/yourdomain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/yourdomain.com/privkey.pem;
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_prefer_server_ciphers on;
    ssl_ciphers 'ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256';

    location / {
        proxy_pass http://localhost:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }

    location /health {
        proxy_pass http://localhost:8080/health;
        access_log off;
    }
}
```
**Enable and restart**:
```bash
# Enable site
ln -s /etc/nginx/sites-available/app /etc/nginx/sites-enabled/
# Test configuration
nginx -t
# Restart Nginx
systemctl restart nginx
```
---
## Step 10: Final Verification
```bash
# Test HTTPS endpoint
curl -f https://yourdomain.com/health/summary
# Test detailed health
curl -f https://yourdomain.com/health/detailed
# Test metrics (should be accessible)
curl -f https://yourdomain.com/metrics
```
**✅ Success criteria**:
- All curl commands return 200 OK
- No SSL certificate warnings
- Health endpoint shows all checks healthy
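The success criteria above can be scripted instead of checked by hand. This sketch is an assumption (no such helper ships with the framework); it only checks the status code, not the response body:

```shell
# Hypothetical smoke test: fail loudly unless a URL answers with HTTP 200.
expect_200() {
  local url="$1" code
  code=$(curl -s -o /dev/null -w '%{http_code}' "$url")
  [ "$code" = "200" ] || { echo "FAIL: $url returned $code" >&2; return 1; }
}

# for path in /health/summary /health/detailed /metrics; do
#   expect_200 "https://yourdomain.com$path"
# done
```

Because `curl -f` also exits non-zero on 4xx/5xx, the loop works equally well as a post-deploy gate in CI.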
---
## Post-Deployment Tasks
### Setup Automatic Certificate Renewal
```bash
# Test renewal
certbot renew --dry-run
# Certbot installs an automatic renewal timer; verify it:
systemctl status certbot.timer
```
### Setup Log Rotation
```bash
# Create logrotate config
nano /etc/logrotate.d/app
```
```
/var/log/app/*.log {
    daily
    rotate 14
    compress
    delaycompress
    notifempty
    missingok
    create 0644 appuser appuser
    sharedscripts
    postrotate
        docker compose -f /home/appuser/app/docker-compose.production.yml exec php php console.php cache:clear > /dev/null 2>&1 || true
    endscript
}
```
### Setup Monitoring (Optional but Recommended)
```bash
# Install monitoring stack
cd /home/appuser/app
docker compose -f docker-compose.monitoring.yml up -d
# Access Grafana
# URL: http://your-server:3000
# Default credentials: admin/admin
```
### Setup Backups
```bash
# Create backup script
nano /home/appuser/backup-production.sh
```
```bash
#!/bin/bash
set -e
set -o pipefail
BACKUP_DIR="/opt/backups"
DATE=$(date +%Y%m%d_%H%M%S)
mkdir -p "${BACKUP_DIR}"
# Load DB_PASS and the other secrets from the production env file
set -a
. /home/appuser/app/.env.production
set +a
# Backup database
docker compose -f /home/appuser/app/docker-compose.production.yml \
    exec -T database mysqldump -u app_user -p"${DB_PASS}" app_production | \
    gzip > "${BACKUP_DIR}/db_${DATE}.sql.gz"
# Backup Vault
tar -czf ${BACKUP_DIR}/vault_${DATE}.tar.gz /opt/vault
# Keep only last 7 days
find ${BACKUP_DIR} -name "*.gz" -mtime +7 -delete
echo "Backup completed: ${DATE}"
```
```bash
# Make executable
chmod +x /home/appuser/backup-production.sh
# Add to crontab (daily at 2 AM)
crontab -e
```
Add line:
```
0 2 * * * /home/appuser/backup-production.sh >> /var/log/backup.log 2>&1
```
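A backup only counts once it has been verified. A small check like the following (hypothetical helper; it assumes the `db_*.sql.gz` naming from the script above) can run after each backup or from a monitoring job:

```shell
# Verify that the newest database dump is a readable gzip archive.
latest_backup_ok() {
  local dir="${1:-/opt/backups}" latest
  latest=$(ls -1t "$dir"/db_*.sql.gz 2>/dev/null | head -n 1)
  [ -n "$latest" ] && gunzip -t "$latest"
}

# Usage:
# latest_backup_ok /opt/backups && echo "latest backup readable"
```

`gunzip -t` only proves the archive is intact; a full restore test (see the Success Checklist) is still required periodically.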
---
## Troubleshooting
### Container won't start
```bash
# Check logs
docker compose -f docker-compose.production.yml logs php
# Common issues:
# - Wrong database credentials → Check .env.production
# - Port already in use → Check: netstat -tulpn | grep 8080
# - Permission issues → Run: chown -R appuser:appuser /home/appuser/app
```
### Health checks failing
```bash
# Check specific health check
curl http://localhost/health/category/database
# Common issues:
# - Database not migrated → Run: php console.php db:migrate
# - Cache not writable → Check: ls -la /var/cache/app
# - Queue not running → Check: docker compose ps
```
### SSL certificate issues
```bash
# Check certificate validity
openssl x509 -in /etc/letsencrypt/live/yourdomain.com/fullchain.pem -noout -dates
# Renew certificate
certbot renew --force-renewal
# Restart Nginx
systemctl restart nginx
```
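To alert before expiry rather than after, the `notAfter` date can be turned into a day count. A sketch (the helper name is an assumption; requires GNU `date`):

```shell
# Days until the certificate on stdin (PEM format) expires.
cert_days_left() {
  local end
  end=$(openssl x509 -noout -enddate | cut -d= -f2)
  echo $(( ($(date -d "$end" +%s) - $(date +%s)) / 86400 ))
}

# Usage:
# cert_days_left < /etc/letsencrypt/live/yourdomain.com/fullchain.pem
```

Wiring this into a daily cron that warns below, say, 14 days gives a safety net in case the certbot timer silently fails.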
### Application errors
```bash
# Check application logs
docker compose -f docker-compose.production.yml logs -f php
# Check Nginx logs
tail -f /var/log/nginx/error.log
# Check system logs
journalctl -u nginx -f
```
---
## Security Hardening (Do This After Deployment)
### 1. Firewall Configuration
```bash
# Install UFW
apt install ufw -y
# Allow SSH
ufw allow 22/tcp
# Allow HTTP/HTTPS
ufw allow 80/tcp
ufw allow 443/tcp
# Enable firewall
ufw enable
```
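To confirm that nothing beyond SSH and HTTP/HTTPS is reachable, the `ufw status` output can be checked mechanically. A sketch (the helper is an assumption; it reads the status text on stdin):

```shell
# Returns 0 when every ALLOW rule in `ufw status` output is for port 22, 80, or 443.
only_expected_ports() {
  # keep ALLOW lines, drop the expected ports, and fail if anything remains
  ! grep 'ALLOW' | grep -vE '(^| )(22|80|443)(/tcp)?( |$)' | grep -q .
}

# Usage:
# ufw status | only_expected_ports || echo "unexpected ports open!"
```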
### 2. SSH Key-Only Authentication
```bash
# Generate SSH key on local machine
ssh-keygen -t ed25519 -C "your-email@example.com"
# Copy to server
ssh-copy-id root@your-server.com
# Disable password authentication
nano /etc/ssh/sshd_config
```
Set:
```
PasswordAuthentication no
PermitRootLogin prohibit-password
```
Restart SSH:
```bash
systemctl restart sshd
```
### 3. Fail2Ban
```bash
# Install fail2ban
apt install fail2ban -y
# Create jail for Nginx
nano /etc/fail2ban/jail.d/nginx-limit.conf
```
```ini
[nginx-limit-req]
enabled = true
filter = nginx-limit-req
logpath = /var/log/nginx/error.log
maxretry = 10
findtime = 60
bantime = 3600
```
```bash
# Restart fail2ban
systemctl restart fail2ban
```
---
## Deployment Updates (Ongoing)
For deploying updates after initial setup:
```bash
# SSH to server
ssh appuser@your-server.com
# Navigate to app
cd /home/appuser/app
# Pull latest code
git pull origin main
# Rebuild containers
docker compose -f docker-compose.production.yml build
# Run migrations (if any)
docker compose -f docker-compose.production.yml exec php php console.php db:migrate
# Restart containers
docker compose -f docker-compose.production.yml up -d
# Verify health
curl -f http://localhost/health/summary
```
**For zero-downtime deployments**, use the automated script:
```bash
./scripts/deployment/blue-green-deploy.sh
```
---
## Getting Help
If you encounter issues not covered in this quick start:
1. **Check detailed documentation**:
- [Complete Deployment Workflow](DEPLOYMENT_WORKFLOW.md)
- [Production Deployment Guide](PRODUCTION_DEPLOYMENT.md)
- [Troubleshooting Guide](PRODUCTION_DEPLOYMENT.md#troubleshooting)
2. **Check application logs**:
```bash
docker compose -f docker-compose.production.yml logs -f
```
3. **Check health endpoints**:
```bash
curl http://localhost/health/detailed | jq
```
4. **Check metrics**:
```bash
curl http://localhost/metrics
```
---
## Success Checklist
✅ Before considering deployment complete:
- [ ] SSL certificate installed and valid
- [ ] Application accessible via HTTPS
- [ ] All health checks passing (green)
- [ ] Database migrations applied successfully
- [ ] Logs being written to `/var/log/app`
- [ ] Automatic certificate renewal configured
- [ ] Backup script running daily
- [ ] Firewall configured (ports 22, 80, 443 only)
- [ ] SSH key-only authentication enabled
- [ ] Fail2Ban installed and monitoring
- [ ] Monitoring stack running (optional)
- [ ] Team has access to Vault encryption key
- [ ] Database backup verified and restorable
---
## Next Steps
After successful deployment:
1. **Setup Monitoring Alerts**: Configure Prometheus alerting rules
2. **Performance Tuning**: Review metrics and optimize based on actual traffic
3. **Security Audit**: Run security scan with tools like OWASP ZAP
4. **Documentation**: Document any custom configuration changes
5. **Team Training**: Ensure team knows deployment and rollback procedures
6. **Disaster Recovery**: Test backup restoration procedure
---
## Estimated Timeline
- **Initial deployment (following this guide)**: 30 minutes
- **Security hardening**: 15 minutes
- **Monitoring setup**: 20 minutes
- **Total time to production**: ~1 hour
---
**You're ready for production! 🚀**
For questions or issues, refer to the comprehensive guides linked throughout this document.

# Production Deployment Documentation
Complete documentation for deploying the Custom PHP Framework to production.
**Note:** The main deployment documentation now lives in the `deployment/` folder.
---
## Quick Navigation
**New to deployment? Start here:**
1. [Quick Start Guide](QUICKSTART.md) - Get running in 30 minutes
2. [Deployment Checklist](DEPLOYMENT_CHECKLIST.md) - Printable checklist
**Need detailed information?**
- [Complete Deployment Workflow](DEPLOYMENT_WORKFLOW.md) - Step-by-step deployment process
- [Production Deployment Guide](PRODUCTION_DEPLOYMENT.md) - Comprehensive infrastructure guide
- [Production Logging](production-logging.md) - Logging configuration and best practices
**Want automation?**
- [Ansible Deployment](ANSIBLE_DEPLOYMENT.md) - Infrastructure as Code with Ansible
**Need secure access?**
- [WireGuard VPN Setup](WIREGUARD-SETUP.md) - Secure VPN access to production services
**For current deployment information see:**
- **[deployment/README.md](../../deployment/README.md)** - Main deployment documentation
- **[deployment/QUICK_START.md](../../deployment/QUICK_START.md)** - Quick start guide
- **[deployment/DEPLOYMENT_COMMANDS.md](../../deployment/DEPLOYMENT_COMMANDS.md)** - Command reference
- **[deployment/CODE_CHANGE_WORKFLOW.md](../../deployment/CODE_CHANGE_WORKFLOW.md)** - Code deployment workflow
---
## Documentation Structure
### 1. [QUICKSTART.md](QUICKSTART.md)
**Best for**: First-time deployment, getting started quickly
**Content**:
- 10-step deployment process (~30 minutes)
- Minimal configuration required
- Immediate verification steps
- Basic troubleshooting
**Use when**: You want to get the application running in production as fast as possible.
### Topic-Specific Documentation
The following documents cover specific deployment topics:
**VPN & Security**
- **[WIREGUARD-SETUP.md](WIREGUARD-SETUP.md)** - WireGuard VPN setup (complete)
- **[WIREGUARD-FUTURE-SECURITY.md](WIREGUARD-FUTURE-SECURITY.md)** - Future security considerations
- **[PRODUCTION-SECURITY-UPDATES.md](PRODUCTION-SECURITY-UPDATES.md)** - Security updates
**Configuration & Setup**
- **[database-migration-strategy.md](database-migration-strategy.md)** - Database migration strategy
- **[logging-configuration.md](logging-configuration.md)** - Logging configuration
- **[production-logging.md](production-logging.md)** - Production logging best practices
- **[secrets-management.md](secrets-management.md)** - Secrets management with Vault
- **[ssl-setup.md](ssl-setup.md)** - SSL/TLS setup with Let's Encrypt
- **[SSL-PRODUCTION-SETUP.md](SSL-PRODUCTION-SETUP.md)** - Production SSL setup
- **[env-production-template.md](env-production-template.md)** - Environment template
- **[production-prerequisites.md](production-prerequisites.md)** - Production prerequisites
**Automation (deprecated - see `deployment/ansible`)**
- **[ANSIBLE_DEPLOYMENT.md](ANSIBLE_DEPLOYMENT.md)** - ⚠️ Deprecated, see `deployment/ansible/README.md`
- **[deployment-automation.md](deployment-automation.md)** - ⚠️ Deprecated, see `deployment/ansible/`
---
### 2. [DEPLOYMENT_CHECKLIST.md](DEPLOYMENT_CHECKLIST.md)
**Best for**: Ensuring nothing is missed, compliance verification
**Content**:
- Pre-deployment checklist
- Step-by-step deployment verification
- Post-deployment security hardening
- Maintenance schedules (weekly, monthly, quarterly)
- Emergency contacts template
- Deployment sign-off form
**Use when**: You want a printable, check-off-items-as-you-go guide.
---
### 3. [DEPLOYMENT_WORKFLOW.md](DEPLOYMENT_WORKFLOW.md)
**Best for**: Understanding the complete deployment lifecycle
**Content**:
- Phase 1: Initial Server Setup (one-time)
- Phase 2: Initial Deployment
- Phase 3: Ongoing Deployment (updates)
- Phase 4: Monitoring Setup
- Two deployment options: Manual/Script-Based and Ansible-Based
- Automated deployment scripts
- Zero-downtime deployment
- Rollback procedures
**Use when**: You need detailed explanations of each deployment phase or want to understand deployment options.
---
### 4. [PRODUCTION_DEPLOYMENT.md](PRODUCTION_DEPLOYMENT.md)
**Best for**: Comprehensive infrastructure reference
**Content**:
- Complete infrastructure setup
- SSL/TLS configuration with Let's Encrypt
- Secrets management with Vault
- Environment configuration
- Docker deployment
- Database migrations
- Monitoring and health checks (all endpoints documented)
- Logging configuration
- Security considerations
- Troubleshooting guide
- Maintenance procedures
**Use when**: You need deep technical details about any production infrastructure component.
---
### 5. [production-logging.md](production-logging.md)
**Best for**: Production logging configuration and optimization
**Content**:
- ProductionLogConfig options (production, highPerformance, withAggregation, debug, staging)
- Environment-based configuration
- Log rotation and retention policies
- Structured JSON log format
- Metrics and monitoring integration
- Performance tuning (buffer sizes, sampling rates, aggregation)
- Troubleshooting guides
- Best practices
**Use when**: You need to configure or troubleshoot production logging.
---
### 6. [ANSIBLE_DEPLOYMENT.md](ANSIBLE_DEPLOYMENT.md)
**Best for**: Automated, multi-server deployments
**Content**:
- Complete Ansible project structure
- Ansible roles (common, docker, ssl, application)
- Playbooks (site.yml, deploy.yml, rollback.yml, provision.yml)
- Ansible Vault for secrets
- CI/CD integration (GitHub Actions)
- Comparison: Script-Based vs Ansible
- Hybrid approach recommendation
**Use when**: You're scaling to multiple servers or want infrastructure as code.
---
### 7. [WIREGUARD-SETUP.md](WIREGUARD-SETUP.md)
**Best for**: Secure VPN access to production services
**Content**:
- Complete WireGuard VPN setup guide
- Server installation via Ansible
- Client configuration and management
- Connection testing and troubleshooting
- Security best practices
- Monitoring and maintenance
**Use when**: You need secure access to internal services (Prometheus, Grafana, Portainer) or want to restrict access via VPN.
---
## Which Guide Should I Use?
### Scenario 1: First-Time Deployment
**Path**: QUICKSTART.md → DEPLOYMENT_CHECKLIST.md
1. Follow [QUICKSTART.md](QUICKSTART.md) for initial deployment
2. Use [DEPLOYMENT_CHECKLIST.md](DEPLOYMENT_CHECKLIST.md) to verify everything
3. Keep [PRODUCTION_DEPLOYMENT.md](PRODUCTION_DEPLOYMENT.md) handy for troubleshooting
**Time Required**: ~1 hour
---
### Scenario 2: Enterprise Deployment
**Path**: PRODUCTION_DEPLOYMENT.md → ANSIBLE_DEPLOYMENT.md → DEPLOYMENT_CHECKLIST.md
1. Review [PRODUCTION_DEPLOYMENT.md](PRODUCTION_DEPLOYMENT.md) for infrastructure understanding
2. Implement with [ANSIBLE_DEPLOYMENT.md](ANSIBLE_DEPLOYMENT.md) for automation
3. Verify with [DEPLOYMENT_CHECKLIST.md](DEPLOYMENT_CHECKLIST.md)
**Time Required**: ~4 hours (initial setup), ~30 minutes (ongoing deployments)
---
### Scenario 3: Single Server, Team Collaboration
**Path**: DEPLOYMENT_WORKFLOW.md → DEPLOYMENT_CHECKLIST.md
1. Follow [DEPLOYMENT_WORKFLOW.md](DEPLOYMENT_WORKFLOW.md) for comprehensive process
2. Use automated scripts (deploy-production.sh)
3. Verify with [DEPLOYMENT_CHECKLIST.md](DEPLOYMENT_CHECKLIST.md)
**Time Required**: ~2 hours
---
### Scenario 4: Logging Issues
**Path**: production-logging.md
1. Consult [production-logging.md](production-logging.md) for logging configuration
2. Check troubleshooting section
3. Adjust ProductionLogConfig based on needs
**Time Required**: ~30 minutes
---
### Scenario 5: Adding Monitoring
**Path**: PRODUCTION_DEPLOYMENT.md (Monitoring section)
1. Jump to Monitoring section in [PRODUCTION_DEPLOYMENT.md](PRODUCTION_DEPLOYMENT.md)
2. Follow Prometheus/Grafana setup
3. Configure alerts
**Time Required**: ~1 hour
---
## Deployment Methods Comparison
| Feature | Quick Start | Script-Based | Ansible |
|---------|-------------|--------------|---------|
| **Setup Time** | 30 min | 2 hours | 4 hours |
| **Ongoing Deployment** | 15 min | 10 min | 5 min |
| **Multi-Server** | Manual | Manual | Automated |
| **Rollback** | Manual | Script | Automated |
| **Team Collaboration** | Docs | Scripts + Docs | Playbooks |
| **Infrastructure as Code** | No | Partial | Yes |
| **Idempotency** | No | Partial | Yes |
| **Best For** | Single server, quick start | Single server, repeatable | Multiple servers, scaling |
---
## Prerequisites Summary
All deployment methods require:
### Server Requirements
- Ubuntu 22.04+ (or Debian 11+)
- 4GB RAM minimum (8GB recommended)
- 40GB disk space minimum
- Root or sudo access
### Network Requirements
- Domain name configured
- DNS pointing to server IP
- Ports 22, 80, 443 accessible
- Static IP address recommended
### Tools Required
- SSH client
- Git
- Text editor (nano, vim, or VS Code with Remote SSH)
### Knowledge Requirements
- Basic Linux command line
- SSH and file permissions
- Docker basics
- DNS and domain configuration
- (Optional) Ansible for automation
---
## Common Tasks
### Initial Deployment
```bash
# Follow Quick Start Guide
cat docs/deployment/QUICKSTART.md
# Main documentation
cat deployment/README.md
# Verify with checklist
cat docs/deployment/DEPLOYMENT_CHECKLIST.md
# Quick start
cat deployment/QUICK_START.md
# Commands
cat deployment/DEPLOYMENT_COMMANDS.md
```
### Deploy Update
```bash
# Manual method
cd /home/appuser/app
git pull origin main
docker compose -f docker-compose.production.yml build
docker compose -f docker-compose.production.yml up -d
php console.php db:migrate
# Automated script method
./scripts/deployment/deploy-production.sh
# Zero-downtime method
./scripts/deployment/blue-green-deploy.sh
```
### Rollback
```bash
# Manual rollback (see DEPLOYMENT_WORKFLOW.md)
docker compose -f docker-compose.old.yml up -d
php console.php db:rollback 1
# Automated rollback
./scripts/deployment/blue-green-rollback.sh
```
### Health Check
```bash
# Quick health check
curl -f https://yourdomain.com/health/summary
# Detailed health check
curl -f https://yourdomain.com/health/detailed | jq
# Specific category
curl -f https://yourdomain.com/health/category/database
```
### View Logs
```bash
# Application logs
docker compose -f docker-compose.production.yml logs -f php
# System logs
tail -f /var/log/app/app.log
# Nginx logs
tail -f /var/log/nginx/error.log
```
### Database Backup
```bash
# Manual backup
docker compose exec database mysqldump -u app_user -p app_production > backup.sql
# Automated backup (configured in QUICKSTART.md)
/home/appuser/backup-production.sh
```
### SSL Certificate Renewal
```bash
# Test renewal
certbot renew --dry-run
# Force renewal
certbot renew --force-renewal
# Automatic renewal is configured via cron/systemd timer
```
**For specific topics:**
- VPN setup: `docs/deployment/WIREGUARD-SETUP.md`
- Database migrations: `docs/deployment/database-migration-strategy.md`
- Logging: `docs/deployment/production-logging.md`
- SSL setup: `docs/deployment/ssl-setup.md`
---
## Troubleshooting Quick Reference
### Issue: Containers won't start
**Solution**: Check logs
```bash
docker compose -f docker-compose.production.yml logs php
```
**Common causes**: Database credentials, port conflicts, permissions
**Full guide**: [PRODUCTION_DEPLOYMENT.md - Troubleshooting](PRODUCTION_DEPLOYMENT.md#troubleshooting)
---
### Issue: Health checks failing
**Solution**: Check specific health check
```bash
curl http://localhost/health/category/database
```
**Common causes**: Database not migrated, cache not writable, queue not running
**Full guide**: [DEPLOYMENT_WORKFLOW.md - Troubleshooting](DEPLOYMENT_WORKFLOW.md#troubleshooting)
---
### Issue: SSL certificate problems
**Solution**: Verify certificate
```bash
openssl x509 -in /etc/letsencrypt/live/yourdomain.com/fullchain.pem -noout -dates
```
**Common causes**: DNS not propagated, port 80 blocked, wrong domain
**Full guide**: [PRODUCTION_DEPLOYMENT.md - SSL/TLS](PRODUCTION_DEPLOYMENT.md#ssltls-configuration)
---
### Issue: Application errors
**Solution**: Check application logs
```bash
docker compose -f docker-compose.production.yml logs -f php
tail -f /var/log/app/app.log
```
**Common causes**: Environment configuration, missing migrations, permission issues
**Full guide**: [production-logging.md - Troubleshooting](production-logging.md#troubleshooting)
---
## Security Considerations
All deployment methods include security best practices:
- ✅ HTTPS enforced (SSL/TLS)
- ✅ Firewall configured (UFW)
- ✅ SSH key-only authentication
- ✅ Fail2Ban for intrusion prevention
- ✅ Security headers (CSP, HSTS, X-Frame-Options)
- ✅ CSRF protection
- ✅ Rate limiting
- ✅ WAF (Web Application Firewall)
- ✅ Vault for secrets management
- ✅ Regular security updates
**Detailed security guide**: [PRODUCTION_DEPLOYMENT.md - Security](PRODUCTION_DEPLOYMENT.md#security-considerations)
---
## Monitoring and Health Checks
### Available Endpoints
```
GET /health/summary - Quick health summary
GET /health/detailed - Full health report with all checks
GET /health/checks - List registered health checks
GET /health/category/{cat} - Health checks by category
GET /metrics - Prometheus metrics
GET /metrics/json - JSON metrics
```
### Health Check Categories
- `DATABASE` - Database connectivity and performance
- `CACHE` - Cache system health (Redis/File)
- `SECURITY` - SSL certificates, rate limiting, CSRF
- `INFRASTRUCTURE` - Disk space, memory, queue status
- `EXTERNAL` - External service connectivity
**Full monitoring guide**: [PRODUCTION_DEPLOYMENT.md - Monitoring](PRODUCTION_DEPLOYMENT.md#monitoring-and-health-checks)
---
## Support and Resources
### Internal Documentation
- [Framework Guidelines](../claude/guidelines.md)
- [Security Patterns](../claude/security-patterns.md)
- [Database Patterns](../claude/database-patterns.md)
- [Error Handling](../claude/error-handling.md)
### External Resources
- [Docker Documentation](https://docs.docker.com/)
- [Let's Encrypt Documentation](https://letsencrypt.org/docs/)
- [Nginx Documentation](https://nginx.org/en/docs/)
- [Ansible Documentation](https://docs.ansible.com/) (for automation)
### Getting Help
1. **Check documentation** (this directory)
2. **Review application logs** (`docker compose logs`)
3. **Check health endpoints** (`/health/detailed`)
4. **Review metrics** (`/metrics`)
5. **Consult troubleshooting guides** (in each document)
---
## Contribution
This documentation should be updated after each deployment to reflect:
- Lessons learned
- Process improvements
- Common issues encountered
- New best practices discovered
**Deployment feedback template**: See [DEPLOYMENT_CHECKLIST.md - Continuous Improvement](DEPLOYMENT_CHECKLIST.md#continuous-improvement)
---
## Version History
| Version | Date | Changes | Author |
|---------|------|---------|--------|
| 1.0 | 2025-01-15 | Initial comprehensive deployment documentation | System |
| | | Complete with Quick Start, Workflow, Ansible, Checklists | |
---
**Quick Links**:
- [Quick Start](QUICKSTART.md) - Fastest path to production
- [Checklist](DEPLOYMENT_CHECKLIST.md) - Ensure nothing is missed
- [Complete Workflow](DEPLOYMENT_WORKFLOW.md) - Detailed deployment process
- [Production Guide](PRODUCTION_DEPLOYMENT.md) - Comprehensive reference
- [Logging Guide](production-logging.md) - Production logging configuration
- [Ansible Guide](ANSIBLE_DEPLOYMENT.md) - Infrastructure automation
- [WireGuard VPN](WIREGUARD-SETUP.md) - Secure VPN access to production
---
**Ready to deploy? Start with [QUICKSTART.md](QUICKSTART.md) →**
**Main deployment documentation:** see the [deployment/](../../deployment/) folder

# Production Deployment Automation
⚠️ **IMPORTANT:** This documentation is deprecated.
**For current deployment automation see:**
- **[deployment/ansible/](../../deployment/ansible/)** - Current Ansible playbooks
- **[deployment/DEPLOYMENT_COMMANDS.md](../../deployment/DEPLOYMENT_COMMANDS.md)** - Command reference
- **[deployment/CODE_CHANGE_WORKFLOW.md](../../deployment/CODE_CHANGE_WORKFLOW.md)** - Workflow documentation
---
**Historical documentation (deprecated):**
Comprehensive guide to automated production deployment scripts for the Custom PHP Framework.
## Overview

# Production Docker Compose Configuration
Production Docker Compose configuration with security hardening, performance optimization, and monitoring for the Custom PHP Framework.
## Overview
The project uses the Docker Compose overlay pattern:
- **Base**: `docker-compose.yml` - development environment
- **Production**: `docker-compose.production.yml` - production-specific overrides
## Usage
```bash
# Start the production stack
docker-compose -f docker-compose.yml \
    -f docker-compose.production.yml \
    --env-file .env.production \
    up -d
# With build (after changes)
docker-compose -f docker-compose.yml \
    -f docker-compose.production.yml \
    --env-file .env.production \
    up -d --build
# Stop the stack
docker-compose -f docker-compose.yml \
    -f docker-compose.production.yml \
    down
# Show logs
docker-compose -f docker-compose.yml \
    -f docker-compose.production.yml \
    logs -f [service]
# Service health check
docker-compose -f docker-compose.yml \
    -f docker-compose.production.yml \
    ps
```
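Typing the overlay files and env file for every command gets old quickly. A small wrapper function (an assumption, not part of the project; add it to the deploy user's `~/.bashrc`) keeps the commands short. It assumes it is run from the project root:

```shell
# Wrapper so the overlay files and env file do not have to be repeated each time.
dcp() {
  docker compose -f docker-compose.yml \
                 -f docker-compose.production.yml \
                 --env-file .env.production "$@"
}

# Usage: dcp up -d  /  dcp logs -f php  /  dcp ps
```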
## Production Overrides
### 1. Web (Nginx) Service
**Restart Policy**:
```yaml
restart: always  # automatic restart on failure
```
**SSL/TLS Configuration**:
```yaml
volumes:
  - certbot-conf:/etc/letsencrypt:ro
  - certbot-www:/var/www/certbot:ro
```
- Let's Encrypt certificates via Certbot
- Read-only mounts for security
**Health Checks**:
```yaml
healthcheck:
  test: ["CMD", "curl", "-f", "https://localhost/health"]
  interval: 15s
  timeout: 5s
  retries: 5
  start_period: 30s
```
- HTTPS health check against the `/health` endpoint
- 15-second interval for fast failure detection
- 5 retries before the service is restarted
**Resource Limits**:
```yaml
deploy:
  resources:
    limits:
      memory: 512M
      cpus: '1.0'
    reservations:
      memory: 256M
      cpus: '0.5'
```
- Nginx is lightweight, so moderate limits suffice
**Logging**:
```yaml
logging:
  driver: json-file
  options:
    max-size: "10m"
    max-file: "5"
    compress: "true"
    labels: "service,environment"
```
- JSON format for log aggregation (ELK-stack compatible)
- 10 MB per file, 5 files = 50 MB total
- Compressed rotation
### 2. PHP Service
**Restart Policy**:
```yaml
restart: always
```
**Build Configuration**:
```yaml
build:
  args:
    - ENV=production
    - COMPOSER_INSTALL_FLAGS=--no-dev --optimize-autoloader --classmap-authoritative
```
- `--no-dev`: no development dependencies
- `--optimize-autoloader`: optimized PSR-4 autoloading
- `--classmap-authoritative`: no filesystem lookups (performance)
**Environment**:
```yaml
environment:
  - APP_ENV=production
  - APP_DEBUG=false  # debug OFF in production!
  - PHP_MEMORY_LIMIT=512M
  - PHP_MAX_EXECUTION_TIME=30
  - XDEBUG_MODE=off  # Xdebug off for performance
```
**Health Checks**:
```yaml
healthcheck:
  test: ["CMD", "php-fpm-healthcheck"]
  interval: 15s
  timeout: 5s
  retries: 5
  start_period: 30s
```
- PHP-FPM health check via a custom script
- Fast failure detection
**Resource Limits**:
```yaml
deploy:
  resources:
    limits:
      memory: 1G
      cpus: '2.0'
    reservations:
      memory: 512M
      cpus: '1.0'
```
- PHP needs more memory than Nginx
- 2 CPUs for parallel request processing
**Volumes**:
```yaml
volumes:
  - storage-logs:/var/www/html/storage/logs:rw
  - storage-cache:/var/www/html/storage/cache:rw
  - storage-queue:/var/www/html/storage/queue:rw
  - storage-discovery:/var/www/html/storage/discovery:rw
  - storage-uploads:/var/www/html/storage/uploads:rw
```
- Only the necessary Docker volumes
- **NO host mounts**, for security
- Application code lives in the image (not mounted)
### 3. Database (PostgreSQL 16) Service
**Restart Policy**:
```yaml
restart: always
```
**Production Configuration**:
```yaml
volumes:
  - db_data:/var/lib/postgresql/data
  - ./docker/postgres/postgresql.production.conf:/etc/postgresql/postgresql.conf:ro
  - ./docker/postgres/init:/docker-entrypoint-initdb.d:ro
```
- Production-tuned `postgresql.production.conf`
- Init scripts for schema setup
**Resource Limits**:
```yaml
deploy:
  resources:
    limits:
      memory: 2G
      cpus: '2.0'
    reservations:
      memory: 1G
      cpus: '1.0'
```
- PostgreSQL needs memory for `shared_buffers` (2 GB in the config)
- 2 CPUs for parallel query processing
**Health Checks**:
```yaml
healthcheck:
  test: ["CMD-SHELL", "pg_isready -U ${DB_USERNAME:-postgres} -d ${DB_DATABASE:-michaelschiemer}"]
  interval: 10s
  timeout: 3s
  retries: 5
  start_period: 30s
```
- `pg_isready` for a fast connection check
- 10-second interval (more frequent than other services)
**Logging**:
```yaml
logging:
  driver: json-file
  options:
    max-size: "20m"  # larger log files for PostgreSQL
    max-file: "10"
    compress: "true"
```
- PostgreSQL logs more (slow queries, checkpoints, etc.)
- 20 MB per file, 10 files = 200 MB total
### 4. Redis Service
**Restart Policy**:
```yaml
restart: always
```
**Resource Limits**:
```yaml
deploy:
  resources:
    limits:
      memory: 512M
      cpus: '1.0'
    reservations:
      memory: 256M
      cpus: '0.5'
```
- Redis is memory-based; moderate limits suffice
**Health Checks**:
```yaml
healthcheck:
  test: ["CMD", "redis-cli", "--raw", "incr", "ping"]
  interval: 10s
  timeout: 3s
  retries: 5
  start_period: 10s
```
- `redis-cli` for a connection check
- Fast startup (10 s start_period)
### 5. Queue Worker Service
**Restart Policy**:
```yaml
restart: always
```
**Environment**:
```yaml
environment:
  - APP_ENV=production
  - WORKER_DEBUG=false
  - WORKER_SLEEP_TIME=100000
  - WORKER_MAX_JOBS=10000
```
- Production mode without debug output
- 10,000 jobs per worker lifecycle
**Resource Limits**:
```yaml
deploy:
  replicas: 2  # 2 worker instances
  resources:
    limits:
      memory: 2G
      cpus: '2.0'
    reservations:
      memory: 1G
      cpus: '1.0'
```
- Workers need memory for job processing
- **2 replicas** for parallelism
**Graceful Shutdown**:
```yaml
stop_grace_period: 60s
```
- 60 seconds for job completion before shutdown
- Prevents aborted jobs
**Logging**:
```yaml
logging:
  driver: json-file
  options:
    max-size: "20m"
    max-file: "10"
    compress: "true"
```
- Workers log verbosely (job start, completion, errors)
- 200 MB total log storage
### 6. Certbot Service
**Restart Policy**:
```yaml
restart: always
```
**Auto-Renewal**:
```yaml
entrypoint: "/bin/sh -c 'trap exit TERM; while :; do certbot renew --webroot -w /var/www/certbot --quiet; sleep 12h & wait $${!}; done;'"
```
- Automatic renewal attempt every 12 hours
- Webroot challenge served through Nginx
**Volumes**:
```yaml
volumes:
  - certbot-conf:/etc/letsencrypt
  - certbot-www:/var/www/certbot
  - certbot-logs:/var/log/letsencrypt
```
- Certificates are shared with Nginx
## Network Configuration
**Security Isolation**:
```yaml
networks:
  frontend:
    driver: bridge
  backend:
    driver: bridge
    internal: true  # backend network is internal (no internet access)
  cache:
    driver: bridge
    internal: true  # cache network is internal
```
**Network Segmentation**:
- **Frontend**: Nginx, Certbot (internet access)
- **Backend**: PHP, PostgreSQL, queue worker (NO internet access)
- **Cache**: Redis (NO internet access)
**Security Benefits**:
- Backend services cannot initiate outbound connections
- Prevents data exfiltration after a compromise
- Zero-trust network architecture
## Volumes Configuration
**SSL/TLS Volumes**:
```yaml
certbot-conf:
  driver: local
certbot-www:
  driver: local
certbot-logs:
  driver: local
```
**Application Storage Volumes**:
```yaml
storage-logs:
  driver: local
storage-cache:
  driver: local
storage-queue:
  driver: local
storage-discovery:
  driver: local
storage-uploads:
  driver: local
```
**Database Volume**:
```yaml
db_data:
  driver: local
  # Optional: external volume for backups
  # driver_opts:
  #   type: none
  #   o: bind
  #   device: /mnt/db-backups/michaelschiemer-prod
```
**Volume best practices**:
- All volumes use `driver: local` (no host mounts)
- For backups: optionally use an external bind volume for the database
- No development host mounts in production
## Logging Strategy
**JSON Logging** für alle Services:
```yaml
logging:
  driver: json-file
  options:
    max-size: "10m"   # varies per service
    max-file: "5"     # varies per service
    compress: "true"
    labels: "service,environment"
```
**Log Rotation**:
| Service | Max Size | Max Files | Total Storage |
|---------|----------|-----------|---------------|
| Nginx | 10MB | 5 | 50MB |
| PHP | 10MB | 10 | 100MB |
| PostgreSQL | 20MB | 10 | 200MB |
| Redis | 10MB | 5 | 50MB |
| Queue Worker | 20MB | 10 | 200MB |
| Certbot | 5MB | 3 | 15MB |
| **TOTAL** | | | **615MB** |
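The totals in the table follow directly from the per-service `max-size` × `max-file` settings and can be recomputed with a few lines of shell:

```shell
# Recompute total worst-case log storage (MB) from the rotation settings above.
total=0
for spec in nginx:10:5 php:10:10 postgres:20:10 redis:10:5 worker:20:10 certbot:5:3; do
  size_files=${spec#*:}            # strip the service name -> e.g. "10:5"
  size=${size_files%:*}            # max-size in MB
  files=${size_files#*:}           # max-file count
  total=$((total + size * files))
done
echo "total log storage: ${total}MB"   # -> total log storage: 615MB
```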
**Log aggregation**:
- JSON format for the ELK stack (Elasticsearch, Logstash, Kibana)
- Labels for service identification
- Compressed log files for storage efficiency
## Resource Allocation
**Total Resource Requirements**:
| Service | Memory Limit | Memory Reservation | CPU Limit | CPU Reservation |
|---------|--------------|-------------------|-----------|-----------------|
| Nginx | 512M | 256M | 1.0 | 0.5 |
| PHP | 1G | 512M | 2.0 | 1.0 |
| PostgreSQL | 2G | 1G | 2.0 | 1.0 |
| Redis | 512M | 256M | 1.0 | 0.5 |
| Queue Worker (x2) | 4G | 2G | 4.0 | 2.0 |
| **TOTAL** | **8GB** | **4GB** | **10 CPUs** | **5 CPUs** |
**Server sizing recommendations**:
- **Minimum**: 8GB RAM, 4 CPUs (covers the resource limits)
- **Recommended**: 16GB RAM, 8 CPUs (headroom for the OS and load spikes)
- **Optimal**: 32GB RAM, 16 CPUs (production with monitoring)
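As a sanity check, the TOTAL row can be reproduced from the individual limits (memory in MB; the two worker replicas counted together as 4G/4.0, as in the table):

```shell
# Cross-check the TOTAL row of the resource table above.
mem_limits=$((512 + 1024 + 2048 + 512 + 4096))      # Nginx+PHP+PostgreSQL+Redis+Workers
mem_reserved=$((256 + 512 + 1024 + 256 + 2048))
cpu_limits=$(awk 'BEGIN { print 1.0 + 2.0 + 2.0 + 1.0 + 4.0 }')
echo "limits: $((mem_limits / 1024))GB RAM, ${cpu_limits} CPUs"   # -> 8GB RAM, 10 CPUs
echo "reservations: $((mem_reserved / 1024))GB RAM"               # -> 4GB RAM
```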
## Health Checks
**Health Check Strategy**:
| Service | Endpoint | Interval | Timeout | Retries | Start Period |
|---------|----------|----------|---------|---------|--------------|
| Nginx | HTTPS /health | 15s | 5s | 5 | 30s |
| PHP | php-fpm-healthcheck | 15s | 5s | 5 | 30s |
| PostgreSQL | pg_isready | 10s | 3s | 5 | 30s |
| Redis | redis-cli ping | 10s | 3s | 5 | 10s |
**Health check benefits**:
- Automatic service recovery on failures
- Docker restarts only unhealthy services
- Health status visible via `docker-compose ps`
## Deployment Workflow
### Initial Deployment
```bash
# 1. Prepare the server (see production-prerequisites.md)
# 2. Configure .env.production (see env-production-template.md)
# 3. Build and deploy
docker-compose -f docker-compose.yml \
-f docker-compose.production.yml \
--env-file .env.production \
up -d --build
# 4. Initialize SSL certificates
docker exec php php console.php ssl:init
# 5. Run database migrations
docker exec php php console.php db:migrate
# 6. Verify health checks
docker-compose -f docker-compose.yml \
-f docker-compose.production.yml \
ps
```
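Every command in this workflow repeats the same `-f`/`--env-file` flags; a small wrapper function (a local shell convenience, not part of the repository) keeps the calls short:

```shell
# dcp = "docker-compose production" (hypothetical helper for your shell profile).
dcp() {
  docker-compose -f docker-compose.yml \
                 -f docker-compose.production.yml \
                 --env-file .env.production "$@"
}
# Usage: dcp up -d --build   |   dcp ps   |   dcp logs -f php
```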
### Rolling Update (Zero-Downtime)
```bash
# 1. Pull the new version
git pull origin main
# 2. Build new images
docker-compose -f docker-compose.yml \
-f docker-compose.production.yml \
--env-file .env.production \
build --no-cache
# 3. Rolling update (service by service)
# Nginx
docker-compose -f docker-compose.yml \
-f docker-compose.production.yml \
up -d --no-deps web
# PHP (after Nginx)
docker-compose -f docker-compose.yml \
-f docker-compose.production.yml \
up -d --no-deps php
# Queue worker (after PHP)
docker-compose -f docker-compose.yml \
-f docker-compose.production.yml \
up -d --no-deps --scale queue-worker=2 queue-worker
# 4. Verify health checks
docker-compose -f docker-compose.yml \
-f docker-compose.production.yml \
ps
```
### Rollback Strategy
```bash
# 1. Check out the previous Git commit
git log --oneline -5
git checkout <previous-commit>
# 2. Rebuild and deploy
docker-compose -f docker-compose.yml \
-f docker-compose.production.yml \
--env-file .env.production \
up -d --build
# 3. Roll back the database (if needed)
docker exec php php console.php db:rollback 1
```
## Monitoring
### Container Status
```bash
# Status of all services
docker-compose -f docker-compose.yml \
-f docker-compose.production.yml \
ps
# Detailed information
docker inspect <container-name>
```
### Resource Usage
```bash
# CPU/Memory Usage
docker stats
# Per service
docker stats php db redis
```
### Logs
```bash
# All logs (follow)
docker-compose -f docker-compose.yml \
-f docker-compose.production.yml \
logs -f
# Per service
docker-compose -f docker-compose.yml \
-f docker-compose.production.yml \
logs -f php
# Last N lines
docker-compose -f docker-compose.yml \
-f docker-compose.production.yml \
logs --tail=100 php
```
### Health Check Status
```bash
# Health Check Logs
docker inspect --format='{{json .State.Health}}' php | jq
# Health History
docker inspect --format='{{range .State.Health.Log}}{{.Start}} {{.ExitCode}} {{.Output}}{{end}}' php
```
## Backup Strategy
### Database Backup
```bash
# Manual Backup
docker exec db pg_dump -U postgres michaelschiemer_prod > backup_$(date +%Y%m%d_%H%M%S).sql
# Automated Backup (Cron)
# /etc/cron.daily/postgres-backup
#!/bin/bash
docker exec db pg_dump -U postgres michaelschiemer_prod | gzip > /mnt/backups/michaelschiemer_$(date +%Y%m%d).sql.gz
```
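A daily dump grows without bound unless old files are pruned; a minimal retention sweep (the 14-day window and the `/mnt/backups` path are assumptions, adjust to your setup) could be appended to the same cron script:

```shell
# Delete compressed dumps older than 14 days (no-op if the directory is absent).
BACKUP_DIR="${BACKUP_DIR:-/mnt/backups}"
if [ -d "$BACKUP_DIR" ]; then
  find "$BACKUP_DIR" -name 'michaelschiemer_*.sql.gz' -mtime +14 -print -delete
fi
```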
### Volume Backup
```bash
# Backup all volumes
docker run --rm \
-v michaelschiemer_db_data:/data:ro \
-v $(pwd)/backups:/backup \
alpine tar czf /backup/db_data_$(date +%Y%m%d).tar.gz -C /data .
```
## Troubleshooting
### Service Won't Start
```bash
# Check logs
docker-compose -f docker-compose.yml \
-f docker-compose.production.yml \
logs <service>
# Check configuration
docker-compose -f docker-compose.yml \
-f docker-compose.production.yml \
config
```
### Health Check Failing
```bash
# Manual health check
docker exec php php-fpm-healthcheck
docker exec db pg_isready -U postgres
docker exec redis redis-cli ping
# Check health logs
docker inspect --format='{{json .State.Health}}' <container> | jq
```
### Memory Issues
```bash
# Check memory usage
docker stats
# Increase limits in docker-compose.production.yml
# Then restart service
docker-compose -f docker-compose.yml \
-f docker-compose.production.yml \
up -d --no-deps <service>
```
### Network Issues
```bash
# Check networks
docker network ls
docker network inspect michaelschiemer-prod_backend
# Test connectivity
docker exec php ping db
docker exec php nc -zv db 5432
```
## Security Considerations
### 1. Network Isolation
- ✅ Backend network is internal (no internet access)
- ✅ Cache network is internal
- ✅ Only frontend services expose ports
### 2. Volume Security
- ✅ No host mounts (application code in image)
- ✅ Read-only mounts where possible (SSL certificates)
- ✅ Named Docker volumes (managed by Docker)
### 3. Secrets Management
- ✅ Use `.env.production` (not committed to git)
- ✅ Use Vault for sensitive data
- ✅ No secrets in docker-compose files
### 4. Resource Limits
- ✅ All services have memory limits (prevent OOM)
- ✅ CPU limits prevent resource starvation
- ✅ Restart policies for automatic recovery
### 5. Logging
- ✅ JSON logging for security monitoring
- ✅ Log rotation prevents disk exhaustion
- ✅ Compressed logs for storage efficiency
## Best Practices
1. **Always use `.env.production`** - Never commit production secrets
2. **Test updates in staging first** - Use same docker-compose setup
3. **Monitor resource usage** - Adjust limits based on metrics
4. **Regular backups** - Automate database and volume backups
5. **Health checks** - Ensure all services have working health checks
6. **Log aggregation** - Send logs to centralized logging system (ELK)
7. **SSL renewal** - Monitor Certbot logs for renewal issues
8. **Security updates** - Regularly update Docker images
## See Also
- **Prerequisites**: `docs/deployment/production-prerequisites.md`
- **Environment Configuration**: `docs/deployment/env-production-template.md`
- **SSL Setup**: `docs/deployment/ssl-setup.md`
- **Database Migrations**: `docs/deployment/database-migration-strategy.md`
- **Logging Configuration**: `docs/deployment/logging-configuration.md`


@@ -1,381 +0,0 @@
# Docker Swarm + Traefik Deployment Guide
Production deployment guide for the Custom PHP Framework using Docker Swarm orchestration with Traefik load balancer.
## Architecture Overview
```
Internet → Traefik (SSL termination, load balancing)
                     │
           [Web Service - 3 replicas]
            ↓            ↓            ↓
       Database        Redis     Queue Workers
     (PostgreSQL) (Cache/Sessions)  (2 replicas)
```
**Key Components**:
- **Traefik v2.10**: Reverse proxy, SSL termination, automatic service discovery
- **Web Service**: 3 replicas of PHP-FPM + Nginx (HTTP only, Traefik handles HTTPS)
- **PostgreSQL 16**: Single instance database (manager node)
- **Redis 7**: Sessions and cache (manager node)
- **Queue Workers**: 2 replicas for background job processing
- **Docker Swarm**: Native container orchestration with rolling updates and health checks
## Prerequisites
1. **Docker Engine 28.0+** with Swarm mode enabled
2. **Production Server** with SSH access
3. **SSL Certificates** in `./ssl/` directory (cert.pem, key.pem)
4. **Environment Variables** in `.env` file on production server
5. **Docker Image** built and available
## Initial Setup
### 1. Initialize Docker Swarm
On production server:
```bash
docker swarm init
```
Verify:
```bash
docker node ls
# Should show 1 node as Leader
```
### 2. Create Docker Secrets
Create secrets from .env file values:
```bash
cd /home/deploy/framework
# Create secrets (one-time setup)
echo "$DB_PASSWORD" | docker secret create db_password -
echo "$APP_KEY" | docker secret create app_key -
echo "$VAULT_ENCRYPTION_KEY" | docker secret create vault_encryption_key -
echo "$SHOPIFY_WEBHOOK_SECRET" | docker secret create shopify_webhook_secret -
echo "$RAPIDMAIL_PASSWORD" | docker secret create rapidmail_password -
```
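Docker secrets are immutable, so re-running the `docker secret create` lines above fails once the secrets exist. A sketch of an idempotent variant (the helper name is illustrative):

```shell
# Create a secret only if it does not already exist, so setup is re-runnable.
create_secret() {
  local name="$1" value="$2"
  if docker secret inspect "$name" >/dev/null 2>&1; then
    echo "secret $name already exists, skipping"
  else
    printf '%s' "$value" | docker secret create "$name" -
  fi
}
# Usage: create_secret db_password "$DB_PASSWORD"
```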
Or use the automated script:
```bash
./scripts/setup-production-secrets.sh
```
Verify secrets:
```bash
docker secret ls
```
### 3. Build and Transfer Docker Image
On local machine:
**Option A: Via Private Registry** (if available):
```bash
# Build image
docker build -f Dockerfile.production -t 94.16.110.151:5000/framework:latest .
# Push to registry
docker push 94.16.110.151:5000/framework:latest
```
**Option B: Direct Transfer via SSH** (recommended for now):
```bash
# Build image
docker build -f Dockerfile.production -t 94.16.110.151:5000/framework:latest .
# Save and transfer to production
docker save 94.16.110.151:5000/framework:latest | \
ssh -i ~/.ssh/production deploy@94.16.110.151 'docker load'
```
### 4. Deploy Stack
On production server:
```bash
cd /home/deploy/framework
# Deploy the stack
docker stack deploy -c docker-compose.prod.yml framework
# Monitor deployment
watch docker stack ps framework
# Check service status
docker stack services framework
```
## Health Monitoring
### Check Service Status
```bash
# List all services
docker stack services framework
# Check specific service
docker service ps framework_web
# View service logs
docker service logs framework_web -f
docker service logs framework_traefik -f
docker service logs framework_db -f
```
### Health Check Endpoints
- **Main Health**: http://localhost/health (via Traefik)
- **Traefik Dashboard**: http://traefik.localhost:8080 (manager node only)
### Expected Service Replicas
| Service | Replicas | Purpose |
|---------|----------|---------|
| traefik | 1 | Reverse proxy + SSL |
| web | 3 | Application servers |
| db | 1 | PostgreSQL database |
| redis | 1 | Cache + sessions |
| queue-worker | 2 | Background jobs |
## Rolling Updates
### Update Application
1. Build new image with updated code:
```bash
docker build -f Dockerfile.production -t 94.16.110.151:5000/framework:latest .
```
2. Transfer to production (if no registry):
```bash
docker save 94.16.110.151:5000/framework:latest | \
ssh -i ~/.ssh/production deploy@94.16.110.151 'docker load'
```
3. Update the service:
```bash
# On production server
docker service update --image 94.16.110.151:5000/framework:latest framework_web
```
The update will:
- Roll out to 1 container at a time (`parallelism: 1`)
- Wait 10 seconds between updates (`delay: 10s`)
- Start new container before stopping old one (`order: start-first`)
- Automatically rollback on failure (`failure_action: rollback`)
### Monitor Update Progress
```bash
# Watch update status
watch docker service ps framework_web
# View update logs
docker service logs framework_web -f --tail 50
```
### Manual Rollback
If needed, rollback to previous version:
```bash
docker service rollback framework_web
```
## Troubleshooting
### Service Won't Start
Check service logs:
```bash
docker service logs framework_web --tail 100
```
Check task failures:
```bash
docker service ps framework_web --no-trunc
```
### Container Crashing
Inspect individual container:
```bash
# Get container ID
docker ps -a | grep framework_web
# View logs
docker logs <container_id>
# Exec into running container
docker exec -it <container_id> bash
```
### SSL/TLS Issues
Traefik handles SSL termination. Check Traefik logs:
```bash
docker service logs framework_traefik -f
```
Verify SSL certificates are mounted in docker-compose.prod.yml:
```yaml
volumes:
  - ./ssl:/ssl:ro
```
### Database Connection Issues
Check PostgreSQL health:
```bash
docker service logs framework_db --tail 50
# Exec into db container
docker exec -it $(docker ps -q -f name=framework_db) psql -U postgres -d framework_prod
```
### Redis Connection Issues
Check Redis availability:
```bash
docker service logs framework_redis --tail 50
# Test Redis connection
docker exec -it $(docker ps -q -f name=framework_redis) redis-cli ping
```
### Performance Issues
Check resource usage:
```bash
# Service resource limits
docker service inspect framework_web --format='{{json .Spec.TaskTemplate.Resources}}' | jq
# Container stats
docker stats
```
## Scaling
### Scale Web Service
```bash
# Scale up to 5 replicas
docker service scale framework_web=5
# Scale down to 2 replicas
docker service scale framework_web=2
```
### Scale Queue Workers
```bash
# Scale workers based on queue backlog
docker service scale framework_queue-worker=4
```
## Cleanup
### Remove Stack
```bash
# Remove entire stack
docker stack rm framework
# Verify removal
docker stack ls
```
### Remove Secrets
```bash
# List secrets
docker secret ls
# Remove specific secret
docker secret rm db_password
# Remove all framework secrets
docker secret ls | grep -E "db_password|app_key|vault_encryption_key" | awk '{print $2}' | xargs docker secret rm
```
### Leave Swarm
```bash
# Force leave Swarm (removes all services and secrets)
docker swarm leave --force
```
## Network Architecture
### Overlay Networks
- **traefik-public**: External network for Traefik ↔ Web communication
- **backend**: Internal network for Web ↔ Database/Redis communication
### Port Mappings
| Port | Service | Purpose |
|------|---------|---------|
| 80 | Traefik | HTTP (redirects to 443) |
| 443 | Traefik | HTTPS (production traffic) |
| 8080 | Traefik | Dashboard (manager node only) |
## Volume Management
### Named Volumes
| Volume | Purpose | Mounted In |
|--------|---------|------------|
| traefik-logs | Traefik access logs | traefik |
| storage-logs | Application logs | web, queue-worker |
| storage-uploads | User uploads | web |
| storage-queue | Queue data | queue-worker |
| db-data | PostgreSQL data | db |
| redis-data | Redis persistence | redis |
### Backup Volumes
```bash
# Backup database
docker exec $(docker ps -q -f name=framework_db) pg_dump -U postgres framework_prod > backup.sql
# Backup Redis (if persistence enabled)
docker exec $(docker ps -q -f name=framework_redis) redis-cli --rdb /data/dump.rdb
```
## Security Best Practices
1. **Secrets Management**: Never commit secrets to version control, use Docker Secrets
2. **Network Isolation**: Backend network is internal-only, no external access
3. **SSL/TLS**: Traefik enforces HTTPS, redirects HTTP → HTTPS
4. **Health Checks**: All services have health checks with automatic restart
5. **Resource Limits**: Production services have memory/CPU limits
6. **Least Privilege**: Containers run as www-data (not root) where possible
## Phase 2 - Monitoring (Coming Soon)
- Prometheus for metrics collection
- Grafana dashboards
- Automated PostgreSQL backups
- Email/Slack alerting
## Phase 3 - CI/CD (Coming Soon)
- Gitea Actions workflow
- Loki + Promtail for log aggregation
- Performance tuning
## Phase 4 - High Availability (Future)
- Multi-node Swarm cluster
- Varnish CDN cache layer
- PostgreSQL Primary/Replica with pgpool
- MinIO object storage
## References
- [Docker Swarm Documentation](https://docs.docker.com/engine/swarm/)
- [Traefik v2 Documentation](https://doc.traefik.io/traefik/)
- [Docker Secrets Management](https://docs.docker.com/engine/swarm/secrets/)


@@ -0,0 +1,297 @@
# Gitea Secrets Setup - Complete Guide
## Problem: "Secrets" Missing from Repository Settings
If you don't see a "Secrets" option under **Repository → Settings**, there can be several reasons. This guide covers all the ways secrets can be configured in Gitea.
## 📍 Where Do I Find Secrets in Gitea?
### Option 1: Repository Secrets (Default)
**Path:** `Repository → Settings → Secrets`
⚠️ **If this option is missing:**
- Gitea Actions must be enabled for the repository
- Required permission: repository owner or admin
- Some Gitea versions keep secrets elsewhere
### Option 2: Via the Actions Tabs
Some Gitea versions keep secrets under:
- **Repository → Settings → Actions → Secrets**
- **Repository → Actions → Secrets**
### Option 3: Organization/Admin Level
If repository secrets are unavailable, secrets can be set at the organization level:
1. **As admin:** `Site Administration → Actions → Secrets`
2. **As org owner:** `Organization Settings → Actions → Secrets`
### Option 4: Via the Gitea API (Programmatic)
If the UI option is missing, secrets can be set via the API:
```bash
# See: scripts/setup-gitea-secrets.sh
```
## 🔍 Diagnosis: Why Don't I See a Secrets Option?
### Checklist:
1. **Is Gitea Actions enabled?**
   - Go to: **Site Administration → Actions → Settings**
   - Make sure Actions is enabled
2. **Do you have the right permissions?**
   - You must be repository owner or admin
   - Check: **Repository → Settings → Collaborators**
3. **Which Gitea version is in use?**
   - Newer versions have secrets under Repository → Settings → Secrets
   - Older versions may use different paths
4. **Are Actions enabled for the repository?**
   - Check whether workflows are visible under **Repository → Actions**
   - If not, Actions must be enabled for the repository
## 🛠️ Solution: Setting Secrets Manually
### Method 1: Via the Gitea API (Recommended)
If the UI option is missing, use the setup script:
```bash
# Prepare the script
cd /home/michael/dev/michaelschiemer
# Set your Gitea credentials
export GITEA_URL="https://git.michaelschiemer.de"
export GITEA_TOKEN="your-gitea-token"   # see below for how to create a token
export REPO_OWNER="your-username"       # or organization name
export REPO_NAME="michaelschiemer"
# Set the secrets
./scripts/setup-gitea-secrets.sh
```
**Creating a Gitea token:**
1. Go to: `https://git.michaelschiemer.de/user/settings/applications`
2. Scroll to "Generate New Token"
3. Name: e.g. "CI/CD Secrets Setup"
4. Scopes: `write:repository` (at minimum)
5. Click "Generate Token"
6. Copy the token (it is shown only once!)
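The setup script presumably wraps Gitea's secrets endpoint (`PUT /api/v1/repos/{owner}/{repo}/actions/secrets/{name}`, available since Gitea 1.19). A minimal sketch of a single call, assuming the environment variables from above are set:

```shell
# Create or update one repository secret via the Gitea API (illustrative helper).
set_secret() {
  local name="$1" value="$2"
  curl -sf -X PUT \
    -H "Authorization: token ${GITEA_TOKEN}" \
    -H "Content-Type: application/json" \
    -d "{\"data\": \"${value}\"}" \
    "${GITEA_URL}/api/v1/repos/${REPO_OWNER}/${REPO_NAME}/actions/secrets/${name}"
}
# Usage: set_secret REGISTRY_USER admin
```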
### Method 2: Environment Variables in the Runner (Alternative)
If repository secrets are really unavailable, secrets can also be configured as environment variables in the runner:
**In `deployment/gitea-runner/docker-compose.yml`:**
```yaml
services:
  gitea-runner:
    environment:
      - REGISTRY_USER=admin
      - REGISTRY_PASSWORD=registry-secure-password-2025
      # more secrets...
```
⚠️ **Not recommended:** Less secure, since secrets are stored in docker-compose.yml.
### Method 3: Hardcoding Secrets in the Workflow (NOT RECOMMENDED!)
**⚠️ NEVER STORE PASSWORDS IN CODE!**
Use this method only for testing; it is insecure for production.
## 📝 Step by Step: Setting Secrets via the API
### Step 1: Create a Gitea Token
1. Go to: `https://git.michaelschiemer.de/user/settings/applications`
2. Scroll to "Generate New Token"
3. **Name:** `CI/CD Secrets Management`
4. **Scopes:** Select:
   - `read:repository`
   - `write:repository`
   - `read:organization` (for org repos)
5. Click **"Generate Token"**
6. **Copy the token immediately** (it is shown only once!)
### Step 2: Find the Repository Information
**Repository owner:**
- Go to your repository: `https://git.michaelschiemer.de/[username]/michaelschiemer`
- The first segment after `/` is the owner (username or org name)
**Repository name:**
- The second segment is the repository name (here `michaelschiemer`)
### Step 3: Run the Setup Script
```bash
cd /home/michael/dev/michaelschiemer
# Configure the variables
export GITEA_URL="https://git.michaelschiemer.de"
export GITEA_TOKEN="your-token-here"
export REPO_OWNER="your-username"
export REPO_NAME="michaelschiemer"
export REGISTRY_PASSWORD="registry-secure-password-2025"
# Run the script
bash scripts/setup-gitea-secrets.sh
```
The script automatically sets:
- `REGISTRY_USER` = `admin`
- `REGISTRY_PASSWORD` = value of `$REGISTRY_PASSWORD`
- `SSH_PRIVATE_KEY` = from `~/.ssh/production`
### Step 4: Verification
After running the script:
1. **Check that the secrets were set:**
```bash
# Test with a workflow
# Go to: Repository → Actions → Test Registry Credentials → Run workflow
```
2. **Or check via the API:**
```bash
curl -H "Authorization: token ${GITEA_TOKEN}" \
"${GITEA_URL}/api/v1/repos/${REPO_OWNER}/${REPO_NAME}/actions/secrets"
```
## 🔐 Alternative: Secrets in the Runner Environment
If repository secrets don't work at all, secrets can also be configured as environment variables in the runner:
### Step 1: Create the .env File
```bash
cd deployment/gitea-runner
cp .env.example .env
```
### Step 2: Add Secrets to .env
```bash
nano .env
```
Add:
```bash
REGISTRY_USER=admin
REGISTRY_PASSWORD=registry-secure-password-2025
# SSH_PRIVATE_KEY would not be set here (too long)
```
### Step 3: Adjust docker-compose.yml
```yaml
services:
  gitea-runner:
    environment:
      - REGISTRY_USER=${REGISTRY_USER:-admin}
      - REGISTRY_PASSWORD=${REGISTRY_PASSWORD}
      # ... rest of config
```
### Step 4: Adjust the Workflow
Secrets then have to be loaded differently in the workflow:
```yaml
- name: Login to Registry
  env:
    REGISTRY_USER: ${REGISTRY_USER}
    REGISTRY_PASSWORD: ${REGISTRY_PASSWORD}
  run: |
    # Secrets are now loaded from the runner environment
```
⚠️ **Drawback:** Secrets live in the runner environment and are not repository-specific.
## 🧪 Testing Whether Secrets Work
### Test 1: Run the Workflow
```bash
# Go to: Repository → Actions → Test Registry Credentials → Run workflow
```
### Test 2: Debug Output in the Workflow
Temporarily add debug output:
```yaml
- name: Debug Secrets
  run: |
    echo "REGISTRY_USER length: ${#REGISTRY_USER}"
    echo "REGISTRY_PASSWORD length: ${#REGISTRY_PASSWORD}"
    # Prints the length, not the content (for security)
```
If both are > 0, the secrets are working.
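The same length check generalizes to a small guard that fails fast when any required secret is empty (bash-specific, uses indirect expansion; the helper name is illustrative):

```shell
# Return non-zero and name every missing or empty variable.
require_env() {
  local missing=0 var
  for var in "$@"; do
    if [ -z "${!var}" ]; then
      echo "missing: $var" >&2
      missing=1
    fi
  done
  return "$missing"
}
# Usage: require_env REGISTRY_USER REGISTRY_PASSWORD || exit 1
```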
### Test 3: Local Test Script
```bash
export REGISTRY_USER="admin"
export REGISTRY_PASSWORD="registry-secure-password-2025"
./scripts/test-registry-credentials.sh
```
## 📚 Useful Links
- **Gitea Actions docs:** https://docs.gitea.io/en-us/actions/
- **Secrets API:** https://docs.gitea.io/en-us/api-usage/#repository-secrets
- **Setup script:** `scripts/setup-gitea-secrets.sh`
## ❓ FAQ
### Q: Why don't I see a "Secrets" option?
**A:** Possible reasons:
1. Gitea Actions is not enabled
2. You lack the required permissions (repository owner/admin)
3. Your Gitea version keeps secrets elsewhere
4. Secrets have to be set via the API
### Q: Can I set secrets for multiple repositories?
**A:** Yes, via:
- **Organization secrets:** for all repos in the organization
- **API:** run the script multiple times with different `REPO_NAME` values
### Q: Are secrets encrypted?
**A:** Yes, Gitea encrypts secrets before storing them.
### Q: How do I change a secret?
**A:**
- Via UI: edit it (if available)
- Via API: run the script again (this overwrites the secret)
- Via script: run `setup-gitea-secrets.sh` again
## 🚀 Quick Start
**Fastest method (when the API is available):**
```bash
export GITEA_URL="https://git.michaelschiemer.de"
export GITEA_TOKEN="your-token"
export REPO_OWNER="your-username"
export REPO_NAME="michaelschiemer"
export REGISTRY_PASSWORD="registry-secure-password-2025"
bash scripts/setup-gitea-secrets.sh
```
That's it! Secrets should now work.


@@ -1,800 +0,0 @@
# Production Deployment Guide
Comprehensive guide for deploying the Custom PHP Framework application to the production server.
## Table of Contents
1. [Architecture Overview](#architecture-overview)
2. [Prerequisites](#prerequisites)
3. [Security Setup](#security-setup)
4. [Docker Registry Setup](#docker-registry-setup)
5. [Production Image Build](#production-image-build)
6. [Deployment Process](#deployment-process)
7. [Troubleshooting](#troubleshooting)
8. [Monitoring](#monitoring)
---
## Architecture Overview
### Development vs Production
**Development** (docker-compose.yml):
- Separate containers: Nginx + PHP-FPM
- Source code via volume mounts
- Hot reload for development
- Xdebug enabled
**Production** (docker-compose.prod.yml):
- Single container: Supervisor → Nginx + PHP-FPM
- Code baked into the image
- Minimal volume mounts (logs/uploads only)
- Optimized for performance
### Production Stack
```
┌─────────────────────────────────────────────────┐
│ Production Server (94.16.110.151) │
│ │
│ ┌────────────┐ ┌────────────┐ ┌────────────┐│
│ │ Web │ │ PHP │ │ Redis ││
│ │ (Supervisor│ │ │ │ Cache ││
│ │ Nginx + │ │ │ │ ││
│ │ PHP-FPM) │ │ │ │ ││
│ └────────────┘ └────────────┘ └────────────┘│
│ │
│ ┌────────────┐ ┌────────────┐ ┌────────────┐│
│ │ PostgreSQL │ │ Queue │ │ Watchtower ││
│ │ Database │ │ Worker │ │ Auto-Update││
│ └────────────┘ └────────────┘ └────────────┘│
│ │
│ Monitoring (VPN-only): │
│ ┌────────────┐ ┌────────────┐ ┌────────────┐│
│ │ Prometheus │ │ Grafana │ │ Portainer ││
│ │ :9090 │ │ :3000 │ │ :9443 ││
│ └────────────┘ └────────────┘ └────────────┘│
└─────────────────────────────────────────────────┘
│ WireGuard VPN (10.8.0.0/24)
┌───┴────┐
│ Client │
└────────┘
```
---
## Prerequisites
### Server Requirements
- **OS**: Ubuntu 22.04 LTS (or newer)
- **RAM**: minimum 4GB (recommended: 8GB+)
- **CPU**: 2+ cores
- **Disk**: 50GB+ free space
- **Network**: static IP or DNS
### Installed Software
```bash
# Docker & Docker Compose
docker --version          # 24.0+
docker-compose --version  # 2.20+
# WireGuard (for secure access)
wg --version
# SSL tools
openssl version
```
### Ports
**Public (open in the firewall)**:
- `8888`: HTTP (optional, for HTTP→HTTPS redirect)
- `8443`: HTTPS (main entry point)
- `51820`: WireGuard VPN (UDP)
**VPN-only (via 10.8.0.1)**:
- `9090`: Prometheus
- `3000`: Grafana
- `9443`: Portainer
**Internal (not reachable externally)**:
- `5432`: PostgreSQL
- `6379`: Redis
- `9000`: PHP-FPM
---
## Security Setup
### 1. WireGuard VPN
WireGuard provides encrypted access to the production server for administration and monitoring.
**Server installation**:
```bash
# As root on the production server
apt update
apt install -y wireguard
# Generate keys
cd /etc/wireguard
umask 077
wg genkey | tee server_private.key | wg pubkey > server_public.key
# Server config
cat > /etc/wireguard/wg0.conf <<'EOF'
[Interface]
Address = 10.8.0.1/24
ListenPort = 51820
PrivateKey = <server_private_key>
# Client (development machine)
[Peer]
PublicKey = <client_public_key>
AllowedIPs = 10.8.0.2/32
EOF
# Start the service
systemctl enable wg-quick@wg0
systemctl start wg-quick@wg0
systemctl status wg-quick@wg0
```
**Client Configuration** (`/etc/wireguard/wg0-production.conf`):
```ini
[Interface]
Address = 10.8.0.2/32
PrivateKey = <client_private_key>
DNS = 1.1.1.1
[Peer]
PublicKey = <server_public_key>
Endpoint = 94.16.110.151:51820
AllowedIPs = 10.8.0.0/24, 94.16.110.151/32
PersistentKeepalive = 25
```
**Client Start**:
```bash
sudo wg-quick up wg0-production
sudo wg show # Verify connection
ping 10.8.0.1 # Test connectivity
```
### 2. Firewall Configuration
**UFW rules** (on the production server):
```bash
# Default policies
ufw default deny incoming
ufw default allow outgoing
# SSH (only from specific IPs)
ufw allow from <your_ip> to any port 22
# WireGuard
ufw allow 51820/udp
# HTTP/HTTPS
ufw allow 8888/tcp  # HTTP (optional)
ufw allow 8443/tcp  # HTTPS
# Enable the firewall
ufw enable
ufw status verbose
```
### 3. SSL/TLS Certificates
**Development/testing** (self-signed):
```bash
# Already present in ./ssl/
# - cert.pem
# - key.pem
```
**Production** (Let's Encrypt recommended):
```bash
# With certbot
certbot certonly --standalone -d yourdomain.com
# Copy the certificates to ./ssl/
```
---
## Docker Registry Setup
### Local Registry on Production Server
A local Docker registry runs on the production server for secure, private image management.
**Start the registry**:
```bash
docker run -d \
--restart=always \
--name registry \
-p 127.0.0.1:5000:5000 \
registry:2
```
**Verify**:
```bash
curl http://localhost:5000/v2/_catalog
```
**Configure the registry in Docker**:
`/etc/docker/daemon.json`:
```json
{
"insecure-registries": ["94.16.110.151:5000"]
}
```
```bash
sudo systemctl restart docker
```
---
## Production Image Build
### Build Process
The production image is built locally and then pushed to the registry.
**1. Production Dockerfile** (`Dockerfile.production`):
```dockerfile
# Multi-stage build for a small final image
FROM php:8.3-fpm-alpine AS base
# System dependencies (nodejs/npm are needed for the asset build below)
RUN apk add --no-cache \
    nginx \
    supervisor \
    postgresql-dev \
    libpq \
    nodejs \
    npm \
    && docker-php-ext-install pdo pdo_pgsql
# Composer binary (the base image does not ship it)
COPY --from=composer:2 /usr/bin/composer /usr/bin/composer
# PHP Configuration
COPY docker/php/php.production.ini /usr/local/etc/php/conf.d/production.ini
COPY docker/php/opcache.ini /usr/local/etc/php/conf.d/opcache.ini
COPY docker/php/zz-docker.production.conf /usr/local/etc/php-fpm.d/zz-docker.conf
# Nginx Configuration
COPY docker/nginx/nginx.production.conf /etc/nginx/nginx.conf
COPY docker/nginx/default.production.conf /etc/nginx/http.d/default.conf
# Supervisor Configuration
COPY docker/supervisor/supervisord.conf /etc/supervisor/conf.d/supervisord.conf
# Application Code
WORKDIR /var/www/html
COPY --chown=www-data:www-data . .
# Composer dependencies (Production only)
RUN composer install --no-dev --optimize-autoloader --no-interaction
# NPM build
RUN npm ci && npm run build
# Permissions
RUN chown -R www-data:www-data /var/www/html/storage \
&& chmod -R 775 /var/www/html/storage
# Start Supervisor (manages nginx + php-fpm)
CMD ["/usr/bin/supervisord", "-c", "/etc/supervisor/conf.d/supervisord.conf"]
```
**2. Build Command**:
```bash
# From the project root
docker build \
-f Dockerfile.production \
-t 94.16.110.151:5000/framework:latest \
.
```
**3. Push to Registry**:
```bash
docker push 94.16.110.151:5000/framework:latest
```
**4. Verify Push**:
```bash
curl http://94.16.110.151:5000/v2/framework/tags/list
```
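The tag list is JSON (`{"name": "framework", "tags": [...]}`), so the check can be automated; this sketch assumes `jq` is installed:

```shell
# Succeed only if the given tag is present in the registry's tag list.
has_tag() {  # has_tag <registry> <image> <tag>
  curl -sf "http://$1/v2/$2/tags/list" \
    | jq -e --arg t "$3" '.tags | index($t) != null' >/dev/null
}
# Usage: has_tag 94.16.110.151:5000 framework latest && echo "push verified"
```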
### Key Configuration Files
#### Supervisor Configuration (`docker/supervisor/supervisord.conf`)
```ini
[supervisord]
nodaemon=true
silent=false
logfile=/dev/null
logfile_maxbytes=0
pidfile=/var/run/supervisord.pid
loglevel=info
[program:php-fpm]
priority=10
command=php-fpm -F
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0
autorestart=true
startretries=3
[program:nginx]
; supervisord has no depends_on option; priority orders the startup instead
priority=20
command=nginx -g 'daemon off;'
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0
autorestart=true
startretries=3
```
**Key changes**:
- `silent=false` + `logfile=/dev/null`: Supervisor logs to stdout/stderr instead of a file
- Reason: Python's logging cannot open `/dev/stdout` or `/proc/self/fd/1` in append mode
#### PHP-FPM Production Config (`docker/php/zz-docker.production.conf`)
```ini
[www]
user = www-data
group = www-data
listen = 9000
listen.owner = www-data
listen.group = www-data
pm = dynamic
pm.max_children = 50
pm.start_servers = 10
pm.min_spare_servers = 5
pm.max_spare_servers = 20
pm.max_requests = 500
```
**Important**: set user/group explicitly to `www-data`, because the container runs as root.
---
## Deployment Process
### Docker Compose Setup
**Base configuration** (`docker-compose.yml`):
- Defines all services for development
- Is **not** deployed to the production server
**Production overrides** (`docker-compose.prod.yml`):
- Merged with the base config
- Production-specific settings
### Production Override Highlights
**Web Service**:
```yaml
web:
image: 94.16.110.151:5000/framework:latest
pull_policy: always # Always pull from the registry, never build
entrypoint: [] # Clear the entrypoint inherited from the base image
command: ["/usr/bin/supervisord", "-c", "/etc/supervisor/conf.d/supervisord.conf"]
user: root # Container runs as root; PHP-FPM workers as www-data
volumes:
- ./storage/logs:/var/www/html/storage/logs:rw
- ./storage/uploads:/var/www/html/storage/uploads:rw
- ./ssl:/var/www/ssl:ro
environment:
- APP_ENV=production
labels:
com.centurylinklabs.watchtower.enable: "true"
```
**Key overrides**:
1. `pull_policy: always`: prevents a local build, forces a registry pull
2. `entrypoint: []`: clears the entrypoint inherited from the base PHP image
3. `command: [...]`: explicit start command for Supervisor
4. `user: root`: required for Supervisor; PHP-FPM runs internally as www-data
### Deployment Steps
**1. Copy files to the server**:
```bash
# From the local development machine (via WireGuard)
scp docker-compose.prod.yml deploy@94.16.110.151:/home/deploy/framework/
scp .env.production deploy@94.16.110.151:/home/deploy/framework/.env
```
**2. On the server: pull and deploy**:
```bash
# SSH into the production server
ssh deploy@94.16.110.151
# Change into the project directory
cd /home/deploy/framework
# Pull the latest image
docker pull 94.16.110.151:5000/framework:latest
# Deploy the stack
docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d
# Check the status
docker-compose -f docker-compose.yml -f docker-compose.prod.yml ps
```
**3. Monitor logs**:
```bash
# All containers
docker-compose -f docker-compose.yml -f docker-compose.prod.yml logs -f
# A specific container
docker logs -f web
docker logs -f php
```
### Deployment Verification
**Container Health Checks**:
```bash
# All containers should be "healthy"
docker-compose ps
# Expected output:
# web Up (healthy)
# php Up (healthy)
# db Up (healthy)
# redis Up (healthy)
```
**Supervisor status (in the web container)**:
```bash
docker exec web supervisorctl status
# Output:
# nginx RUNNING pid 7, uptime 0:05:23
# php-fpm RUNNING pid 8, uptime 0:05:23
```
**Nginx & PHP-FPM Processes**:
```bash
docker exec web ps aux | grep -E 'nginx|php-fpm'
# Should show:
# root 1 supervisor
# root 7 nginx: master
# www-data nginx: worker (multiple)
# root 8 php-fpm: master
# www-data php-fpm: pool www (multiple)
```
**Application Test**:
```bash
# From the local machine (via WireGuard)
curl -k -I https://94.16.110.151:8443/
# Expected response:
# HTTP/2 200
# server: nginx
# content-type: text/html
```
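Right after `up -d` the containers may still be warming up, so a single `curl` can fail spuriously. A small polling helper avoids that; this is a generic sketch, not part of the deployment scripts:

```shell
# wait_for: poll a check command until it succeeds or the timeout (seconds)
# expires. Returns 0 on success, 1 on timeout.
wait_for() {
  local timeout=$1; shift
  local elapsed=0
  until "$@"; do
    if [ "$elapsed" -ge "$timeout" ]; then
      return 1
    fi
    sleep 1
    elapsed=$((elapsed + 1))
  done
  return 0
}

# Usage with the endpoint from this guide (-f makes curl fail on HTTP errors):
# wait_for 60 curl -kfs -o /dev/null https://94.16.110.151:8443/
```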
---
## Troubleshooting
### Problem 1: Supervisor Log File Permission Denied
**Symptom**:
```
PermissionError: [Errno 13] Permission denied: '/var/log/supervisor/supervisord.log'
```
**Cause**: Supervisor cannot write to `/var/log/supervisor/`, even as root.
**Solution**: change `supervisord.conf`:
```ini
silent=false
logfile=/dev/null
logfile_maxbytes=0
```
**Reason**: Python's logging library cannot open `/dev/stdout` or `/proc/self/fd/1` in append mode. `/dev/null` + `silent=false` makes Supervisor log to stdout/stderr.
### Problem 2: EACCES Errors in Web Container
**Symptom**:
```
CRIT could not write pidfile /var/run/supervisord.pid
spawnerr: unknown error making dispatchers for 'nginx': EACCES
```
**Cause**: The web container does not run as root but with the user inherited from the base config.
**Solution**: set `user: root` in `docker-compose.prod.yml`:
```yaml
web:
user: root
```
### Problem 3: Docker Entrypoint Override Not Taking Effect
**Symptom**: the container command shows the entrypoint prepended:
```
/usr/local/bin/docker-entrypoint.sh /usr/bin/supervisord -c ...
```
**Cause**: The base `docker-compose.yml` defines the `web` service with a separate build context, so the ENTRYPOINT inherited from the base PHP image is prepended.
**Solution**: clear the entrypoint explicitly:
```yaml
web:
image: 94.16.110.151:5000/framework:latest
pull_policy: always
entrypoint: [] # IMPORTANT: clear the entrypoint
command: ["/usr/bin/supervisord", "-c", "/etc/supervisor/conf.d/supervisord.conf"]
```
### Problem 4: Queue Worker Restarts Continuously
**Symptom**:
```
docker ps # shows queue-worker as "Restarting"
```
**Cause**: The base `docker-compose.yml` command looks for `/var/www/html/worker.php`, which does not exist.
**Temporary fix**: disable the service in `docker-compose.prod.yml`:
```yaml
queue-worker:
deploy:
replicas: 0
```
**Proper fix**: configure the correct worker command:
```yaml
queue-worker:
command: ["php", "/var/www/html/console.php", "queue:work"]
```
### Problem 5: HTTP Port 80 Not Reachable
**Symptom**: `curl http://94.16.110.151:8888/` → Connection refused
**Possible causes**:
1. Nginx not listening on port 80 (only 443)
2. Firewall blocks port 8888
3. Intentional (HTTPS-only configuration)
**Debug**:
```bash
# Check inside the container
docker exec web netstat -tlnp | grep :80
# Test the nginx config
docker exec web nginx -t
# Inspect the nginx config
docker exec web cat /etc/nginx/http.d/default.conf
```
**Fix (if an HTTP→HTTPS redirect is desired)**:
In `docker/nginx/default.production.conf`:
```nginx
server {
listen 80;
server_name _;
return 301 https://$host$request_uri;
}
```
---
## Monitoring
### Prometheus
**Access**: http://10.8.0.1:9090 (via WireGuard only)
**Configuration**: `monitoring/prometheus/prometheus.yml`
**Scraped Targets**:
- Framework Application Metrics
- Container Metrics (cAdvisor)
- Node Exporter (Server Metrics)
### Grafana
**Access**: http://10.8.0.1:3000 (via WireGuard only)
**Default Login**:
- User: `admin`
- Password: `${GRAFANA_PASSWORD}` (from `.env`)
**Dashboards**: `monitoring/grafana/provisioning/dashboards/`
### Portainer
**Access**: https://10.8.0.1:9443 (via WireGuard only)
**Features**:
- Container Management
- Stack Deployment
- Log Viewing
- Resource Usage
### Watchtower Auto-Update
Watchtower monitors containers carrying the label `com.centurylinklabs.watchtower.enable: "true"` and updates them automatically when a new image is available.
**Configuration**:
```yaml
watchtower:
environment:
WATCHTOWER_CLEANUP: "true"
WATCHTOWER_POLL_INTERVAL: 300 # 5 minutes
WATCHTOWER_LABEL_ENABLE: "true"
WATCHTOWER_NOTIFICATIONS: "shoutrrr"
WATCHTOWER_NOTIFICATION_URL: "${WATCHTOWER_NOTIFICATION_URL}"
```
**Monitor**:
```bash
docker logs -f watchtower
```
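To double-check which containers Watchtower will actually touch, the label can be printed per container. A sketch built on `docker ps` format templates (the `.Label` template function is standard `docker ps` functionality):

```shell
# watchtower_enabled: given lines of "<name> <label-value>" on stdin, print
# the names of containers whose watchtower label is "true".
watchtower_enabled() {
  awk '$2 == "true" { print $1 }'
}

# Usage:
# docker ps --format '{{.Names}} {{.Label "com.centurylinklabs.watchtower.enable"}}' \
#   | watchtower_enabled
```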
---
## Maintenance
### Image Updates
**1. Build a new image locally**:
```bash
docker build -f Dockerfile.production -t 94.16.110.151:5000/framework:latest .
docker push 94.16.110.151:5000/framework:latest
```
**2. On the server**:
```bash
# Watchtower detects the update automatically within 5 minutes
# Or manually:
docker-compose -f docker-compose.yml -f docker-compose.prod.yml pull
docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d
```
### Database Backups
```bash
# Manual Backup
docker exec db pg_dump -U framework_user framework_db > backup_$(date +%Y%m%d_%H%M%S).sql
# Automated (via cron)
0 2 * * * /home/deploy/scripts/backup-database.sh
```
### Log Rotation
Automatically rotate the logs in `./storage/logs/`:
```bash
# /etc/logrotate.d/framework
/home/deploy/framework/storage/logs/*.log {
daily
rotate 14
compress
delaycompress
notifempty
missingok
create 0640 www-data www-data
}
```
### SSL Certificate Renewal
**Let's Encrypt** (automatic via certbot):
```bash
certbot renew --deploy-hook "docker exec web nginx -s reload"
```
---
## Security Checklist
- [ ] WireGuard VPN configured and active
- [ ] Firewall (UFW) configured and enabled
- [ ] Only required ports open (8443, 51820)
- [ ] Monitoring reachable only via VPN (10.8.0.1:*)
- [ ] SSL/TLS certificates valid
- [ ] `.env` secrets not committed to Git
- [ ] Database credentials rotated
- [ ] Redis password set
- [ ] Docker registry runs locally (not public)
- [ ] Containers run with minimal privileges
- [ ] Watchtower auto-updates enabled
- [ ] Backup strategy implemented
- [ ] Log monitoring active
---
## Performance Tuning
### PHP-FPM
`docker/php/zz-docker.production.conf`:
```ini
; max concurrent requests
pm.max_children = 50
; initial workers
pm.start_servers = 10
; min idle workers
pm.min_spare_servers = 5
; max idle workers
pm.max_spare_servers = 20
; worker recycling threshold
pm.max_requests = 500
```
**Tuning based on RAM**:
- 4GB RAM: max_children = 30
- 8GB RAM: max_children = 50
- 16GB RAM: max_children = 100
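A common rule of thumb behind such numbers is available RAM divided by average worker size. The sketch below assumes ~60 MB per PHP-FPM worker and 25% of RAM reserved for the OS and other services; both are assumptions to adjust per workload, and the values above are deliberately more conservative:

```shell
# max_children: estimate pm.max_children from host RAM in MB.
# Reserves 25% of RAM, then divides by the assumed per-worker footprint.
max_children() {
  local ram_mb=$1 per_worker_mb=${2:-60}
  echo $(( ram_mb * 75 / 100 / per_worker_mb ))
}

# Usage:
# max_children 8192   # estimate for an 8 GB host
```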
### OPcache
`docker/php/opcache.ini`:
```ini
opcache.enable=1
opcache.memory_consumption=128
opcache.interned_strings_buffer=8
opcache.max_accelerated_files=10000
; production: no timestamp checks
opcache.validate_timestamps=0
opcache.revalidate_freq=0
```
### Nginx
```nginx
worker_processes auto;
worker_connections 1024;
keepalive_timeout 65;
client_max_body_size 20M;
```
---
## Contact & Support
**Production Server**: 94.16.110.151
**VPN Gateway**: 10.8.0.1
**Documentation**: `/home/deploy/framework/docs/`
**Issue Tracker**: [GitHub/GitLab URL]
---
## Change Log
### 2025-10-28 - Initial Production Deployment
**Changes**:
- Supervisor logging: `/dev/null` + `silent=false`
- docker-compose.prod.yml: `user: root` for web, php, queue-worker
- docker-compose.prod.yml: `entrypoint: []` for the web service
- docker-compose.prod.yml: `pull_policy: always` for registry images
**Deployed**:
- Image: `94.16.110.151:5000/framework:latest`
- Digest: `sha256:eee1db20b9293cf611f53d01de68e94df1cfb3c748fe967849e080d19b9e4c8b`
**Status**: ✅ Deployment successful, containers healthy
# Quick Deploy Guide
Quick guide for production deployments.
## Voraussetzungen
- WireGuard VPN active: `sudo wg-quick up wg0-production`
- SSH access configured
- Docker registry running on the production server
## Deployment in 5 Steps
### 1. Build and Push the Image
```bash
# From the project root
docker build -f Dockerfile.production -t 94.16.110.151:5000/framework:latest .
docker push 94.16.110.151:5000/framework:latest
```
**Verify Push**:
```bash
curl http://94.16.110.151:5000/v2/framework/tags/list
```
### 2. Copy Config Files to the Server
```bash
# If docker-compose.prod.yml or .env changed
scp docker-compose.prod.yml deploy@94.16.110.151:/home/deploy/framework/
scp .env.production deploy@94.16.110.151:/home/deploy/framework/.env
```
### 3. Deploy on the Server
```bash
ssh deploy@94.16.110.151
cd /home/deploy/framework
# Pull and deploy
docker-compose -f docker-compose.yml -f docker-compose.prod.yml pull
docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d
```
### 4. Check the Status
```bash
# Container Status
docker-compose -f docker-compose.yml -f docker-compose.prod.yml ps
# View logs
docker-compose -f docker-compose.yml -f docker-compose.prod.yml logs -f web php
# Supervisor status (in the web container)
docker exec web supervisorctl status
```
### 5. Test the Application
```bash
# From the local machine (via WireGuard)
curl -k -I https://94.16.110.151:8443/
# Expected result:
# HTTP/2 200
# server: nginx
```
## Rollback
If problems occur:
```bash
# On the server
cd /home/deploy/framework
# Find the previous image ID
docker images 94.16.110.151:5000/framework
# Switch to a specific image
docker-compose -f docker-compose.yml -f docker-compose.prod.yml down
docker tag 94.16.110.151:5000/framework@sha256:<old-digest> 94.16.110.151:5000/framework:latest
docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d
```
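Picking the previous digest can be scripted; `docker images` lists images newest first, so the second entry is the rollback candidate. A sketch, assuming `--format '{{.Digest}}'` output with one digest per line:

```shell
# previous_digest: from newest-first digest listing on stdin, print the
# second entry (the previous image).
previous_digest() {
  sed -n '2p'
}

# Usage:
# docker images --digests --format '{{.Digest}}' 94.16.110.151:5000/framework \
#   | previous_digest
```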
## Monitoring URLs
**Access only via WireGuard VPN (10.8.0.1)**:
- Prometheus: http://10.8.0.1:9090
- Grafana: http://10.8.0.1:3000 (admin / $GRAFANA_PASSWORD)
- Portainer: https://10.8.0.1:9443
## Watchtower Auto-Updates
Watchtower automatically monitors containers and updates them when a new image is available (checked every 5 minutes).
**Check status**:
```bash
docker logs watchtower
```
**Trigger manually**:
```bash
# Restart Watchtower (triggers an immediate check)
docker restart watchtower
```
## Troubleshooting
### Containers Not Healthy
```bash
# View logs
docker logs web
docker logs php
# Debug inside the container
docker exec -it web sh
docker exec -it php sh
# Supervisor Status
docker exec web supervisorctl status
# Nginx/PHP-FPM processes
docker exec web ps aux | grep -E 'nginx|php-fpm'
```
### Database Connection Issues
```bash
# Test the PostgreSQL connection
docker exec php php -r "new PDO('pgsql:host=db;dbname=framework_db', 'framework_user', 'password');"
# Database Logs
docker logs db
# Connect to the database
docker exec -it db psql -U framework_user -d framework_db
```
### Redis Connection Issues
```bash
# Test the Redis connection
docker exec php php -r "var_dump((new Redis())->connect('redis', 6379));"
# Redis Logs
docker logs redis
# Redis CLI
docker exec -it redis redis-cli
```
## Maintenance Commands
### Database Backup
```bash
# Manual Backup
docker exec db pg_dump -U framework_user framework_db > backup_$(date +%Y%m%d_%H%M%S).sql
```
### Logs Cleanup
```bash
# Clear storage logs (on the server)
docker exec web sh -c 'rm -rf /var/www/html/storage/logs/*.log'
# Docker Logs cleanup
docker system prune -f
docker volume prune -f
```
### Image Cleanup
```bash
# Remove old images
docker image prune -a -f
# Only untagged images
docker image prune -f
```
## Performance Check
```bash
# Container Resource Usage
docker stats
# PHP-FPM Status
docker exec web curl http://localhost/php-fpm-status
# Nginx Status
docker exec web curl http://localhost/nginx-status
# Database Connections
docker exec db psql -U framework_user -d framework_db -c "SELECT count(*) FROM pg_stat_activity;"
```
## SSL Certificate Renewal
```bash
# Let's Encrypt renewal (on the server as root)
certbot renew
docker exec web nginx -s reload
```
## Useful Aliases
Add these to `~/.bashrc` on the production server:
```bash
alias dc='docker-compose -f docker-compose.yml -f docker-compose.prod.yml'
alias dcup='dc up -d'
alias dcdown='dc down'
alias dcps='dc ps'
alias dclogs='dc logs -f'
alias dcrestart='dc restart'
```
Then you can simply use:
```bash
dcup # Deploy
dcps # Status
dclogs # View logs
```
# Testing Docker Registry Credentials
This document explains how to test the Docker registry credentials.
## Local Testing
### Prerequisites
1. Docker must be installed and running
2. curl must be available (it is installed automatically if missing)
### Usage
```bash
# Set the credentials as environment variables
export REGISTRY_USER="admin"
export REGISTRY_PASSWORD="your-registry-password"
# Run the test script
./scripts/test-registry-credentials.sh
```
### With Explicit Values
```bash
REGISTRY_USER="admin" \
REGISTRY_PASSWORD="your-password" \
REGISTRY_DOMAIN="registry.michaelschiemer.de" \
REGISTRY_HOST="94.16.110.151" \
REGISTRY_PORT="5000" \
./scripts/test-registry-credentials.sh
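For a quick manual check without the script, the same credentials can be sent with curl and HTTP Basic Auth; the header value is just `base64("user:password")`. A sketch, assuming the registry enforces Basic Auth:

```shell
# basic_auth_header: build the HTTP Basic Auth header for user/password.
basic_auth_header() {
  printf 'Authorization: Basic %s' "$(printf '%s:%s' "$1" "$2" | base64)"
}

# Usage:
# curl -s -H "$(basic_auth_header "$REGISTRY_USER" "$REGISTRY_PASSWORD")" \
#   https://registry.michaelschiemer.de/v2/_catalog
```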
```
## What Is Tested?
The script performs the following tests:
### 1. HTTP Connectivity
- Tests whether the registry is reachable over HTTP (`http://94.16.110.151:5000`)
- Expects status 401 (auth required) or 200
### 2. HTTPS Connectivity
- Tests whether the registry is reachable over HTTPS (`https://registry.michaelschiemer.de`)
- Expects status 401 (auth required) or 200
### 3. Docker Login over HTTP
- Attempts a Docker login over HTTP
- Requires an `insecure-registry` configuration in the Docker daemon
### 4. Docker Login over HTTPS
- Attempts a Docker login over HTTPS
- No additional configuration needed (recommended)
### 5. Registry API Access
- Tests whether the registry API works
- Lists the available repositories
## Test in CI/CD
### Via Gitea Workflow
The script can also be used in a Gitea workflow:
```yaml
- name: Test Registry Credentials
shell: bash
env:
REGISTRY_USER: ${{ secrets.REGISTRY_USER }}
REGISTRY_PASSWORD: ${{ secrets.REGISTRY_PASSWORD }}
run: |
./scripts/test-registry-credentials.sh
```
### Running the Workflow
There is a dedicated workflow, `.gitea/workflows/test-registry.yml`:
1. Go to: **Actions** → **Test Registry Credentials**
2. Click **Run workflow**
3. The workflow automatically tests the credentials from the Gitea secrets
## Troubleshooting
### Problem: "REGISTRY_PASSWORD ist nicht gesetzt"
**Solution:**
```bash
export REGISTRY_PASSWORD="your-password"
```
### Problem: "Unauthorized (401)" on Login
**Cause:** wrong credentials
**Solution:**
1. Check `REGISTRY_USER` - it should be `admin` (or the correct username)
2. Check `REGISTRY_PASSWORD` - it must match the password in `deployment/stacks/registry/auth/htpasswd`
3. Check the Gitea secrets:
- Repository → Settings → Secrets
- Make sure `REGISTRY_USER` and `REGISTRY_PASSWORD` are correct
### Problem: "HTTP response to HTTPS client"
**Cause:** Docker attempts HTTPS, but the registry runs on HTTP
**Solution:**
- For HTTP: the Docker daemon must be configured with `--insecure-registry`
- Or use HTTPS access via `registry.michaelschiemer.de` (recommended)
### Problem: Registry Not Reachable
**Cause:** the registry is not running, or there is a network problem
**Solution:**
```bash
# Check whether the registry is running
docker ps | grep registry
# Check the registry logs
docker logs registry
# Check the network
curl -v http://94.16.110.151:5000/v2/
```
### Problem: "404" over HTTPS
**Cause:** the Traefik routing is not configured correctly
**Solution:**
1. Check the Traefik labels in `deployment/stacks/registry/docker-compose.yml`
2. Check whether the registry is in the `traefik-public` network
3. Check the DNS configuration for `registry.michaelschiemer.de`
## Checking/Changing the Password
### Check the Password on the Server
```bash
ssh deploy@94.16.110.151
cd ~/deployment/stacks/registry
cat auth/htpasswd
```
### Set a New Password
```bash
ssh deploy@94.16.110.151
cd ~/deployment/stacks/registry
# Set a new password
htpasswd -B auth/htpasswd admin
# Restart the registry
docker compose restart registry
```
Then update the Gitea secrets:
- Repository → Settings → Secrets
- Update `REGISTRY_PASSWORD` with the new password
## Script Output
### Success
```
✅ Docker ist verfügbar
✅ curl ist verfügbar
✅ Registry erreichbar über HTTPS (Status: 401 - Auth erforderlich, das ist gut!)
✅ Docker Login erfolgreich!
✅ Credentials sind korrekt und funktionieren!
```
### Failure
```
❌ Docker Login fehlgeschlagen (Exit Code: 1)
⚠️ Fehler: Unauthorized (401)
Die Credentials sind falsch:
- Username: admin
- Password Länge: 24 Zeichen
```
## Integration in CI/CD
The script can be used in any workflow step:
```yaml
- name: Verify Registry Access
env:
REGISTRY_USER: ${{ secrets.REGISTRY_USER }}
REGISTRY_PASSWORD: ${{ secrets.REGISTRY_PASSWORD }}
run: |
./scripts/test-registry-credentials.sh
```
If the script fails (exit code 1), the workflow is stopped and you get clear error messages.
# Production Deployment Troubleshooting Checklist
Systematic troubleshooting for common deployment issues.
## Issue 1: Supervisor Log File Permission Denied
### Symptom
```
PermissionError: [Errno 13] Permission denied: '/var/log/supervisor/supervisord.log'
```
The container does not start; Supervisor cannot write its log file.
### Diagnosis
```bash
docker logs web # shows the permission error
docker exec web ls -la /var/log/supervisor/ # directory missing or no permissions
```
### Root Cause
- Supervisor tries to write to `/var/log/supervisor/supervisord.log`
- The directory does not exist or lacks write permissions
- Problematic even as root in a containerized environment
### Solution 1 (DOES NOT WORK)
**Attempt**: use `/proc/self/fd/1`
`docker/supervisor/supervisord.conf`:
```ini
logfile=/proc/self/fd/1
```
**Error**: `PermissionError: [Errno 13] Permission denied: '/proc/self/fd/1'`
**Reason**: Python's logging library (used by Supervisor) cannot open `/proc/self/fd/1` or `/dev/stdout` in append mode.
### Solution 2 (SUCCESSFUL)
**Fix**: `/dev/null` with `silent=false`
`docker/supervisor/supervisord.conf`:
```ini
[supervisord]
nodaemon=true
; IMPORTANT: keep logging despite /dev/null
silent=false
logfile=/dev/null
logfile_maxbytes=0
pidfile=/var/run/supervisord.pid
loglevel=info
```
**Why does this work?**
- `logfile=/dev/null`: no file logging
- `silent=false`: Supervisor logs to stdout/stderr
- Logs appear in `docker logs web`
### Verification
```bash
docker logs web
# Output:
# 2025-10-28 16:29:59,976 INFO supervisord started with pid 1
# 2025-10-28 16:30:00,980 INFO spawned: 'nginx' with pid 7
# 2025-10-28 16:30:00,982 INFO spawned: 'php-fpm' with pid 8
# 2025-10-28 16:30:02,077 INFO success: nginx entered RUNNING state
# 2025-10-28 16:30:02,077 INFO success: php-fpm entered RUNNING state
```
### Related Files
- `docker/supervisor/supervisord.conf`
- `Dockerfile.production` (COPY supervisord.conf)
---
## Issue 2: Web Container EACCES Errors
### Symptom
```
2025-10-28 16:16:52,152 CRIT could not write pidfile /var/run/supervisord.pid
2025-10-28 16:16:53,154 INFO spawnerr: unknown error making dispatchers for 'nginx': EACCES
2025-10-28 16:16:53,154 INFO spawnerr: unknown error making dispatchers for 'php-fpm': EACCES
```
### Diagnosis
```bash
# Check the container user
docker exec web whoami
# If it is not "root", that is the issue
# Check the Docker Compose config
docker inspect web | grep -i user
# Shows the user inherited from the base config
```
### Root Cause
- The `web` service in `docker-compose.prod.yml` does **not** set `user: root`
- It inherits `user: 1000:1000` or `user: www-data` from the base `docker-compose.yml`
- Supervisor needs root to start the nginx/php-fpm master processes
### Solution
**Fix**: set `user: root` explicitly
`docker-compose.prod.yml`:
```yaml
web:
image: 94.16.110.151:5000/framework:latest
user: root # ← ADD THIS
# ... rest of the config
```
Also add it for the `php` and `queue-worker` services:
```yaml
php:
image: 94.16.110.151:5000/framework:latest
user: root # ← ADD THIS
queue-worker:
image: 94.16.110.151:5000/framework:latest
user: root # ← ADD THIS
```
### Why user: root?
- **Container runs as root**: Supervisor master process
- **Nginx master**: root (worker processes as www-data via nginx.conf)
- **PHP-FPM master**: root (pool workers as www-data via php-fpm.conf)
`docker/php/zz-docker.production.conf`:
```ini
[www]
; ← worker processes run as www-data
user = www-data
group = www-data
```
### Verification
```bash
docker exec web whoami
# root
docker exec web ps aux | grep -E 'nginx|php-fpm'
# root 1 supervisord
# root 7 nginx: master process
# www-data 10 nginx: worker process
# root 8 php-fpm: master process
# www-data 11 php-fpm: pool www
```
### Related Files
- `docker-compose.prod.yml` (web, php, queue-worker services)
- `docker/php/zz-docker.production.conf`
- `docker/nginx/nginx.production.conf`
---
### Issue 3: Docker Entrypoint Override Not Taking Effect
### Symptom
The container command shows the entrypoint prepended:
```bash
docker ps
# COMMAND: "/usr/local/bin/docker-entrypoint.sh /usr/bin/supervisord -c ..."
```
Supervisor is not started directly but via a wrapper script.
### Diagnosis
```bash
# Check the container command
docker inspect web --format='{{.Config.Entrypoint}}'
# [/usr/local/bin/docker-entrypoint.sh]
docker inspect web --format='{{.Config.Cmd}}'
# [/usr/bin/supervisord -c /etc/supervisor/conf.d/supervisord.conf]
```
### Root Cause
1. The base `docker-compose.yml` defines the `web` service with a separate build:
```yaml
web:
build:
context: docker/nginx
dockerfile: Dockerfile
```
2. The production override sets `image:` but does **not** clear the inherited ENTRYPOINT:
```yaml
web:
image: 94.16.110.151:5000/framework:latest
command: ["/usr/bin/supervisord", "-c", "/etc/supervisor/conf.d/supervisord.conf"]
```
3. The base PHP image has an ENTRYPOINT that gets prepended
4. Docker Compose merge: ENTRYPOINT + CMD = final command
### Solution, Iteration 1 (DOES NOT WORK)
❌ **Attempt**: set only `command:`
`docker-compose.prod.yml`:
```yaml
web:
image: 94.16.110.151:5000/framework:latest
command: ["/usr/bin/supervisord", "-c", "/etc/supervisor/conf.d/supervisord.conf"]
```
**Result**: the entrypoint is still prepended
### Solution, Iteration 2 (DOES NOT WORK)
❌ **Attempt**: add `pull_policy: always`
`docker-compose.prod.yml`:
```yaml
web:
image: 94.16.110.151:5000/framework:latest
pull_policy: always # Force registry pull
command: ["/usr/bin/supervisord", "-c", "/etc/supervisor/conf.d/supervisord.conf"]
```
**Result**: the image is pulled from the registry, but the entrypoint is still prepended
### Solution, Iteration 3 (SUCCESSFUL)
✅ **Fix**: clear the entrypoint explicitly with `entrypoint: []`
`docker-compose.prod.yml`:
```yaml
web:
image: 94.16.110.151:5000/framework:latest
pull_policy: always # Always pull from registry, never build
entrypoint: [] # ← IMPORTANT: clear the entrypoint
command: ["/usr/bin/supervisord", "-c", "/etc/supervisor/conf.d/supervisord.conf"]
user: root
```
**Why `entrypoint: []`?**
- An empty array completely clears the inherited entrypoint
- `command:` is then started directly as PID 1
- No wrapper scripts, no indirection
### Verification
```bash
docker inspect web --format='{{.Config.Entrypoint}}'
# [] ← empty!
docker inspect web --format='{{.Config.Cmd}}'
# [/usr/bin/supervisord -c /etc/supervisor/conf.d/supervisord.conf]
docker exec web ps aux
# PID 1: /usr/bin/supervisord -c /etc/supervisor/conf.d/supervisord.conf
# No entrypoint wrapper!
```
### Related Files
- `docker-compose.prod.yml` (web service)
### Docker Compose Override Rules
```
Base Config + Override = Final Config
Base:
web:
build: docker/nginx
→ inherited ENTRYPOINT from base image
Override (insufficient):
web:
image: 94.16.110.151:5000/framework:latest
command: [...]
→ ENTRYPOINT still prepended to command
Override (correct):
web:
image: 94.16.110.151:5000/framework:latest
entrypoint: [] ← Clears inherited entrypoint
command: [...] ← Runs directly as PID 1
```
---
## Issue 4: Queue Worker Container Restarts
### Symptom
```bash
docker ps
# queue-worker Restarting (1) 5 seconds ago
```
Container restart loop, never healthy.
### Diagnosis
```bash
docker logs queue-worker
# Error: /var/www/html/worker.php not found
# or
# php: command not found
```
### Root Cause
The base `docker-compose.yml` has a development queue worker command:
```yaml
queue-worker:
command: ["php", "/var/www/html/worker.php"]
```
`worker.php` does not exist in the production image.
### Solution, Option 1: Disable the Service
✅ **Quick fix**: disable the queue worker
`docker-compose.prod.yml`:
```yaml
queue-worker:
deploy:
replicas: 0 # Disable service
```
### Solution, Option 2: Set the Correct Command
✅ **Proper fix**: use the console command
`docker-compose.prod.yml`:
```yaml
queue-worker:
image: 94.16.110.151:5000/framework:latest
user: root
command: ["php", "/var/www/html/console.php", "queue:work"]
# or, for a Supervisor-managed worker:
# entrypoint: []
# command: ["/usr/bin/supervisord", "-c", "/etc/supervisor/conf.d/queue-worker-supervisord.conf"]
```
### Verification
```bash
docker logs queue-worker
# [timestamp] INFO Queue worker started
# [timestamp] INFO Processing job: ...
```
### Related Files
- `docker-compose.yml` (base queue-worker definition)
- `docker-compose.prod.yml` (production override)
- `console.php` (framework console application)
---
### Issue 5: HTTP Port 80 Not Reachable
### Symptom
```bash
curl http://94.16.110.151:8888/
# curl: (7) Failed to connect to 94.16.110.151 port 8888: Connection refused
docker exec web curl http://localhost/
# curl: (7) Failed to connect to localhost port 80: Connection refused
```
### Diagnosis
```bash
# Check the nginx listening ports
docker exec web netstat -tlnp | grep nginx
# Shows only: 0.0.0.0:443
# Check the nginx config
docker exec web cat /etc/nginx/http.d/default.conf
# No "listen 80;" block
```
### Root Cause - Option 1: Intentional HTTPS-only
HTTP may be disabled on purpose (security best practice).
### Root Cause - Option 2: Missing HTTP Block
The nginx config has no HTTP listener, only HTTPS.
### Solution: Add an HTTP→HTTPS Redirect
✅ **Fix**: configure an HTTP redirect
`docker/nginx/default.production.conf`:
```nginx
# HTTP → HTTPS Redirect
server {
listen 80;
server_name _;
location / {
return 301 https://$host$request_uri;
}
}
# HTTPS Server
server {
listen 443 ssl http2;
server_name _;
ssl_certificate /var/www/ssl/cert.pem;
ssl_certificate_key /var/www/ssl/key.pem;
root /var/www/html/public;
index index.php;
location / {
try_files $uri $uri/ /index.php?$query_string;
}
location ~ \.php$ {
fastcgi_pass php:9000;
fastcgi_index index.php;
include fastcgi_params;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
}
}
```
### Verification
```bash
curl -I http://94.16.110.151:8888/
# HTTP/1.1 301 Moved Permanently
# Location: https://94.16.110.151:8888/
curl -k -I https://94.16.110.151:8443/
# HTTP/2 200
# server: nginx
```
### Related Files
- `docker/nginx/default.production.conf`
- `Dockerfile.production` (COPY nginx config)
---
## General Debugging Commands
### Container Inspection
```bash
# Status of all containers
docker-compose -f docker-compose.yml -f docker-compose.prod.yml ps
# Container Details
docker inspect web
# Container Logs
docker logs -f web
docker logs --tail 100 web
# Inside Container
docker exec -it web sh
docker exec -it php sh
```
### Supervisor Debugging
```bash
# Supervisor Status
docker exec web supervisorctl status
# Supervisor logs go to stdout/stderr
docker logs -f web
# Supervisor Config testen
docker exec web supervisord -c /etc/supervisor/conf.d/supervisord.conf -n
```
### Nginx Debugging
```bash
# Test the nginx config
docker exec web nginx -t
# Nginx reload
docker exec web nginx -s reload
# Nginx listening ports
docker exec web netstat -tlnp | grep nginx
# Nginx processes
docker exec web ps aux | grep nginx
```
### PHP-FPM Debugging
```bash
# PHP-FPM Status
docker exec web curl http://localhost/php-fpm-status
# Test the PHP-FPM config
docker exec web php-fpm -t
# PHP-FPM processes
docker exec web ps aux | grep php-fpm
# PHP Version
docker exec web php -v
# PHP Modules
docker exec web php -m
```
### Network Debugging
```bash
# Port listening
docker exec web netstat -tlnp
# DNS resolution
docker exec web nslookup db
docker exec web nslookup redis
# Network connectivity
docker exec web ping db
docker exec web ping redis
# HTTP request
docker exec web curl http://localhost/
```
### Database Debugging
```bash
# PostgreSQL Connection
docker exec php php -r "new PDO('pgsql:host=db;dbname=framework_db', 'framework_user', 'password');"
# Database Logs
docker logs db
# Connect to DB
docker exec -it db psql -U framework_user -d framework_db
# Check connections
docker exec db psql -U framework_user -d framework_db -c "SELECT count(*) FROM pg_stat_activity;"
```
### Performance Monitoring
```bash
# Container Resource Usage
docker stats
# Disk Usage
docker system df
# Image Sizes
docker images
# Volume Sizes
docker system df -v
```
---
## Checklist for a Successful Deploy
### Pre-Deployment
- [ ] Image built: `docker build -f Dockerfile.production -t 94.16.110.151:5000/framework:latest .`
- [ ] Image pushed: `docker push 94.16.110.151:5000/framework:latest`
- [ ] Registry reachable: `curl http://94.16.110.151:5000/v2/_catalog`
- [ ] WireGuard VPN active: `wg show`
- [ ] `.env.production` up to date on the server
- [ ] `docker-compose.prod.yml` up to date on the server
### Deployment
- [ ] SSH to the server: `ssh deploy@94.16.110.151`
- [ ] Pull the image: `docker-compose -f docker-compose.yml -f docker-compose.prod.yml pull`
- [ ] Start the stack: `docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d`
### Post-Deployment Verification
- [ ] Containers running: `docker-compose ps` shows all services as "Up (healthy)"
- [ ] Supervisor status: `docker exec web supervisorctl status` shows nginx/php-fpm as RUNNING
- [ ] Nginx listening: `docker exec web netstat -tlnp | grep :443`
- [ ] PHP-FPM listening: `docker exec web netstat -tlnp | grep :9000`
- [ ] Application reachable: `curl -k -I https://94.16.110.151:8443/` → HTTP/2 200
- [ ] Database reachable: `docker exec php php -r "new PDO(...);"`
- [ ] Redis reachable: `docker exec php php -r "(new Redis())->connect('redis', 6379);"`
- [ ] Logs clean: `docker logs web` shows no errors
### Monitoring
- [ ] Prometheus: http://10.8.0.1:9090 reachable
- [ ] Grafana: http://10.8.0.1:3000 reachable
- [ ] Portainer: https://10.8.0.1:9443 reachable
- [ ] Watchtower active: `docker logs watchtower` shows update checks
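The post-deployment verification steps above can be wrapped in a small retry helper so a deploy script waits until services actually become healthy instead of checking once. A sketch (the retry defaults and the example endpoint are assumptions):

```shell
# wait_for_ok CMD...: retry a check command until it succeeds or gives up.
# RETRIES and SLEEP can be overridden per call.
wait_for_ok() {
  retries="${RETRIES:-30}"
  pause="${SLEEP:-2}"
  i=0
  while [ "$i" -lt "$retries" ]; do
    if "$@" >/dev/null 2>&1; then
      echo "ok after $((i + 1)) attempt(s)"
      return 0
    fi
    i=$((i + 1))
    sleep "$pause"
  done
  echo "gave up after $retries attempts" >&2
  return 1
}

# Example usage (endpoint from the checklist above):
# wait_for_ok curl -fsk https://94.16.110.151:8443/
```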
---
## Quick Reference
### Most Common Failure Causes
1. **Supervisor logging**: Use `logfile=/dev/null` + `silent=false`
2. **User permissions**: Set `user: root` in docker-compose.prod.yml
3. **Entrypoint override**: Set `entrypoint: []` to clear the inherited entrypoint
4. **Pull policy**: Use `pull_policy: always` to force the registry image
### Key Config Changes
- `docker/supervisor/supervisord.conf`: `logfile=/dev/null`, `silent=false`
- `docker-compose.prod.yml`: `user: root`, `entrypoint: []`, `pull_policy: always`
- `docker/php/zz-docker.production.conf`: `user = www-data`, `group = www-data`
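For context, the supervisord settings listed above sit in `docker/supervisor/supervisord.conf` roughly like this (a sketch; only `logfile=/dev/null` and `silent=false` are from this deployment, the surrounding keys are assumptions):

```ini
[supervisord]
nodaemon=true
logfile=/dev/null        ; no logfile inside the container
logfile_maxbytes=0

[program:nginx]
command=nginx -g "daemon off;"
silent=false             ; forward program output to supervisord's stdout/stderr
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0
```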
@@ -21,7 +21,8 @@ Edit `app.ini` (usually in `/etc/gitea/app.ini` or `custom/conf/app.ini`):
```ini
[actions]
ENABLED = true
# Do NOT set DEFAULT_ACTIONS_URL - Gitea will automatically use its own instance
# Setting DEFAULT_ACTIONS_URL to a custom URL is no longer supported
```
Restart Gitea:
@@ -0,0 +1,183 @@
#!/bin/bash
# Interactive script to set Gitea secrets via the API
# Prompts for a token if none is provided
set -euo pipefail
# Repository info from the git remote
REPO_FULL=$(git remote get-url origin 2>/dev/null | sed -E 's/.*[:/]([^/]+)\/([^/]+)\.git$/\1\/\2/' || echo "michael/michaelschiemer")
REPO_OWNER=$(echo "$REPO_FULL" | cut -d'/' -f1)
REPO_NAME=$(echo "$REPO_FULL" | cut -d'/' -f2)
GITEA_URL="${GITEA_URL:-https://git.michaelschiemer.de}"
GITEA_TOKEN="${GITEA_TOKEN:-}"
# Colors
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m'
echo -e "${BLUE}════════════════════════════════════════════════════════════${NC}"
echo -e "${BLUE}Gitea Repository Secrets Setup via API${NC}"
echo -e "${BLUE}════════════════════════════════════════════════════════════${NC}"
echo ""
echo -e "Repository: ${GREEN}${REPO_OWNER}/${REPO_NAME}${NC}"
echo -e "Gitea URL: ${GREEN}${GITEA_URL}${NC}"
echo ""
# Check whether a token is available
if [ -z "$GITEA_TOKEN" ]; then
echo -e "${YELLOW}⚠️  GITEA_TOKEN not found${NC}"
echo ""
echo -e "${BLUE}You first need to create a Gitea access token:${NC}"
echo ""
echo "1. Open in your browser:"
echo -e "   ${GREEN}${GITEA_URL}/user/settings/applications${NC}"
echo ""
echo "2. Scroll to 'Generate New Token'"
echo ""
echo "3. Configuration:"
echo "   - Name: e.g. 'CI/CD Secrets Setup'"
echo "   - Scopes: ✅ write:repository (at minimum)"
echo ""
echo "4. Click 'Generate Token'"
echo ""
echo "5. Copy the token (it is shown only once!)"
echo ""
echo -e "${YELLOW}Then paste the token here (input is hidden):${NC}"
read -s -p "Gitea Token: " GITEA_TOKEN
echo ""
echo ""
fi
# Function to set a single secret
set_secret() {
local secret_name=$1
local secret_value=$2
echo -n "Setting $secret_name... "
local response=$(curl -s -w "\n%{http_code}" \
-X PUT \
-H "Authorization: token ${GITEA_TOKEN}" \
-H "Content-Type: application/json" \
"${GITEA_URL}/api/v1/repos/${REPO_OWNER}/${REPO_NAME}/actions/secrets/${secret_name}" \
-d "{
\"data\": \"${secret_value}\"
}" 2>&1)
local http_code=$(echo "$response" | tail -n1)
local body=$(echo "$response" | sed '$d')
if [ "$http_code" = "204" ] || [ "$http_code" = "201" ]; then
echo -e "${GREEN}✅ OK${NC}"
return 0
elif [ "$http_code" = "404" ]; then
echo -e "${RED}❌ Repository not found (404)${NC}"
echo "   Check: REPO_OWNER=${REPO_OWNER}, REPO_NAME=${REPO_NAME}"
return 1
elif [ "$http_code" = "403" ]; then
echo -e "${RED}❌ Insufficient permissions (403)${NC}"
echo "   Token needs the 'write:repository' scope"
return 1
elif [ "$http_code" = "401" ]; then
echo -e "${RED}❌ Invalid token (401)${NC}"
return 1
else
echo -e "${RED}❌ FAILED (HTTP $http_code)${NC}"
echo "Response: $body"
return 1
fi
}
# Check token validity
echo -e "${BLUE}Checking token validity...${NC}"
TOKEN_CHECK=$(curl -s -o /dev/null -w "%{http_code}" \
-H "Authorization: token ${GITEA_TOKEN}" \
"${GITEA_URL}/api/v1/user" 2>&1)
if [ "$TOKEN_CHECK" != "200" ]; then
echo -e "${RED}❌ Token is invalid or lacks sufficient permissions${NC}"
echo "HTTP status: $TOKEN_CHECK"
exit 1
fi
echo -e "${GREEN}✅ Token is valid${NC}"
echo ""
# Registry Password
REGISTRY_PASSWORD="${REGISTRY_PASSWORD:-registry-secure-password-2025}"
# SSH Private Key
SSH_KEY_PATH="${SSH_KEY_PATH:-$HOME/.ssh/production}"
if [ -f "$SSH_KEY_PATH" ]; then
SSH_PRIVATE_KEY=$(cat "$SSH_KEY_PATH")
echo -e "${GREEN}✓ SSH private key found: ${SSH_KEY_PATH}${NC}"
else
echo -e "${YELLOW}⚠️  SSH private key not found: ${SSH_KEY_PATH}${NC}"
echo ""
read -p "SSH key path (or Enter to skip): " custom_ssh_path
if [ -n "$custom_ssh_path" ] && [ -f "$custom_ssh_path" ]; then
SSH_PRIVATE_KEY=$(cat "$custom_ssh_path")
echo -e "${GREEN}✓ SSH private key loaded${NC}"
else
echo -e "${YELLOW}⚠️  Skipping SSH_PRIVATE_KEY${NC}"
SSH_PRIVATE_KEY=""
fi
fi
echo ""
echo -e "${BLUE}Setting secrets for repository: ${REPO_OWNER}/${REPO_NAME}${NC}"
echo ""
# Set the secrets
ERRORS=0
echo -e "${BLUE}Secret 1/3: REGISTRY_USER${NC}"
if set_secret "REGISTRY_USER" "admin"; then
echo ""
else
ERRORS=$((ERRORS + 1))
fi
echo -e "${BLUE}Secret 2/3: REGISTRY_PASSWORD${NC}"
if set_secret "REGISTRY_PASSWORD" "$REGISTRY_PASSWORD"; then
echo ""
else
ERRORS=$((ERRORS + 1))
fi
if [ -n "$SSH_PRIVATE_KEY" ]; then
echo -e "${BLUE}Secret 3/3: SSH_PRIVATE_KEY${NC}"
# Escape JSON special characters
SSH_PRIVATE_KEY_ESCAPED=$(echo "$SSH_PRIVATE_KEY" | sed 's/\\/\\\\/g' | sed 's/"/\\"/g' | sed ':a;N;$!ba;s/\n/\\n/g')
if set_secret "SSH_PRIVATE_KEY" "$SSH_PRIVATE_KEY_ESCAPED"; then
echo ""
else
ERRORS=$((ERRORS + 1))
fi
else
echo -e "${YELLOW}⚠️  Skipping SSH_PRIVATE_KEY (not found)${NC}"
echo ""
fi
# Summary
echo -e "${BLUE}════════════════════════════════════════════════════════════${NC}"
if [ $ERRORS -eq 0 ]; then
echo -e "${GREEN}✅ Secrets setup completed successfully!${NC}"
echo ""
echo -e "Verification:"
echo -e "  - Go to: ${GREEN}${GITEA_URL}/${REPO_OWNER}/${REPO_NAME}/settings${NC}"
echo -e "  - Or test the workflow: ${GREEN}Repository → Actions → Test Registry Credentials${NC}"
exit 0
else
echo -e "${RED}❌ Failed to set $ERRORS secret(s)${NC}"
echo ""
echo "Troubleshooting:"
echo "  - Check the token permissions (required: write:repository)"
echo "  - Check the repository name: ${REPO_OWNER}/${REPO_NAME}"
echo "  - Check that Actions is enabled for the repository"
exit 1
fi
@@ -0,0 +1,328 @@
#!/bin/bash
#
# Script to test the Docker registry credentials
# Tests both HTTP and HTTPS access to the registry
#
set -euo pipefail
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color
# Default values
REGISTRY_USER="${REGISTRY_USER:-admin}"
REGISTRY_PASSWORD="${REGISTRY_PASSWORD:-}"
REGISTRY_DOMAIN="${REGISTRY_DOMAIN:-registry.michaelschiemer.de}"
REGISTRY_HOST="${REGISTRY_HOST:-94.16.110.151}"
REGISTRY_PORT="${REGISTRY_PORT:-5000}"
# Helper functions
print_header() {
echo ""
echo -e "${BLUE}════════════════════════════════════════════════════════════${NC}"
echo -e "${BLUE}$1${NC}"
echo -e "${BLUE}════════════════════════════════════════════════════════════${NC}"
echo ""
}
print_success() {
echo -e "${GREEN}$1${NC}"
}
print_error() {
echo -e "${RED}$1${NC}"
}
print_warning() {
echo -e "${YELLOW}⚠️ $1${NC}"
}
print_info() {
echo -e "${BLUE} $1${NC}"
}
# Check that Docker is available
check_docker() {
if ! command -v docker >/dev/null 2>&1; then
print_error "Docker is not installed or not in PATH"
exit 1
fi
if ! docker info >/dev/null 2>&1; then
print_error "Docker daemon is not running or has insufficient permissions"
exit 1
fi
print_success "Docker is available"
}
# Check that curl is available
check_curl() {
if ! command -v curl >/dev/null 2>&1; then
print_warning "curl is not available, installing..."
if command -v apk >/dev/null 2>&1; then
apk add --no-cache curl ca-certificates >/dev/null 2>&1
elif command -v apt-get >/dev/null 2>&1; then
apt-get update >/dev/null 2>&1 && apt-get install -y curl ca-certificates >/dev/null 2>&1
else
print_error "curl cannot be installed automatically"
return 1
fi
fi
print_success "curl is available"
}
# Test HTTP access to the registry
test_http_connectivity() {
local test_url="$1"
print_info "Testing HTTP access to $test_url..."
HTTP_CODE=$(curl -s -o /dev/null -w "%{http_code}" "http://${test_url}/v2/" 2>&1 || echo "000")
if [ "$HTTP_CODE" = "401" ]; then
print_success "Registry reachable over HTTP (status: 401 - auth required, which is good!)"
return 0
elif [ "$HTTP_CODE" = "200" ]; then
print_success "Registry reachable over HTTP (status: 200 - no auth required)"
return 0
elif [ "$HTTP_CODE" = "000" ]; then
print_error "Registry not reachable over HTTP (curl error)"
return 1
else
print_warning "Registry responded over HTTP (status: $HTTP_CODE)"
return 1
fi
}
# Test HTTPS access to the registry
test_https_connectivity() {
local test_url="$1"
print_info "Testing HTTPS access to $test_url..."
HTTPS_CODE=$(curl -k -s -o /dev/null -w "%{http_code}" "https://${test_url}/v2/" 2>&1 || echo "000")
if [ "$HTTPS_CODE" = "401" ]; then
print_success "Registry reachable over HTTPS (status: 401 - auth required, which is good!)"
return 0
elif [ "$HTTPS_CODE" = "200" ]; then
print_success "Registry reachable over HTTPS (status: 200 - no auth required)"
return 0
elif [ "$HTTPS_CODE" = "404" ]; then
print_warning "Registry route not found over HTTPS (status: 404)"
print_info "Traefik routing may not be configured correctly"
return 1
elif [ "$HTTPS_CODE" = "000" ]; then
print_error "Registry not reachable over HTTPS (curl error)"
return 1
else
print_warning "Registry responded over HTTPS (status: $HTTPS_CODE)"
return 1
fi
}
# Test Docker login
test_docker_login() {
local registry_url="$1"
local use_http="${2:-false}"
print_info "Testing Docker login at $registry_url..."
if [ -z "$REGISTRY_PASSWORD" ]; then
print_error "REGISTRY_PASSWORD is not set!"
print_info "Set it with: export REGISTRY_PASSWORD='your-password'"
return 1
fi
# Attempt the Docker login
set +e
LOGIN_OUTPUT=$(echo "$REGISTRY_PASSWORD" | docker login "$registry_url" -u "$REGISTRY_USER" --password-stdin 2>&1)
LOGIN_EXIT_CODE=$?
set -e
if [ $LOGIN_EXIT_CODE -eq 0 ]; then
print_success "Docker login succeeded!"
echo "$LOGIN_OUTPUT" | grep -i "Login Succeeded" || true
return 0
else
print_error "Docker login failed (exit code: $LOGIN_EXIT_CODE)"
if echo "$LOGIN_OUTPUT" | grep -qi "unauthorized\|401"; then
print_warning "Error: Unauthorized (401)"
print_info "The credentials are wrong:"
print_info "  - Username: $REGISTRY_USER"
print_info "  - Password length: ${#REGISTRY_PASSWORD} characters"
print_info ""
print_info "Possible fixes:"
print_info "  1. Check REGISTRY_USER in the Gitea secrets (should be 'admin')"
print_info "  2. Check REGISTRY_PASSWORD in the Gitea secrets"
print_info "  3. Check the password in deployment/stacks/registry/auth/htpasswd on the server"
fi
if echo "$LOGIN_OUTPUT" | grep -qi "HTTP response to HTTPS client"; then
print_warning "Error: Docker expects HTTPS, but the registry runs on HTTP"
print_info "Fix: configure the Docker daemon with --insecure-registry=$registry_url"
fi
if echo "$LOGIN_OUTPUT" | grep -qi "certificate\|tls"; then
print_warning "Error: SSL/TLS problem"
print_info "Fix: check the SSL certificate configuration"
fi
echo ""
echo "Full error output:"
echo "$LOGIN_OUTPUT" | while IFS= read -r line; do
echo " $line"
done
return 1
fi
}
# Test registry API access
test_registry_api() {
local registry_url="$1"
local protocol="${2:-http}"
print_info "Testing registry API access over $protocol..."
API_URL="${protocol}://${registry_url}/v2/_catalog"
if [ "$protocol" = "https" ]; then
API_RESPONSE=$(curl -k -u "${REGISTRY_USER}:${REGISTRY_PASSWORD}" -s "$API_URL" 2>&1)
else
API_RESPONSE=$(curl -u "${REGISTRY_USER}:${REGISTRY_PASSWORD}" -s "$API_URL" 2>&1)
fi
if echo "$API_RESPONSE" | grep -qi "repositories"; then
print_success "Registry API access succeeded!"
echo "$API_RESPONSE" | jq '.' 2>/dev/null || echo "$API_RESPONSE"
return 0
elif echo "$API_RESPONSE" | grep -qi "unauthorized\|401"; then
print_error "Registry API access failed: Unauthorized"
return 1
else
print_warning "Registry API response: $API_RESPONSE"
return 1
fi
}
# Main function
main() {
print_header "Docker Registry Credentials Test"
# Preconditions
check_docker
check_curl
# Show the configuration in use
print_info "Configuration in use:"
echo "  REGISTRY_USER:     $REGISTRY_USER"
echo "  REGISTRY_PASSWORD: ${REGISTRY_PASSWORD:+*** (${#REGISTRY_PASSWORD} characters)}"
echo "  REGISTRY_DOMAIN:   $REGISTRY_DOMAIN"
echo "  REGISTRY_HOST:     $REGISTRY_HOST"
echo "  REGISTRY_PORT:     $REGISTRY_PORT"
if [ -z "$REGISTRY_PASSWORD" ]; then
print_error ""
print_error "REGISTRY_PASSWORD is not set!"
print_info ""
print_info "Usage:"
echo "  export REGISTRY_PASSWORD='your-password'"
echo "  ./scripts/test-registry-credentials.sh"
echo ""
print_info "Or in CI/CD:"
echo "  REGISTRY_PASSWORD=\"\${{ secrets.REGISTRY_PASSWORD }}\" ./scripts/test-registry-credentials.sh"
exit 1
fi
echo ""
# Test results
HTTP_AVAILABLE=false
HTTPS_AVAILABLE=false
HTTP_LOGIN_SUCCESS=false
HTTPS_LOGIN_SUCCESS=false
# Test 1: HTTP Connectivity
print_header "Test 1: HTTP Connectivity"
if test_http_connectivity "${REGISTRY_HOST}:${REGISTRY_PORT}"; then
HTTP_AVAILABLE=true
fi
# Test 2: HTTPS Connectivity
print_header "Test 2: HTTPS Connectivity"
if test_https_connectivity "$REGISTRY_DOMAIN"; then
HTTPS_AVAILABLE=true
fi
# Test 3: Docker login over HTTP
if [ "$HTTP_AVAILABLE" = true ]; then
print_header "Test 3: Docker Login over HTTP"
if test_docker_login "${REGISTRY_HOST}:${REGISTRY_PORT}" "http"; then
HTTP_LOGIN_SUCCESS=true
fi
else
print_warning "Skipping HTTP login test (registry not reachable)"
fi
# Test 4: Docker login over HTTPS
if [ "$HTTPS_AVAILABLE" = true ]; then
print_header "Test 4: Docker Login over HTTPS"
if test_docker_login "$REGISTRY_DOMAIN" "https"; then
HTTPS_LOGIN_SUCCESS=true
fi
else
print_warning "Skipping HTTPS login test (registry not reachable)"
fi
# Test 5: Registry API (only if a login succeeded)
if [ "$HTTP_LOGIN_SUCCESS" = true ] || [ "$HTTPS_LOGIN_SUCCESS" = true ]; then
print_header "Test 5: Registry API Access"
if [ "$HTTP_LOGIN_SUCCESS" = true ]; then
test_registry_api "${REGISTRY_HOST}:${REGISTRY_PORT}" "http" || true
fi
if [ "$HTTPS_LOGIN_SUCCESS" = true ]; then
test_registry_api "$REGISTRY_DOMAIN" "https" || true
fi
fi
# Summary
print_header "Summary"
if [ "$HTTP_LOGIN_SUCCESS" = true ] || [ "$HTTPS_LOGIN_SUCCESS" = true ]; then
print_success "✅ Credentials are correct and working!"
if [ "$HTTPS_LOGIN_SUCCESS" = true ]; then
print_success "✅ HTTPS login works (recommended)"
print_info "Use in workflows: registry.michaelschiemer.de"
fi
if [ "$HTTP_LOGIN_SUCCESS" = true ]; then
print_warning "⚠️  HTTP login works (fallback)"
print_info "Use in workflows: ${REGISTRY_HOST}:${REGISTRY_PORT}"
print_info "NOTE: requires an insecure-registry entry in the Docker daemon config"
fi
exit 0
else
print_error "❌ Credentials do not work!"
print_info ""
print_info "Next steps:"
print_info "1. Check REGISTRY_USER in the Gitea secrets"
print_info "2. Check REGISTRY_PASSWORD in the Gitea secrets"
print_info "3. Check the password in deployment/stacks/registry/auth/htpasswd on the server"
print_info "4. Check that the registry is running: docker ps | grep registry"
print_info "5. Check the registry logs: docker logs registry"
exit 1
fi
}
# Run the script
main "$@"
@@ -0,0 +1,58 @@
#!/bin/bash
# Quick test script to verify Dockerfile.production contains Git and Composer
set -e
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/../.." && pwd)"
DOCKERFILE="$PROJECT_ROOT/Dockerfile.production"
echo "🧪 Testing Dockerfile.production for Git and Composer"
echo ""
# Check if Dockerfile exists
if [ ! -f "$DOCKERFILE" ]; then
echo "❌ Dockerfile.production not found at $DOCKERFILE"
exit 1
fi
echo "✅ Found Dockerfile.production"
echo ""
# Check for git installation
echo "📋 Checking for git installation..."
if grep -q "git" "$DOCKERFILE"; then
echo "✅ git found in Dockerfile"
echo " Relevant lines:"
grep -n "git" "$DOCKERFILE" | head -3
else
echo "❌ git NOT found in Dockerfile"
exit 1
fi
# Check for composer installation
echo ""
echo "📋 Checking for composer installation..."
if grep -q "composer" "$DOCKERFILE"; then
echo "✅ composer found in Dockerfile"
echo " Relevant lines:"
grep -n "composer" "$DOCKERFILE" | head -3
else
echo "❌ composer NOT found in Dockerfile"
exit 1
fi
# Check for entrypoint copy
echo ""
echo "📋 Checking for entrypoint.sh copy..."
if grep -q "entrypoint.sh\|ENTRYPOINT" "$DOCKERFILE"; then
echo "✅ entrypoint.sh copy/ENTRYPOINT found"
echo " Relevant lines:"
grep -n "entrypoint\|ENTRYPOINT" "$DOCKERFILE" | head -3
else
echo "❌ entrypoint.sh copy/ENTRYPOINT NOT found"
exit 1
fi
echo ""
echo "✅ All Dockerfile checks passed!"
@@ -0,0 +1,64 @@
#!/bin/bash
# Quick test script to verify entrypoint.sh Git functionality
set -e
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/../.." && pwd)"
ENTRYPOINT="$PROJECT_ROOT/docker/entrypoint.sh"
echo "🧪 Testing entrypoint.sh Git functionality"
echo ""
# Check if entrypoint.sh exists
if [ ! -f "$ENTRYPOINT" ]; then
echo "❌ entrypoint.sh not found at $ENTRYPOINT"
exit 1
fi
echo "✅ Found entrypoint.sh"
echo ""
# Check for GIT_REPOSITORY_URL handling
echo "📋 Checking for GIT_REPOSITORY_URL handling..."
if grep -q "GIT_REPOSITORY_URL" "$ENTRYPOINT"; then
echo "✅ GIT_REPOSITORY_URL found in entrypoint.sh"
else
echo "❌ GIT_REPOSITORY_URL NOT found in entrypoint.sh"
exit 1
fi
# Check for git clone
echo "📋 Checking for git clone functionality..."
if grep -q "git clone" "$ENTRYPOINT"; then
echo "✅ git clone found"
else
echo "❌ git clone NOT found"
exit 1
fi
# Check for git pull
echo "📋 Checking for git pull functionality..."
if grep -q "git.*pull\|git fetch\|git reset" "$ENTRYPOINT"; then
echo "✅ git pull/fetch/reset found"
else
echo "❌ git pull/fetch/reset NOT found"
exit 1
fi
# Check for composer install
echo "📋 Checking for composer install..."
if grep -q "composer install" "$ENTRYPOINT"; then
echo "✅ composer install found"
else
echo "❌ composer install NOT found"
exit 1
fi
echo ""
echo "📝 Relevant entrypoint.sh sections:"
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
grep -A 60 "Git Clone/Pull functionality" "$ENTRYPOINT" | head -65
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo ""
echo "✅ All checks passed! Entrypoint.sh contains Git functionality."