feat: CI/CD pipeline setup complete - Ansible playbooks updated, secrets configured, workflow ready
deployment/.gitignore (vendored, 62 lines deleted)
@@ -1,62 +0,0 @@
# Deployment .gitignore
# Exclude sensitive files and generated content

# Vault password files
.vault_pass*
.vault-pass*
vault-password-file

# Generated environment files
**/.env.*
**/environments/.env.*
!**/environments/*.env.template

# Ansible logs
infrastructure/logs/*.log
*.log

# SSH keys
*.pem
*.key
*_rsa*
*_ed25519*

# Temporary files
*.tmp
*.temp
.tmp/
.temp/

# Backup files
*.bak
*.backup
*~

# Local configuration overrides
local.yml
local.env
override.yml

# Docker volumes and data
volumes/
data/

# OS generated files
.DS_Store
.DS_Store?
._*
.Spotlight-V100
.Trashes
ehthumbs.db
Thumbs.db

# IDE files
.vscode/
.idea/
*.swp
*.swo
*~

# Runtime files
*.pid
*.socket
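The negation pattern above re-includes environment templates while generated env files stay ignored. One way to sanity-check rules like this is `git check-ignore` in a throwaway repository; the paths below are illustrative only:

```shell
# Illustrative check of the ignore rules in a temporary repo.
tmp=$(mktemp -d)
cd "$tmp"
git init -q .
mkdir -p deployment/environments
printf '%s\n' '**/.env.*' '**/environments/.env.*' \
  '!**/environments/*.env.template' > deployment/.gitignore
touch deployment/environments/.env.production
touch deployment/environments/production.env.template
# generated env file should be ignored, template should not be
git check-ignore -q deployment/environments/.env.production && echo "generated env file: ignored"
git check-ignore -q deployment/environments/production.env.template || echo "template: tracked"
```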
deployment/DEPLOYMENT-STATUS.md (new file, 430 lines)
@@ -0,0 +1,430 @@
# Deployment Status - Gitea Actions Runner Setup

**Status**: 🚧 BLOCKED - Phase 1, Step 1.1
**Last Updated**: 2025-10-30
**Target Server**: 94.16.110.151 (Netcup)

---

## Current Status

### ✅ Completed

**Phase 1 - finished sub-steps**:
1. ✅ Runner directory structure verified: `/home/michael/dev/michaelschiemer/deployment/gitea-runner/`
2. ✅ `.env.example` template analyzed (23 lines)
3. ✅ `docker-compose.yml` architecture understood (47 lines, Docker-in-Docker)
4. ✅ `.env` file created via: `cp deployment/gitea-runner/.env.example deployment/gitea-runner/.env`

### ⚠️ BLOCKER - Critical Error

**Problem**: Gitea admin panel not reachable

**URL**: `https://git.michaelschiemer.de/admin/actions/runners`
**Error**: `404 page not found`

**Impact**:
- ❌ Cannot fetch the registration token (Phase 1, Step 1.1)
- ❌ Cannot complete `.env` (Step 1.2)
- ❌ Cannot register the runner (Step 1.3)
- ❌ All subsequent phases (2-8) blocked

---

## File Status

### `/home/michael/dev/michaelschiemer/deployment/gitea-runner/.env`

**Status**: ✅ Created (this session)
**Source**: copy of `.env.example`
**Problem**: `GITEA_RUNNER_REGISTRATION_TOKEN` is empty

**Current contents**:

```bash
# Gitea Actions Runner Configuration

# Gitea Instance URL (must be accessible from runner)
GITEA_INSTANCE_URL=https://git.michaelschiemer.de

# Runner Registration Token (get from Gitea: Admin > Actions > Runners)
# To generate: Gitea UI > Site Administration > Actions > Runners > Create New Runner
GITEA_RUNNER_REGISTRATION_TOKEN= # ← EMPTY - BLOCKED by the 404

# Runner Name (appears in Gitea UI)
GITEA_RUNNER_NAME=dev-runner-01

# Runner Labels (comma-separated)
# Format: label:image
GITEA_RUNNER_LABELS=ubuntu-latest:docker://node:16-bullseye,ubuntu-22.04:docker://node:16-bullseye,debian-latest:docker://debian:bullseye

# Optional: Custom Docker registry for job images
# DOCKER_REGISTRY_MIRROR=https://registry.michaelschiemer.de

# Optional: Runner capacity (max concurrent jobs)
# GITEA_RUNNER_CAPACITY=1
```
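Until the token line is filled in, any registration attempt will fail. A small, hypothetical pre-flight check (not a script from this repo) can catch that early; the variable list and file path are assumptions for illustration:

```shell
# Hypothetical helper: verify that the required variables in a
# runner .env file are non-empty before attempting registration.
check_runner_env() {
  local file=$1 status=0 var
  for var in GITEA_INSTANCE_URL GITEA_RUNNER_REGISTRATION_TOKEN GITEA_RUNNER_NAME; do
    # require at least one non-space, non-comment character after '='
    if ! grep -Eq "^${var}=[^[:space:]#]" "$file"; then
      echo "missing or empty: $var"
      status=1
    fi
  done
  return $status
}

# Example: a file whose token line is still empty fails the check.
printf '%s\n' 'GITEA_INSTANCE_URL=https://git.michaelschiemer.de' \
  'GITEA_RUNNER_REGISTRATION_TOKEN=' \
  'GITEA_RUNNER_NAME=dev-runner-01' > /tmp/runner.env
check_runner_env /tmp/runner.env || echo "env not ready"
```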
---

## Error Analysis: 404 on the Gitea Admin Panel

### Likely Causes (by priority)

#### 1. Gitea not yet deployed ⚠️ **MOST LIKELY**

**Problem**: phase-ordering conflict in SETUP-GUIDE.md

- Phase 1 requires Gitea to be reachable
- Phase 3 deploys Gitea onto the production server
- A classic chicken-and-egg problem

**Evidence**: SETUP-GUIDE.md Phase 3, Step 3.1 shows:

```markdown
# 4. Gitea (Git Server + MySQL + Redis)
cd ../gitea
docker compose up -d
docker compose logs -f
# Wait for "Listen: http://0.0.0.0:3000"
```

**Solution**: run Phase 3 BEFORE Phase 1

#### 2. Gitea Actions feature disabled

**Problem**: Actions not enabled in `app.ini`

**Check needed**:

```bash
ssh deploy@94.16.110.151
grep -A 5 "\[actions\]" ~/deployment/stacks/gitea/data/gitea/conf/app.ini
```

**Expected result**:

```ini
[actions]
ENABLED = true
```
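If the section is missing, the official Gitea container images can also inject it through environment variables, using the `GITEA__SECTION__KEY` mapping applied by the image's environment-to-ini helper. A sketch of what the compose override might look like; the service name and surrounding file layout are assumptions:

```yaml
# Sketch only: service name and file layout are assumptions.
services:
  gitea:
    environment:
      # environment-to-ini maps this to "[actions] ENABLED = true" in app.ini
      - GITEA__actions__ENABLED=true
```

After changing either `app.ini` or the environment, the container needs a restart for the setting to take effect.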
#### 3. Wrong URL (different Gitea version)

**Possible alternative URLs**:
- `https://git.michaelschiemer.de/admin`
- `https://git.michaelschiemer.de/user/settings/actions`
- `https://git.michaelschiemer.de/admin/runners`

#### 4. Authentication/Authorization Problem

**Possible causes**:
- User not logged in to Gitea
- User has no admin rights
- Session expired

#### 5. Gitea service not started

**Check needed**:

```bash
ssh deploy@94.16.110.151
cd ~/deployment/stacks/gitea
docker compose ps
```

---

## Investigation Plan

### Step 1: Check base Gitea accessibility

```bash
# Test whether Gitea is running at all
curl -I https://git.michaelschiemer.de
```

**Expected result**:
- HTTP 200 → Gitea is running
- Connection error → Gitea not deployed

### Step 2: Browser verification

1. Open `https://git.michaelschiemer.de` directly
2. Verify the homepage loads
3. Check login status
4. Verify admin rights

### Step 3: Test alternative admin panel URLs

```bash
# Try different paths
curl -I https://git.michaelschiemer.de/admin
curl -I https://git.michaelschiemer.de/user/settings/actions
curl -I https://git.michaelschiemer.de/admin/runners
```
### Step 4: Check Gitea configuration (SSH required)

```bash
ssh deploy@94.16.110.151
grep -A 5 "\[actions\]" ~/deployment/stacks/gitea/data/gitea/conf/app.ini
```

### Step 5: Check Gitea stack status (SSH required)

```bash
ssh deploy@94.16.110.151
cd ~/deployment/stacks/gitea
docker compose ps
docker compose logs gitea --tail 50
```

---

## Alternative Solution Approaches

### Option A: Change the phase order ⭐ **RECOMMENDED**

**Approach**: run Phase 3 first, then Phase 1

**Rationale**:
- Gitea must be deployed before a runner can be registered
- Phase 3 deploys the complete infrastructure (Traefik, PostgreSQL, Registry, **Gitea**, Monitoring)
- Phase 1 can then proceed normally

**Sequence**:
1. Run Phase 3 completely (infrastructure deployment)
2. Verify Gitea accessibility
3. Enable Gitea Actions in the UI
4. Return to Phase 1 for the runner setup
5. Continue with Phases 2, 4-8

### Option B: CLI-based runner registration

**Approach**: register the runner via the Gitea CLI instead of the web UI

```bash
# On the production server
ssh deploy@94.16.110.151
docker exec gitea gitea admin actions generate-runner-token

# Copy the token back to the dev machine
# and enter it in .env
```
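Once a token has been obtained (by whichever route), writing it into `.env` can be scripted instead of edited by hand. `set_runner_token` below is a hypothetical helper, not a script from this repo, and assumes GNU `sed`:

```shell
# Hypothetical helper: overwrite the (empty) token line in a .env file.
set_runner_token() {
  local env_file=$1 token=$2
  sed -i "s|^GITEA_RUNNER_REGISTRATION_TOKEN=.*|GITEA_RUNNER_REGISTRATION_TOKEN=${token}|" "$env_file"
}

# Example against a throwaway file with an empty token line:
printf 'GITEA_RUNNER_REGISTRATION_TOKEN=\n' > /tmp/demo.env
set_runner_token /tmp/demo.env "abcdef123456"
grep GITEA_RUNNER_REGISTRATION_TOKEN /tmp/demo.env
```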
### Option C: Manual token generation

**Approach**: generate the token directly in the Gitea database (last resort only)

**WARNING**: use only if all other options fail

---

## Docker-in-Docker Architecture (Reference)

### Services

**gitea-runner**:
- Image: `gitea/act_runner:latest`
- Purpose: main runner service
- Volumes:
  - `./data:/data` (runner data)
  - `/var/run/docker.sock:/var/run/docker.sock` (host Docker socket)
  - `./config.yaml:/config.yaml:ro` (configuration)
- Environment: variables from the `.env` file
- Network: `gitea-runner` bridge network

**docker-dind**:
- Image: `docker:dind`
- Purpose: isolated Docker daemon for job execution
- Privileged: `true` (required for nested containerization)
- TLS: `DOCKER_TLS_CERTDIR=/certs`
- Volumes:
  - `docker-certs:/certs` (TLS certificates)
  - `docker-data:/var/lib/docker` (Docker layer storage)
- Command: `dockerd --host=unix:///var/run/docker.sock --host=tcp://0.0.0.0:2376 --tlsverify`

### Networks

**gitea-runner** bridge network:
- Isolates the runner infrastructure from the host
- Secure TLS communication between the services

### Volumes

- `docker-certs`: shared TLS certificates for runner ↔ dind
- `docker-data`: persistent Docker layer storage
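The service description above can be condensed into a compose sketch. This is reconstructed purely from the bullet points; the actual 47-line `docker-compose.yml` in the repo may differ in detail:

```yaml
# Sketch reconstructed from the description above, not the repo file.
services:
  gitea-runner:
    image: gitea/act_runner:latest
    env_file: .env
    volumes:
      - ./data:/data
      - /var/run/docker.sock:/var/run/docker.sock
      - ./config.yaml:/config.yaml:ro
    networks:
      - gitea-runner

  docker-dind:
    image: docker:dind
    privileged: true            # required for nested containerization
    environment:
      DOCKER_TLS_CERTDIR: /certs
    volumes:
      - docker-certs:/certs
      - docker-data:/var/lib/docker
    command: dockerd --host=unix:///var/run/docker.sock --host=tcp://0.0.0.0:2376 --tlsverify
    networks:
      - gitea-runner

networks:
  gitea-runner:
    driver: bridge

volumes:
  docker-certs:
  docker-data:
```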
---

## 8-Phase Deployment Process (Overview)

### Phase 1: Gitea Runner Setup (Development Machine) - **⚠️ BLOCKED**
**Status**: cannot start due to the 404 on the admin panel
**Requires**: Gitea reachable and Actions enabled

### Phase 2: Ansible Vault Secrets Setup - **⏳ WAITING**
**Status**: cannot start until Phase 1 is complete
**Tasks**:
- Create the vault password (`.vault_pass`)
- Create `production.vault.yml` with secrets
- Generate encryption keys
- Encrypt the vault file

### Phase 3: Production Server Initial Setup - **⏳ COULD RUN FIRST**
**Status**: should probably run BEFORE Phase 1
**Tasks**:
- SSH to the production server
- Deploy the infrastructure stacks:
  1. Traefik (reverse proxy & SSL)
  2. PostgreSQL (database)
  3. Docker Registry (private registry)
  4. **Gitea (Git server + MySQL + Redis)** ← required for Phase 1!
  5. Monitoring (Portainer + Grafana + Prometheus)

### Phase 4: Application Secrets Deployment - **⏳ WAITING**
**Status**: waiting on Phases 1-3
**Tasks**: deploy secrets to production via Ansible

### Phase 5: Gitea CI/CD Secrets Configuration - **⏳ WAITING**
**Status**: waiting on Phases 1-4
**Tasks**: configure repository secrets in Gitea

### Phase 6: First Deployment Test - **⏳ WAITING**
**Status**: waiting on Phases 1-5
**Tasks**: trigger and test the CI/CD pipeline

### Phase 7: Monitoring & Health Checks - **⏳ WAITING**
**Status**: waiting on Phases 1-6
**Tasks**: configure monitoring tools and set up alerting

### Phase 8: Backup & Rollback Testing - **⏳ WAITING**
**Status**: waiting on Phases 1-7
**Tasks**: test the backup mechanism and rollback
---

## Recommended Next Step

### ⭐ Option A: Run Phase 3 first (recommended)

**Rationale**:
- Fixes the root cause (Gitea not deployed)
- Follows the logical dependency chain
- Allows normal progress through all phases

**Sequence**:

```bash
# 1. SSH to the production server
ssh deploy@94.16.110.151

# 2. Navigate to the stacks
cd ~/deployment/stacks

# 3. Deploy Traefik
cd traefik
docker compose up -d
docker compose logs -f   # wait for "Configuration loaded"

# 4. Deploy PostgreSQL
cd ../postgresql
docker compose up -d
docker compose logs -f   # wait for "database system is ready"

# 5. Deploy Registry
cd ../registry
docker compose up -d
docker compose logs -f   # wait for "listening on [::]:5000"

# 6. Deploy Gitea ← CRITICAL for Phase 1
cd ../gitea
docker compose up -d
docker compose logs -f   # wait for "Listen: http://0.0.0.0:3000"

# 7. Deploy Monitoring
cd ../monitoring
docker compose up -d
docker compose logs -f

# 8. Verify all stacks
docker ps

# 9. Test Gitea accessibility
curl -I https://git.michaelschiemer.de
```

**After success**:
1. Open the Gitea web UI: `https://git.michaelschiemer.de`
2. Walk through the initial setup wizard
3. Create an admin account
4. Enable Actions in the settings
5. **Back to Phase 1**: the admin panel is now reachable
6. Fetch the registration token
7. Complete `.env`
8. Register and start the runner

---

## Technical Details

### Gitea Actions Architecture

**Components**:
- **act_runner**: Gitea's self-hosted runner (based on nektos/act)
- **Docker-in-Docker**: isolated job-execution environment
- **TLS communication**: secure runner ↔ dind via certificates

**Runner Registration**:
1. Generate a token in the Gitea admin panel
2. Add the token to `.env`: `GITEA_RUNNER_REGISTRATION_TOKEN=<token>`
3. Run `./register.sh` (registers the runner with the Gitea instance)
4. Start the services: `docker compose up -d`
5. Verify in the Gitea UI: the runner shows as "Idle" or "Active"

**Runner Labels**:
Define which execution environments are supported:

```bash
GITEA_RUNNER_LABELS=ubuntu-latest:docker://node:16-bullseye,ubuntu-22.04:docker://node:16-bullseye,debian-latest:docker://debian:bullseye
```

Format: `label:docker://image`
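The comma-separated label list can be split mechanically: commas separate entries, and everything before the first colon in an entry is the label. A small bash illustration (this is not act_runner's actual parsing code):

```shell
# Illustration only: split GITEA_RUNNER_LABELS into label/image pairs.
labels='ubuntu-latest:docker://node:16-bullseye,ubuntu-22.04:docker://node:16-bullseye,debian-latest:docker://debian:bullseye'
IFS=',' read -ra entries <<< "$labels"
for entry in "${entries[@]}"; do
  # text before the first ':' is the label, the rest is the image URI
  printf 'label=%s image=%s\n' "${entry%%:*}" "${entry#*:}"
done
```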
---

## File References

### Important Files

| File | Status | Description |
|------|--------|-------------|
| `SETUP-GUIDE.md` | ✅ Present | Complete 8-phase deployment guide (708 lines) |
| `deployment/gitea-runner/.env.example` | ✅ Present | Template for the runner configuration (23 lines) |
| `deployment/gitea-runner/.env` | ✅ Created | Active configuration - **token missing** |
| `deployment/gitea-runner/docker-compose.yml` | ✅ Present | Two-service architecture definition (47 lines) |

### Code Snippet Locations

**Runner configuration** (`.env`):
- Lines 1-23: complete environment-variable definitions
- Line 8: `GITEA_RUNNER_REGISTRATION_TOKEN=` ← **CRITICAL: EMPTY**

**Docker Compose** (`docker-compose.yml`):
- Lines 4-20: `gitea-runner` service definition
- Lines 23-34: `docker-dind` service definition
- Lines 37-40: network configuration
- Lines 43-47: volume definitions

**Setup guide** (SETUP-GUIDE.md):
- Lines 36-108: Phase 1 complete instructions
- Lines 236-329: Phase 3 infrastructure deployment (incl. Gitea)

---

## Support Contacts

**If problems occur**:
- Framework issues: see `docs/claude/troubleshooting.md`
- Gitea documentation: https://docs.gitea.io/
- act_runner documentation: https://docs.gitea.io/en-us/usage/actions/act-runner/

---

**Created**: 2025-10-30
**Last Modified**: 2025-10-30
**Status**: BLOCKED - Awaiting Gitea Deployment (Phase 3)
@@ -1,307 +0,0 @@
# Enhanced Deployment System

**Complete Automated Deployment for Custom PHP Framework**

The deployment system has been significantly enhanced with production-ready automation, security tools, and user-friendly interfaces that eliminate manual configuration steps.

## 🚀 Quick Start

### Option 1: Interactive Setup Wizard (Recommended)
```bash
cd deployment
./setup-wizard.sh
```

The wizard guides you through:
- Environment selection (development/staging/production)
- Domain and SSL configuration
- Server connection setup
- SSH key generation and testing
- Secure credential generation
- Complete configuration validation

### Option 2: One-Command Production Setup
```bash
cd deployment
./setup-production.sh --server 94.16.110.151 --domain michaelschiemer.de --auto-yes
```

### Option 3: Using the Unified CLI
```bash
cd deployment
./deploy-cli.sh wizard               # Interactive setup
./deploy-cli.sh production           # One-command production
./deploy-cli.sh deploy production    # Deploy to production
```

## 📁 Enhanced System Structure

```
deployment/
├── deploy-cli.sh            # 🆕 Unified CLI interface
├── setup-wizard.sh          # 🆕 Interactive setup wizard
├── setup-production.sh      # 🆕 One-command production setup
├── deploy.sh                # ✨ Enhanced deployment orchestrator
├── setup.sh                 # Original setup script
├── lib/                     # 🆕 Library modules
│   ├── config-manager.sh    # Configuration management system
│   └── security-tools.sh    # Security and password tools
├── applications/
│   ├── environments/
│   │   ├── .env.production  # 🔒 Generated configurations
│   │   ├── .env.staging
│   │   └── templates/       # Environment templates
│   └── docker-compose.*.yml
├── infrastructure/
│   └── ...                  # Ansible infrastructure
├── .credentials/            # 🔒 Secure credential storage
├── .security/               # 🔒 Security tools and audit logs
└── .backups/                # Configuration backups
```

## 🎯 Key Enhancements

### 1. **Setup Wizard** - Interactive Configuration Guide
- **8-step guided process** with progress indicators
- **Automatic password generation** with cryptographic security
- **SSH key creation and testing** with server connectivity validation
- **Environment file creation** from templates with smart defaults
- **Real-time validation** and error handling
- **Professional UI** with clear instructions and feedback

### 2. **One-Command Production Setup** - Complete Automation
- **12-step automated process** from setup to deployment
- **Zero-downtime deployment** with health validation
- **Comprehensive security configuration** with fail2ban and firewall
- **SSL certificate automation** with Let's Encrypt
- **Database migration and setup** with rollback capability
- **Production readiness validation** with metrics and monitoring

### 3. **Configuration Management System** - Template-Based Configuration
- **Secure credential generation** with industry-standard entropy
- **Template validation** with required field checking
- **Environment-specific settings** with automatic optimization
- **Configuration backup** with versioned storage
- **Credential rotation** with deployment integration

### 4. **Security Tools** - Enterprise-Grade Security
- **Password generation** with configurable strength and character sets
- **SSH key management** with automated testing and validation
- **SSL certificate handling** for development and production
- **Security scanning** with vulnerability detection
- **File encryption/decryption** with AES-256 encryption
- **Audit logging** with comprehensive security event tracking

### 5. **Enhanced Deploy Script** - Production-Ready Orchestration
- **Environment detection** with automatic configuration suggestions
- **Health check system** with scoring and validation
- **Better error handling** with specific troubleshooting guidance
- **Progress tracking** with detailed status reporting
- **Integration** with all new security and configuration tools

### 6. **Unified CLI Interface** - One Tool for Everything
- **Intuitive command structure** with 25+ deployment operations
- **Context-aware help** with examples and documentation
- **Environment management** with easy switching and validation
- **Docker operations** with simplified container management
- **Database tools** with backup and migration support
- **Maintenance commands** with automated cleanup and health checks

## 🔐 Security Features

### Automated Security Hardening
- **Cryptographically secure passwords** (25-32 characters, configurable)
- **SSH key pairs** with ED25519 or RSA-4096 encryption
- **SSL/TLS certificates** with Let's Encrypt automation
- **Firewall configuration** with fail2ban intrusion prevention
- **File permission enforcement** with 600/700 security model
- **Audit logging** with tamper-evident security event tracking
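A minimal sketch of what cryptographically random password generation can look like, reading from `/dev/urandom` and keeping only alphanumerics. The repo's `security-tools.sh` presumably does more (character-set options, enforced classes); this is only an illustration of the idea:

```shell
# Sketch: draw random bytes, keep alphanumerics, cut to length.
gen_password() {
  local length=${1:-32}
  LC_ALL=C tr -dc 'A-Za-z0-9' < /dev/urandom | head -c "$length"
  echo
}

pw=$(gen_password 32)
echo "generated: $pw"
```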
### Security Tools Available
```bash
./lib/security-tools.sh generate-password 32 mixed
./lib/security-tools.sh generate-ssh production ed25519
./lib/security-tools.sh security-scan /path/to/deployment
./lib/security-tools.sh report production
```

### Credential Management
- **Separated credential storage** in `.credentials/` directory
- **Environment-specific passwords** with automatic rotation capability
- **Backup and restore** with encrypted storage options
- **Template integration** with automatic application to configurations

## 📊 Deployment Health Monitoring

### Pre-Deployment Health Checks
- **Environment configuration validation** (25% weight)
- **Docker daemon connectivity** (25% weight)
- **Network connectivity testing** (25% weight)
- **Project file validation** (25% weight)
- **Overall health scoring** with pass/fail thresholds
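The four equal weights imply a simple additive score out of 100. A toy illustration of the scoring idea (the deploy script's real logic is not shown here, and the check results are made up):

```shell
# Toy scoring: each passing check contributes 25 of 100 points.
env_ok=1; docker_ok=1; net_ok=0; files_ok=1   # example check results
score=0
for passed in "$env_ok" "$docker_ok" "$net_ok" "$files_ok"; do
  score=$((score + passed * 25))
done
echo "health score: $score/100"   # one failing check -> 75/100
```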
### Post-Deployment Validation
- **HTTPS connectivity testing** with certificate validation
- **API endpoint health checks** with response validation
- **Docker container status** with restart policy validation
- **Database connectivity** with migration status verification
- **Performance metrics** with response time monitoring

## 🔧 Configuration Management

### Environment Configuration
```bash
./lib/config-manager.sh generate-credentials production
./lib/config-manager.sh apply-config production michaelschiemer.de kontakt@michaelschiemer.de
./lib/config-manager.sh validate production
./lib/config-manager.sh list
```

### Template System
- **Production-ready templates** with security best practices
- **Environment-specific optimizations** (debug, logging, performance)
- **Automatic substitution** with domain, email, and credential integration
- **Validation system** with required field checking and security analysis

## 🚀 Deployment Workflows

### Development Workflow
```bash
./deploy-cli.sh setup                    # Initial setup
./deploy-cli.sh config development       # Configure development
./deploy-cli.sh up development           # Start containers
./deploy-cli.sh db:migrate development   # Run migrations
./deploy-cli.sh health development       # Health check
```

### Staging Workflow
```bash
./deploy-cli.sh config staging             # Configure staging
./deploy-cli.sh deploy staging --verbose   # Deploy with detailed output
./deploy-cli.sh logs staging               # Monitor deployment
./deploy-cli.sh health staging             # Validate deployment
```

### Production Workflow
```bash
./setup-wizard.sh                             # Interactive production setup
# OR
./setup-production.sh --auto-yes              # Automated production setup
./deploy-cli.sh status production             # Check status
./deploy-cli.sh security-report production    # Security validation
```

## 🔄 Maintenance and Operations

### Regular Maintenance
```bash
./deploy-cli.sh update production      # Update to latest code
./deploy-cli.sh db:backup production   # Create database backup
./deploy-cli.sh security-scan          # Security vulnerability scan
./deploy-cli.sh cleanup                # Clean up old files and containers
```

### Monitoring and Debugging
```bash
./deploy-cli.sh logs production        # Real-time logs
./deploy-cli.sh shell production       # Access container shell
./deploy-cli.sh db:status production   # Database status
./deploy-cli.sh info production        # Environment information
```

### Emergency Operations
```bash
./deploy-cli.sh rollback production                # Rollback deployment
./deploy-cli.sh db:restore production backup.sql   # Restore database
./lib/security-tools.sh rotate production          # Rotate credentials
```

## 🏗️ Infrastructure Integration

### Ansible Integration
- **Automatic inventory updates** with server configuration
- **Infrastructure deployment** with security hardening
- **SSL certificate automation** with Let's Encrypt
- **System monitoring setup** with health check automation

### Docker Integration
- **Multi-stage builds** with production optimization
- **Environment-specific overlays** with resource limits
- **Health check configuration** with automatic restart policies
- **Performance tuning** with OPcache and memory optimization

## 📈 Benefits of Enhanced System

### For Developers
- **Reduced setup time** from hours to minutes
- **Eliminated manual errors** with automated configuration
- **Consistent deployments** across all environments
- **Easy debugging** with comprehensive logging and health checks

### For Operations
- **Production-ready security** with industry best practices
- **Automated monitoring** with health scoring and alerting
- **Easy maintenance** with built-in tools and workflows
- **Audit compliance** with comprehensive logging and reporting

### For Business
- **Faster time to market** with streamlined deployment
- **Reduced deployment risks** with validation and rollback
- **Lower operational costs** with automation and monitoring
- **Better security posture** with enterprise-grade practices

## 🆘 Troubleshooting

### Common Issues and Solutions

**SSH Connection Failed**
```bash
./lib/security-tools.sh test-ssh ~/.ssh/production user@server
ssh-copy-id -i ~/.ssh/production.pub user@server
```

**Configuration Incomplete**
```bash
./deploy-cli.sh validate production
./deploy-cli.sh credentials production
```

**Docker Issues**
```bash
./deploy-cli.sh health development
docker system prune -f
```

**SSL Certificate Problems**
```bash
./lib/security-tools.sh validate-ssl /path/to/cert.pem
```

### Getting Help
```bash
./deploy-cli.sh help                # General help
./deploy-cli.sh help deploy         # Command-specific help
./lib/security-tools.sh help        # Security tools help
./lib/config-manager.sh help        # Configuration help
```

## 🎉 Next Steps

After successful deployment:

1. **Monitor Performance**: Use built-in health checks and metrics
2. **Regular Maintenance**: Schedule automated backups and security scans
3. **Security Updates**: Keep system and dependencies updated
4. **Scale Planning**: Monitor resource usage and plan for growth
5. **Team Training**: Share deployment knowledge with team members

## 📞 Support

- **Documentation**: Check deployment/docs/ directory
- **Logs**: Review deployment/infrastructure/logs/
- **Security**: Check deployment/.security/audit.log
- **Health Checks**: Use ./deploy-cli.sh health <environment>

---

**🎯 The enhanced deployment system transforms manual deployment processes into a professional, automated, and secure workflow that meets enterprise standards while remaining developer-friendly.**
@@ -1,352 +0,0 @@
# Custom PHP Framework Deployment Makefile
# Domain: michaelschiemer.de | Email: kontakt@michaelschiemer.de | PHP: 8.4

# Default environment
ENV ?= staging

# Colors for output
RED = \033[0;31m
GREEN = \033[0;32m
YELLOW = \033[1;33m
BLUE = \033[0;34m
PURPLE = \033[0;35m
CYAN = \033[0;36m
WHITE = \033[1;37m
NC = \033[0m

# Directories
DEPLOYMENT_DIR = $(CURDIR)
PROJECT_ROOT = $(CURDIR)/..
INFRASTRUCTURE_DIR = $(DEPLOYMENT_DIR)/infrastructure
APPLICATIONS_DIR = $(DEPLOYMENT_DIR)/applications

.PHONY: help deploy deploy-dry deploy-infrastructure deploy-application

##@ Deployment Commands

help: ## Display this help message
	@echo "$(WHITE)Custom PHP Framework Deployment System$(NC)"
	@echo "$(CYAN)Domain: michaelschiemer.de | Email: kontakt@michaelschiemer.de | PHP: 8.4$(NC)"
	@echo ""
	@awk 'BEGIN {FS = ":.*##"; printf "\nUsage:\n  make $(CYAN)<target>$(NC) [ENV=<environment>]\n"} /^[a-zA-Z_0-9-]+:.*?##/ { printf "  $(CYAN)%-20s$(NC) %s\n", $$1, $$2 } /^##@/ { printf "\n$(WHITE)%s$(NC)\n", substr($$0, 5) } ' $(MAKEFILE_LIST)
	@echo ""
	@echo "$(WHITE)Examples:$(NC)"
	@echo "  $(CYAN)make deploy ENV=staging$(NC)           # Deploy to staging"
	@echo "  $(CYAN)make deploy-production$(NC)            # Deploy to production"
	@echo "  $(CYAN)make deploy-dry ENV=production$(NC)    # Dry run for production"
	@echo "  $(CYAN)make infrastructure ENV=staging$(NC)   # Deploy only infrastructure"
	@echo "  $(CYAN)make application ENV=production$(NC)   # Deploy only application"
	@echo ""

deploy: ## Deploy full stack to specified environment (default: staging)
	@echo "$(GREEN)🚀 Deploying full stack to $(ENV) environment$(NC)"
	@./deploy.sh $(ENV)

deploy-dry: ## Perform dry run deployment to specified environment
	@echo "$(BLUE)🧪 Performing dry run deployment to $(ENV) environment$(NC)"
	@./deploy.sh $(ENV) --dry-run --verbose

deploy-force: ## Force deploy (skip validations) to specified environment
	@echo "$(YELLOW)⚠️ Force deploying to $(ENV) environment$(NC)"
	@./deploy.sh $(ENV) --force

deploy-quick: ## Quick deploy (skip tests and backup) to specified environment
	@echo "$(YELLOW)⚡ Quick deploying to $(ENV) environment$(NC)"
	@./deploy.sh $(ENV) --skip-tests --skip-backup

##@ Environment-Specific Shortcuts

deploy-development: ## Deploy to development environment
	@echo "$(GREEN)🔧 Deploying to development environment$(NC)"
	@./deploy.sh development --non-interactive

deploy-staging: ## Deploy to staging environment
	@echo "$(GREEN)🧪 Deploying to staging environment$(NC)"
	@./deploy.sh staging

deploy-production: ## Deploy to production environment (with confirmations)
	@echo "$(RED)🌟 Deploying to production environment$(NC)"
	@./deploy.sh production

##@ Partial Deployment Commands

infrastructure: ## Deploy only infrastructure (Ansible) to specified environment
	@echo "$(PURPLE)🏗️ Deploying infrastructure to $(ENV) environment$(NC)"
	@./deploy.sh $(ENV) --infrastructure-only

application: ## Deploy only application (Docker Compose) to specified environment
	@echo "$(CYAN)📦 Deploying application to $(ENV) environment$(NC)"
	@./deploy.sh $(ENV) --application-only

infrastructure-dry: ## Dry run infrastructure deployment
	@echo "$(BLUE)🧪 Dry run infrastructure deployment to $(ENV) environment$(NC)"
	@./deploy.sh $(ENV) --infrastructure-only --dry-run

application-dry: ## Dry run application deployment
	@echo "$(BLUE)🧪 Dry run application deployment to $(ENV) environment$(NC)"
	@./deploy.sh $(ENV) --application-only --dry-run

##@ Setup and Maintenance Commands

setup: ## First-time setup for deployment environment
	@echo "$(GREEN)🔧 Setting up deployment environment$(NC)"
	@./setup.sh

check-prerequisites: ## Check deployment prerequisites
	@echo "$(BLUE)🔍 Checking deployment prerequisites$(NC)"
	@./deploy.sh $(ENV) --dry-run --infrastructure-only --application-only 2>/dev/null || echo "$(YELLOW)Run 'make setup' to install missing dependencies$(NC)"

validate-config: ## Validate configuration files for specified environment
	@echo "$(BLUE)✅ Validating configuration for $(ENV) environment$(NC)"
	@if [ -f "$(APPLICATIONS_DIR)/environments/.env.$(ENV)" ]; then \
|
||||
echo "$(GREEN)✓ Application environment file found$(NC)"; \
|
||||
grep -q "*** REQUIRED" "$(APPLICATIONS_DIR)/environments/.env.$(ENV)" && \
|
||||
echo "$(RED)❌ Found unfilled template values$(NC)" && \
|
||||
grep "*** REQUIRED" "$(APPLICATIONS_DIR)/environments/.env.$(ENV)" || \
|
||||
echo "$(GREEN)✓ No template placeholders found$(NC)"; \
|
||||
else \
|
||||
echo "$(RED)❌ Application environment file not found$(NC)"; \
|
||||
echo "$(YELLOW)Copy from template: cp $(APPLICATIONS_DIR)/environments/.env.$(ENV).template $(APPLICATIONS_DIR)/environments/.env.$(ENV)$(NC)"; \
|
||||
fi
|
||||
@if [ -f "$(INFRASTRUCTURE_DIR)/inventories/$(ENV)/hosts.yml" ]; then \
|
||||
echo "$(GREEN)✓ Ansible inventory found$(NC)"; \
|
||||
else \
|
||||
echo "$(YELLOW)⚠️ Ansible inventory not found (infrastructure deployment will be skipped)$(NC)"; \
|
||||
fi
|
||||
|
||||
##@ Health and Status Commands
|
||||
|
||||
status: ## Show deployment status for specified environment
|
||||
@echo "$(BLUE)📊 Checking deployment status for $(ENV) environment$(NC)"
|
||||
@$(MAKE) validate-config ENV=$(ENV)
|
||||
@if [ -f "$(APPLICATIONS_DIR)/scripts/health-check.sh" ]; then \
|
||||
echo "$(BLUE)Running health checks...$(NC)"; \
|
||||
$(APPLICATIONS_DIR)/scripts/health-check.sh $(ENV) 2>/dev/null || echo "$(YELLOW)Health checks failed or services not running$(NC)"; \
|
||||
else \
|
||||
echo "$(YELLOW)Health check script not found$(NC)"; \
|
||||
fi
|
||||
|
||||
health: ## Run health checks for specified environment
|
||||
@echo "$(GREEN)🏥 Running health checks for $(ENV) environment$(NC)"
|
||||
@if [ -f "$(APPLICATIONS_DIR)/scripts/health-check.sh" ]; then \
|
||||
$(APPLICATIONS_DIR)/scripts/health-check.sh $(ENV); \
|
||||
else \
|
||||
echo "$(RED)❌ Health check script not found$(NC)"; \
|
||||
fi
|
||||
|
||||
##@ Log Management
|
||||
|
||||
logs: ## Show logs for all services (ENV=staging SERVICE=all FOLLOW=no)
|
||||
@./deploy-cli.sh logs $(ENV)
|
||||
|
||||
logs-follow: ## Follow logs for all services in real-time
|
||||
@./deploy-cli.sh logs $(ENV) "" --follow
|
||||
|
||||
logs-php: ## Show PHP service logs
|
||||
@./deploy-cli.sh logs $(ENV) php
|
||||
|
||||
logs-php-follow: ## Follow PHP service logs in real-time
|
||||
@./deploy-cli.sh logs $(ENV) php --follow
|
||||
|
||||
logs-nginx: ## Show Nginx service logs
|
||||
@./deploy-cli.sh logs $(ENV) web
|
||||
|
||||
logs-nginx-follow: ## Follow Nginx service logs in real-time
|
||||
@./deploy-cli.sh logs $(ENV) web --follow
|
||||
|
||||
logs-db: ## Show database service logs
|
||||
@./deploy-cli.sh logs $(ENV) db
|
||||
|
||||
logs-db-follow: ## Follow database service logs in real-time
|
||||
@./deploy-cli.sh logs $(ENV) db --follow
|
||||
|
||||
logs-redis: ## Show Redis service logs
|
||||
@./deploy-cli.sh logs $(ENV) redis
|
||||
|
||||
logs-redis-follow: ## Follow Redis service logs in real-time
|
||||
@./deploy-cli.sh logs $(ENV) redis --follow
|
||||
|
||||
logs-worker: ## Show queue worker service logs
|
||||
@./deploy-cli.sh logs $(ENV) queue-worker
|
||||
|
||||
logs-worker-follow: ## Follow queue worker service logs in real-time
|
||||
@./deploy-cli.sh logs $(ENV) queue-worker --follow
|
||||
|
||||
# Production shortcuts
|
||||
logs-prod: ## Show production logs (all services)
|
||||
@./deploy-cli.sh logs production
|
||||
|
||||
logs-prod-php: ## Show production PHP logs
|
||||
@./deploy-cli.sh logs production php
|
||||
|
||||
logs-prod-nginx: ## Show production Nginx logs
|
||||
@./deploy-cli.sh logs production web
|
||||
|
||||
logs-prod-follow: ## Follow production logs (PHP service)
|
||||
@./deploy-cli.sh logs production php --follow
|
||||
|
||||
##@ Development and Testing Commands
|
||||
|
||||
test: ## Run deployment tests
|
||||
@echo "$(GREEN)🧪 Running deployment tests$(NC)"
|
||||
@cd $(PROJECT_ROOT) && [ -f vendor/bin/pest ] && vendor/bin/pest || echo "$(YELLOW)No test framework found$(NC)"
|
||||
|
||||
test-infrastructure: ## Test Ansible playbook syntax
|
||||
@echo "$(BLUE)🔍 Testing Ansible playbook syntax$(NC)"
|
||||
@if [ -f "$(INFRASTRUCTURE_DIR)/inventories/$(ENV)/hosts.yml" ]; then \
|
||||
cd $(INFRASTRUCTURE_DIR) && ansible-playbook \
|
||||
-i inventories/$(ENV)/hosts.yml \
|
||||
site.yml \
|
||||
--syntax-check; \
|
||||
else \
|
||||
echo "$(RED)❌ Ansible inventory not found for $(ENV)$(NC)"; \
|
||||
fi
|
||||
|
||||
build-assets: ## Build frontend assets
|
||||
@echo "$(CYAN)🎨 Building frontend assets$(NC)"
|
||||
@cd $(PROJECT_ROOT) && [ -f package.json ] && npm ci && npm run build || echo "$(YELLOW)No package.json found$(NC)"
|
||||
|
||||
##@ Configuration Management Commands
|
||||
|
||||
init-config: ## Initialize configuration files from templates
|
||||
@echo "$(GREEN)📝 Initializing configuration files$(NC)"
|
||||
@for env in development staging production; do \
|
||||
if [ ! -f "$(APPLICATIONS_DIR)/environments/.env.$$env" ] && [ -f "$(APPLICATIONS_DIR)/environments/.env.$$env.template" ]; then \
|
||||
echo "$(YELLOW)Creating .env.$$env from template$(NC)"; \
|
||||
cp "$(APPLICATIONS_DIR)/environments/.env.$$env.template" "$(APPLICATIONS_DIR)/environments/.env.$$env"; \
|
||||
else \
|
||||
echo "$(BLUE).env.$$env already exists or template not found$(NC)"; \
|
||||
fi; \
|
||||
done
|
||||
|
||||
edit-config: ## Edit configuration file for specified environment
|
||||
@echo "$(CYAN)📝 Editing configuration for $(ENV) environment$(NC)"
|
||||
@if [ -f "$(APPLICATIONS_DIR)/environments/.env.$(ENV)" ]; then \
|
||||
${EDITOR:-nano} "$(APPLICATIONS_DIR)/environments/.env.$(ENV)"; \
|
||||
else \
|
||||
echo "$(RED)❌ Configuration file not found: .env.$(ENV)$(NC)"; \
|
||||
echo "$(YELLOW)Run 'make init-config' first$(NC)"; \
|
||||
fi
|
||||
|
||||
show-config: ## Display configuration for specified environment (safe values only)
|
||||
@echo "$(BLUE)📋 Configuration for $(ENV) environment$(NC)"
|
||||
@if [ -f "$(APPLICATIONS_DIR)/environments/.env.$(ENV)" ]; then \
|
||||
echo "$(CYAN)Safe configuration values:$(NC)"; \
|
||||
grep -E '^(APP_|DB_HOST|DB_PORT|DB_NAME|DOMAIN)' "$(APPLICATIONS_DIR)/environments/.env.$(ENV)" | grep -v -E '(PASSWORD|SECRET|KEY)' || true; \
|
||||
echo "$(YELLOW)Sensitive values hidden for security$(NC)"; \
|
||||
else \
|
||||
echo "$(RED)❌ Configuration file not found$(NC)"; \
|
||||
fi
|
||||
|
||||
##@ Backup and Recovery Commands
|
||||
|
||||
backup: ## Create backup before deployment
|
||||
@echo "$(GREEN)💾 Creating backup for $(ENV) environment$(NC)"
|
||||
@mkdir -p $(PROJECT_ROOT)/storage/backups
|
||||
@cd $(PROJECT_ROOT) && docker-compose \
|
||||
-f docker-compose.yml \
|
||||
-f $(APPLICATIONS_DIR)/docker-compose.$(ENV).yml \
|
||||
--env-file $(APPLICATIONS_DIR)/environments/.env.$(ENV) \
|
||||
exec -T db sh -c 'mariadb-dump -u root -p$$DB_ROOT_PASSWORD --all-databases' \
|
||||
> $(PROJECT_ROOT)/storage/backups/backup_$(ENV)_$(shell date +%Y%m%d_%H%M%S).sql
|
||||
@echo "$(GREEN)✓ Backup created$(NC)"
|
||||
|
||||
restore: ## Restore from latest backup (use with caution!)
|
||||
@echo "$(RED)⚠️ RESTORING FROM BACKUP - THIS WILL OVERWRITE CURRENT DATA$(NC)"
|
||||
@read -p "Are you sure? Type 'RESTORE' to confirm: " confirm && [ "$$confirm" = "RESTORE" ] || (echo "Cancelled" && exit 1)
|
||||
@latest_backup=$$(ls -t $(PROJECT_ROOT)/storage/backups/backup_$(ENV)_*.sql 2>/dev/null | head -n1); \
|
||||
if [ -n "$$latest_backup" ]; then \
|
||||
echo "$(YELLOW)Restoring from: $$latest_backup$(NC)"; \
|
||||
cd $(PROJECT_ROOT) && docker-compose \
|
||||
-f docker-compose.yml \
|
||||
-f $(APPLICATIONS_DIR)/docker-compose.$(ENV).yml \
|
||||
--env-file $(APPLICATIONS_DIR)/environments/.env.$(ENV) \
|
||||
exec -T db sh -c 'mysql -u root -p$$DB_ROOT_PASSWORD' < "$$latest_backup"; \
|
||||
echo "$(GREEN)✓ Database restored$(NC)"; \
|
||||
else \
|
||||
echo "$(RED)❌ No backup files found for $(ENV)$(NC)"; \
|
||||
fi
|
||||
|
||||
##@ Utility Commands
|
||||
|
||||
clean: ## Clean up deployment artifacts and logs
|
||||
@echo "$(YELLOW)🧹 Cleaning deployment artifacts$(NC)"
|
||||
@rm -rf $(INFRASTRUCTURE_DIR)/logs/*
|
||||
@docker system prune -f
|
||||
@echo "$(GREEN)✓ Cleanup completed$(NC)"
|
||||
|
||||
version: ## Show version information
|
||||
@echo "$(WHITE)Custom PHP Framework Deployment System$(NC)"
|
||||
@echo "Version: 1.0.0"
|
||||
@echo "Domain: michaelschiemer.de"
|
||||
@echo "Email: kontakt@michaelschiemer.de"
|
||||
@echo "PHP Version: 8.4"
|
||||
|
||||
info: ## Show deployment information and available environments
|
||||
@echo "$(WHITE)📋 Deployment Information$(NC)"
|
||||
@echo ""
|
||||
@echo "$(CYAN)Project Details:$(NC)"
|
||||
@echo " Domain: michaelschiemer.de"
|
||||
@echo " Email: kontakt@michaelschiemer.de"
|
||||
@echo " PHP Version: 8.4"
|
||||
@echo " Framework: Custom PHP Framework"
|
||||
@echo ""
|
||||
@echo "$(CYAN)Available Environments:$(NC)"
|
||||
@for env in development staging production; do \
|
||||
echo -n " $$env: "; \
|
||||
if [ -f "$(APPLICATIONS_DIR)/environments/.env.$$env" ]; then \
|
||||
echo "$(GREEN)✓ Configured$(NC)"; \
|
||||
else \
|
||||
echo "$(RED)❌ Not configured$(NC)"; \
|
||||
fi; \
|
||||
done
|
||||
@echo ""
|
||||
@echo "$(CYAN)Deployment Modes:$(NC)"
|
||||
@echo " • Full Stack: Infrastructure + Application"
|
||||
@echo " • Infrastructure Only: Ansible deployment"
|
||||
@echo " • Application Only: Docker Compose deployment"
|
||||
@echo ""
|
||||
@echo "$(CYAN)Quick Commands:$(NC)"
|
||||
@echo " make deploy-staging # Deploy to staging"
|
||||
@echo " make deploy-production # Deploy to production"
|
||||
@echo " make deploy-dry ENV=prod # Dry run for production"
|
||||
@echo " make status ENV=staging # Check staging status"
|
||||
@echo ""
|
||||
|
||||
##@ Emergency Commands
|
||||
|
||||
emergency-stop: ## Emergency stop all services for specified environment
|
||||
@echo "$(RED)🚨 EMERGENCY STOP: Stopping all services for $(ENV) environment$(NC)"
|
||||
@cd $(PROJECT_ROOT) && docker-compose \
|
||||
-f docker-compose.yml \
|
||||
-f $(APPLICATIONS_DIR)/docker-compose.$(ENV).yml \
|
||||
--env-file $(APPLICATIONS_DIR)/environments/.env.$(ENV) \
|
||||
down
|
||||
@echo "$(YELLOW)✓ All services stopped$(NC)"
|
||||
|
||||
emergency-restart: ## Emergency restart all services for specified environment
|
||||
@echo "$(YELLOW)🔄 EMERGENCY RESTART: Restarting all services for $(ENV) environment$(NC)"
|
||||
@$(MAKE) emergency-stop ENV=$(ENV)
|
||||
@sleep 5
|
||||
@cd $(PROJECT_ROOT) && docker-compose \
|
||||
-f docker-compose.yml \
|
||||
-f $(APPLICATIONS_DIR)/docker-compose.$(ENV).yml \
|
||||
--env-file $(APPLICATIONS_DIR)/environments/.env.$(ENV) \
|
||||
up -d
|
||||
@echo "$(GREEN)✓ All services restarted$(NC)"
|
||||
|
||||
rollback: ## Rollback to previous deployment (use with extreme caution!)
|
||||
@echo "$(RED)⚠️ ROLLBACK: This will attempt to restore the previous deployment$(NC)"
|
||||
@echo "$(YELLOW)This is a destructive operation that should only be used in emergencies$(NC)"
|
||||
@read -p "Are you sure? Type 'ROLLBACK' to confirm: " confirm && [ "$$confirm" = "ROLLBACK" ] || (echo "Cancelled" && exit 1)
|
||||
@echo "$(YELLOW)Performing emergency rollback...$(NC)"
|
||||
@$(MAKE) backup ENV=$(ENV)
|
||||
@$(MAKE) restore ENV=$(ENV)
|
||||
@echo "$(GREEN)✓ Rollback completed$(NC)"
|
||||
@echo "$(YELLOW)Please verify system functionality immediately$(NC)"
|
||||
|
||||
# Include environment-specific makefiles if they exist
|
||||
-include $(DEPLOYMENT_DIR)/environments/$(ENV).mk
|
||||
|
||||
# Default target
|
||||
.DEFAULT_GOAL := help
|
||||
# Production Deployment Setup

Guide for deploying the Custom PHP Framework to production on a Netcup VPS.

## Server Details

- **IP Address**: 94.16.110.151
- **Domain**: michaelschiemer.de
- **Email**: kontakt@michaelschiemer.de
- **SSH Key**: /home/michael/.ssh/production
- **OS**: Fresh Ubuntu 22.04 or Debian 12

## Initial Server Setup

### 1. First-time Server Configuration

Run the initial server setup (only once, on a fresh server):

```bash
cd deployment/infrastructure

# Run initial setup as root user
ansible-playbook -i inventories/production/hosts.yml setup-fresh-server.yml
```

This will:

- Create the `deploy` user with sudo privileges
- Configure SSH key authentication
- Harden SSH security
- Set up the firewall (UFW)
- Configure fail2ban
- Install essential packages
- Create the directory structure

### 2. Update Inventory Configuration

After the initial setup, update `inventories/production/hosts.yml`:

```yaml
# Change from:
ansible_user: root
fresh_server_setup: true

# To:
ansible_user: deploy
fresh_server_setup: false
```

### 3. Full Infrastructure Deployment

Deploy the complete infrastructure:

```bash
# Deploy infrastructure only
ansible-playbook -i inventories/production/hosts.yml site.yml

# Or use the orchestration script
./deploy.sh production --infrastructure-only
```

## Environment Configuration

### 1. Configure Production Environment

Edit the production environment file:

```bash
nano applications/environments/.env.production
```

Update these required values:

```env
# Database passwords (generate strong passwords)
DB_PASSWORD=*** SET_STRONG_PASSWORD ***
DB_ROOT_PASSWORD=*** SET_STRONG_ROOT_PASSWORD ***

# Redis password
REDIS_PASSWORD=*** SET_STRONG_PASSWORD ***

# Application security key (generate: openssl rand -base64 32)
APP_KEY=*** GENERATE_KEY ***

# Mail configuration (configure with your SMTP provider)
MAIL_HOST=*** YOUR_SMTP_HOST ***
MAIL_USERNAME=*** YOUR_SMTP_USERNAME ***
MAIL_PASSWORD=*** YOUR_SMTP_PASSWORD ***

# External API keys
SHOPIFY_WEBHOOK_SECRET=*** YOUR_WEBHOOK_SECRET ***
RAPIDMAIL_USERNAME=*** IF_USING_RAPIDMAIL ***
RAPIDMAIL_PASSWORD=*** IF_USING_RAPIDMAIL ***

# Monitoring
GRAFANA_ADMIN_PASSWORD=*** SET_STRONG_PASSWORD ***
```

### 2. Generate Required Keys

```bash
# Generate application key
openssl rand -base64 32

# Generate secure passwords
openssl rand -base64 24
```
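The generated values can also be written into the environment file non-interactively. The sketch below is a minimal illustration, not part of the deployment tooling: it uses a temporary file as a stand-in for `applications/environments/.env.production` and a placeholder key value instead of real `openssl` output.

```shell
# Sketch: replace the APP_KEY placeholder in an env file in place.
# On the real system, env_file would be applications/environments/.env.production.
env_file=$(mktemp)
printf 'APP_KEY=*** GENERATE_KEY ***\n' > "$env_file"

# Stand-in for: new_key=$(openssl rand -base64 32)
new_key="example-key-not-for-production"

# '|' as the sed delimiter, because base64 output may contain '/'
sed -i "s|^APP_KEY=.*|APP_KEY=${new_key}|" "$env_file"
grep '^APP_KEY=' "$env_file"
```

The same pattern works for the password placeholders; just keep the file permissions restrictive afterwards.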
## Deployment Process

### Full Deployment

Deploy both infrastructure and application:

```bash
./deploy.sh production
```

### Infrastructure Only

Deploy only the infrastructure (server setup, Nginx, Docker, etc.):

```bash
./deploy.sh production --infrastructure-only
```

### Application Only

Deploy only the application code:

```bash
./deploy.sh production --application-only
```

### Dry Run

Test the deployment without making changes:

```bash
./deploy.sh production --dry-run
```

## Security Considerations

### SSH Access

- Root login disabled after initial setup
- Only the `deploy` user has access
- SSH key authentication required
- Password authentication disabled

### Firewall Rules

- Only ports 22 (SSH), 80 (HTTP), and 443 (HTTPS) open
- UFW configured with default deny
- Fail2ban protecting SSH

### SSL/TLS

- Let's Encrypt SSL certificates
- HTTPS enforced
- Modern TLS configuration (TLS 1.2/1.3)
- HSTS headers

## Post-Deployment

### 1. Verify Deployment

Check that services are running:

```bash
# SSH into the server
ssh deploy@94.16.110.151

# Check Docker containers
docker ps

# Check Nginx
sudo systemctl status nginx

# Check firewall
sudo ufw status

# Check fail2ban
sudo fail2ban-client status
```

### 2. Test Application

- Visit https://michaelschiemer.de
- Check the health endpoint: https://michaelschiemer.de/health.php
- Verify the SSL certificate

### 3. DNS Configuration

Make sure your DNS points to the server:

```bash
# Check DNS resolution
dig michaelschiemer.de
nslookup michaelschiemer.de
```
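A scripted version of this check can gate a deployment. This is a sketch, not part of the deploy scripts: the expected IP comes from the server details above, and the commented-out `dig` call is the assumed way to feed it a live answer.

```shell
# Sketch: verify that a resolved address matches the expected production IP.
expected_ip="94.16.110.151"

check_dns() {
  resolved="$1"
  if [ "$resolved" = "$expected_ip" ]; then
    echo "DNS OK ($resolved)"
  else
    echo "DNS MISMATCH: got '$resolved', expected $expected_ip" >&2
    return 1
  fi
}

# Real usage would be: check_dns "$(dig +short michaelschiemer.de | head -n1)"
check_dns "94.16.110.151"
```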
## Monitoring and Maintenance

### Log Locations

- Application logs: `/var/log/custom-php-framework/`
- Nginx logs: `/var/log/nginx/`
- Docker logs: `docker logs <container_name>`

### Health Checks

- Health endpoint: `/health.php`
- Prometheus metrics: `:9090/metrics` (if enabled)

### Backups

- Database backups run daily at 2 AM
- Backups retained for 30 days
- Location: `/var/www/backups/`
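The 30-day retention described above is typically a `find`-based cleanup. The sketch below illustrates the idea against a temporary directory standing in for `/var/www/backups/`; the actual cron job on the server may differ.

```shell
# Sketch: delete .sql backups older than 30 days, keep newer ones.
backup_dir=$(mktemp -d)                       # stand-in for /var/www/backups/
touch -d '40 days ago' "$backup_dir/backup_production_old.sql"
touch "$backup_dir/backup_production_new.sql"

# The retention rule: anything matching the backup pattern and older than 30 days goes.
find "$backup_dir" -name 'backup_*.sql' -type f -mtime +30 -delete

ls "$backup_dir"
```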
## Troubleshooting

### Common Issues

1. **Permission denied**: Check SSH key permissions
2. **Connection refused**: Verify firewall rules
3. **SSL certificate issues**: Check Let's Encrypt logs
4. **Docker issues**: Check the Docker service status

### Debug Mode

Run the deployment with verbose output:

```bash
./deploy.sh production --verbose
```

### Manual Commands

```bash
# SSH into server
ssh -i /home/michael/.ssh/production deploy@94.16.110.151

# Check system status
sudo systemctl status nginx docker fail2ban

# View Docker containers
docker ps -a

# Check logs
sudo tail -f /var/log/nginx/error.log
docker logs php-container
```

## Security Updates

### Regular Maintenance

1. Update system packages monthly
2. Review fail2ban logs for suspicious activity
3. Monitor SSL certificate expiration
4. Check for security updates

### Update Commands

```bash
# Update system packages
sudo apt update && sudo apt upgrade -y

# Update Docker containers
cd /var/www/html
docker-compose pull
docker-compose up -d

# Renew SSL certificates (automatic with certbot)
sudo certbot renew
```

## Recovery Procedures

### Rollback Deployment

If issues occur:

```bash
# Stop the application
docker-compose down

# Restore from backup
sudo rsync -av /var/www/backups/latest/ /var/www/html/

# Restart the application
docker-compose up -d
```

### Emergency Access

If SSH key issues occur:

1. Access the server via the Netcup VPS console
2. Re-enable password authentication temporarily
3. Fix the SSH key configuration
4. Disable password authentication again

## Support and Documentation

- Framework documentation: `/docs/`
- Deployment logs: check the Ansible output
- System logs: `journalctl -xe`
- Application logs: Docker container logs

For issues, check the troubleshooting guide in `deployment/docs/TROUBLESHOOTING.md`.
# Production-Ready Deployment Infrastructure

This directory contains production-ready deployment infrastructure for the Custom PHP Framework, using containerized deployment with Ansible automation.

## 🚀 Key Features

- **Container-Based Deployments**: Pre-built images, no builds on production servers
- **Idempotent Operations**: Repeatable deployments with consistent results
- **Zero-Downtime Deployments**: Smart container recreation with health checks
- **Rollback Support**: Quick rollback to previous versions with tag management
- **Security Hardened**: No secrets in the repo, vault-encrypted sensitive data
- **Optional CDN Integration**: Flag-based CDN configuration updates
- **Comprehensive Health Checks**: Container and HTTP health validation
- **Backup Management**: Configurable backup creation and retention

## 📁 Directory Structure

```
deployment/
├── deploy-production.sh               # Production deployment script
├── rollback-production.sh             # Production rollback script
├── infrastructure/                    # Ansible automation
│   ├── ansible.cfg                    # Production-hardened Ansible config
│   ├── inventories/                   # Environment-specific inventories
│   │   └── production/
│   │       └── hosts.yml              # Production servers and configuration
│   ├── group_vars/                    # Shared variables
│   │   └── all/
│   │       ├── main.yml               # Global configuration
│   │       └── vault.yml              # Encrypted secrets
│   ├── templates/                     # Environment file templates
│   │   ├── production.env.template
│   │   └── staging.env.template
│   └── playbooks/                     # Ansible automation playbooks
│       ├── deploy-application.yml     # Main deployment playbook
│       ├── rollback.yml               # Rollback playbook
│       └── update-cdn.yml             # Optional CDN update
├── applications/                      # Docker Compose configurations
│   ├── docker-compose.yml             # Base compose file
│   ├── docker-compose.production.yml  # Production overlay
│   └── environments/                  # Environment templates (for reference)
└── .gitignore                         # Excludes sensitive files
```

## 🛠 Prerequisites

### Required Tools

- **Ansible 2.9+** with the `community.docker` collection
- **Docker** on the target servers
- **SSH access** to production servers with key-based authentication
- **Vault password file** for encrypted secrets

### Infrastructure Requirements

- Pre-built container images in a registry
- Production server: `94.16.110.151` with a `deploy` user
- Domain: `michaelschiemer.de` with SSL certificates
- SSH key: `~/.ssh/deploy_key`

## 🔧 Configuration

### 1. Vault Password File

Create the vault password file (not in the repo):

```bash
# Create vault password file
echo "your_vault_password" > ~/.ansible_vault_pass
chmod 600 ~/.ansible_vault_pass

# Set environment variable
export ANSIBLE_VAULT_PASSWORD_FILE=~/.ansible_vault_pass
```

### 2. SSH Key Setup

Ensure your SSH key is properly configured:

```bash
# Copy your SSH key to the expected location
cp ~/.ssh/your_production_key ~/.ssh/deploy_key
chmod 600 ~/.ssh/deploy_key

# Test the connection
ssh -i ~/.ssh/deploy_key deploy@94.16.110.151
```

### 3. Ansible Collections

Install the required Ansible collections:

```bash
ansible-galaxy collection install community.docker
```

## 🚀 Deployment

### Production Deployment

Deploy a specific version to production:

```bash
# Basic deployment
./deploy-production.sh 1.2.3

# Deployment with CDN update
./deploy-production.sh 1.2.3 --cdn-update

# Deployment without backup
./deploy-production.sh 1.2.3 --no-backup

# Custom backup retention
./deploy-production.sh 1.2.3 --retention-days 60

# Using environment variables
IMAGE_TAG=1.2.3 CDN_UPDATE=true ./deploy-production.sh
```

### Rollback

Roll back to a previous version:

```bash
# Rollback to previous version
./rollback-production.sh 1.2.2

# Force rollback without confirmation
./rollback-production.sh 1.2.2 --force

# Using environment variables
ROLLBACK_TAG=1.2.2 ./rollback-production.sh
```

### Manual Ansible Commands

Run specific playbooks manually:

```bash
cd infrastructure

# Deploy application
ansible-playbook -i inventories/production/hosts.yml playbooks/deploy-application.yml \
  -e IMAGE_TAG=1.2.3 -e DOMAIN_NAME=michaelschiemer.de

# Rollback application
ansible-playbook -i inventories/production/hosts.yml playbooks/rollback.yml \
  -e ROLLBACK_TAG=1.2.2

# Update CDN only
ansible-playbook -i inventories/production/hosts.yml playbooks/update-cdn.yml \
  -e CDN_UPDATE=true
```

## 📊 Environment Variables

### Deployment Variables

```bash
IMAGE_TAG=1.2.3                 # Container image tag (required)
DOMAIN_NAME=michaelschiemer.de  # Target domain
CDN_UPDATE=true                 # Enable CDN configuration update
BACKUP_ENABLED=true             # Enable pre-deployment backup
BACKUP_RETENTION_DAYS=30        # Backup retention period
```

### Rollback Variables

```bash
ROLLBACK_TAG=1.2.2              # Target rollback version (required)
DOMAIN_NAME=michaelschiemer.de  # Target domain
```

### CI/CD Variables

```bash
ANSIBLE_VAULT_PASSWORD_FILE=~/.ansible_vault_pass  # Vault password file
DB_PASSWORD=secretpassword                         # Database password (encrypted in vault)
REDIS_PASSWORD=redispass                           # Redis password (encrypted in vault)
MAIL_PASSWORD=mailpass                             # Mail service password (encrypted in vault)
```

## 🔒 Security

### Secrets Management

- All secrets stored in `group_vars/all/vault.yml` (encrypted)
- No secrets in the repository or playbooks
- Vault password via file or environment variable
- SSH key-based authentication only

### Access Control

- Deploy user with limited sudo privileges
- Container security policies enforced
- Firewall configured via the base-security role
- SSH hardening enabled

### Environment File Security

- Environment files rendered from templates at deploy time
- Sensitive values injected from the vault
- File permissions: 0600 (owner read/write only)
- Backups of previous versions created
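The 0600-permission step can be reproduced outside Ansible with a plain `install`. This is an illustrative sketch only (paths and the rendered content are stand-ins, not the playbook's actual template output):

```shell
# Sketch: render an env file and install it with owner-only permissions,
# mirroring the 0600 rule described above.
tmpdir=$(mktemp -d)
printf 'DB_PASSWORD=value-from-vault\n' > "$tmpdir/rendered.env"

# install copies the file and sets the mode atomically in one step
install -m 0600 "$tmpdir/rendered.env" "$tmpdir/.env.production"

stat -c '%a' "$tmpdir/.env.production"
```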
## 📈 Monitoring & Health Checks

### Container Health Checks

- Individual container health monitoring
- HTTP health endpoint validation
- Configurable retry count and intervals
- Automatic failure detection
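The retry/interval behaviour above can be sketched as a small probe function. The endpoint, retry count, and interval here are illustrative defaults, not the values the playbooks actually use:

```shell
# Sketch: poll an HTTP health endpoint with configurable retries and interval.
probe_health() {
  url="$1"; retries="${2:-5}"; interval="${3:-2}"
  i=1
  while [ "$i" -le "$retries" ]; do
    if curl -fsS --max-time 5 "$url" >/dev/null 2>&1; then
      echo "healthy after $i attempt(s)"
      return 0
    fi
    sleep "$interval"
    i=$((i + 1))
  done
  echo "unhealthy after $retries attempt(s)" >&2
  return 1
}

# Real usage: probe_health "https://michaelschiemer.de/health" 10 3
# Demo against a port that refuses connections (expected to fail fast):
probe_health "http://127.0.0.1:9/health" 1 0 || true
```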
|
||||
|
||||
### Application Monitoring
|
||||
- Health endpoint: `https://michaelschiemer.de/health`
|
||||
- Container status monitoring
|
||||
- Log aggregation via Docker logging drivers
|
||||
- Optional Prometheus metrics
|
||||
|
||||
### Backup Monitoring
|
||||
- Configurable backup retention
|
||||
- Automatic cleanup of old backups
|
||||
- Backup success/failure tracking
|
||||
- Emergency recovery tag storage
|
||||
|
||||
## 🔄 CI/CD Integration
|
||||
|
||||
### GitHub Actions Example
|
||||
|
||||
```yaml
|
||||
name: Deploy to Production
|
||||
|
||||
on:
|
||||
push:
|
||||
tags: ['v*']
|
||||
|
||||
jobs:
|
||||
deploy:
|
||||
runs-on: ubuntu-latest
|
||||
steps:
|
||||
- uses: actions/checkout@v3
|
||||
|
||||
- name: Setup Ansible
|
||||
run: |
|
||||
pip install ansible
|
||||
ansible-galaxy collection install community.docker
|
||||
|
||||
- name: Deploy to Production
|
||||
run: |
|
||||
echo "${{ secrets.ANSIBLE_VAULT_PASSWORD }}" > ~/.ansible_vault_pass
|
||||
chmod 600 ~/.ansible_vault_pass
|
||||
cd deployment
|
||||
./deploy-production.sh ${GITHUB_REF#refs/tags/v} --no-backup
|
||||
env:
|
||||
ANSIBLE_VAULT_PASSWORD_FILE: ~/.ansible_vault_pass
|
||||
ANSIBLE_HOST_KEY_CHECKING: false
|
||||
```
|
||||
|
||||
### Jenkins Pipeline Example
|
||||
|
||||
```groovy
|
||||
pipeline {
|
||||
agent any
|
||||
|
||||
environment {
|
||||
ANSIBLE_VAULT_PASSWORD_FILE = credentials('ansible-vault-password')
|
||||
IMAGE_TAG = "${params.VERSION}"
|
||||
}
|
||||
|
||||
parameters {
|
||||
string(name: 'VERSION', defaultValue: '', description: 'Version to deploy')
|
||||
booleanParam(name: 'CDN_UPDATE', defaultValue: false, description: 'Update CDN')
|
||||
}
|
||||
|
||||
stages {
|
||||
stage('Deploy') {
|
||||
steps {
|
||||
dir('deployment') {
|
||||
sh """
|
||||
./deploy-production.sh ${IMAGE_TAG} \
|
||||
${params.CDN_UPDATE ? '--cdn-update' : ''}
|
||||
"""
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
## 🚨 Troubleshooting

### Common Issues

**1. Vault Password Issues**
```bash
# Error: vault password file not found
export ANSIBLE_VAULT_PASSWORD_FILE=~/.ansible_vault_pass
```

**2. SSH Connection Issues**
```bash
# Test SSH connection
ssh -i ~/.ssh/deploy_key deploy@94.16.110.151

# Update SSH key path in inventory if needed
```

**3. Container Health Check Failures**
```bash
# Check container logs
docker logs <container_name>

# Check health endpoint manually
curl -k https://michaelschiemer.de/health
```

**4. Permission Issues**
```bash
# Fix deployment script permissions
chmod +x deploy-production.sh rollback-production.sh
```

### Emergency Procedures

**1. Complete Rollback**
```bash
# Get last successful release
cat /var/www/html/.last_successful_release

# Rollback to last known good version
./rollback-production.sh <last_good_version> --force
```

**2. Manual Container Restart**
```bash
# SSH to server and restart containers
ssh deploy@94.16.110.151
cd /var/www/html
docker-compose restart
```

**3. Emergency Recovery**
```bash
# Use emergency recovery tag if available
cat /var/www/html/.emergency_recovery_tag
./rollback-production.sh <emergency_recovery_tag> --force
```

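The complete-rollback steps can be combined into one guarded command. A minimal sketch; the `rollback_to_last_good` helper is illustrative and not part of the repo, only the marker file and rollback script come from this guide:

```shell
# Hypothetical wrapper around the marker-file + rollback-script steps above
rollback_to_last_good() {
  local marker="${1:-/var/www/html/.last_successful_release}"
  local version
  version=$(cat "$marker") || return 1
  [ -n "$version" ] || return 1
  ./rollback-production.sh "$version" --force
}

# Usage (on the server, from the deployment directory):
# rollback_to_last_good
```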
## 📝 Best Practices

### Deployment Best Practices
1. Always use specific version tags (not `latest`)
2. Test deployments on staging first
3. Monitor application health after deployment
4. Keep backups enabled for production
5. Use CDN updates only when static assets change

### Security Best Practices
1. Rotate vault passwords regularly
2. Keep SSH keys secure and rotated
3. Apply the principle of least privilege
4. Monitor access logs
5. Keep Ansible and collections updated

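Vault password rotation (item 1 above) can be scripted with `ansible-vault rekey`. A hedged sketch using the secrets layout from this repo; the helper itself is illustrative:

```shell
# Hypothetical rotation helper: re-key the vault file under a fresh password
rotate_vault_password() {
  local secrets_dir="$1"
  (
    cd "$secrets_dir" || exit 1
    # Generate a replacement vault password
    openssl rand -base64 32 > .vault_pass_new
    chmod 600 .vault_pass_new
    # Re-encrypt the secrets, then promote the new password file only on success
    ansible-vault rekey production.vault.yml \
      --vault-password-file .vault_pass \
      --new-vault-password-file .vault_pass_new \
    && mv .vault_pass_new .vault_pass
  )
}

# Usage:
# rotate_vault_password deployment/ansible/secrets
```

Remember to update the copy of the password stored in your password manager afterwards.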
### Operational Best Practices
1. Document all deployment changes
2. Monitor resource usage trends
3. Plan for backup storage requirements
4. Test rollback procedures regularly
5. Keep deployment logs for auditing

## 📞 Support

For issues with deployment infrastructure:

1. Check troubleshooting section above
2. Review Ansible logs in `infrastructure/logs/`
3. Verify container health and logs
4. Test connectivity and permissions
5. Contact system administrator if needed

---

**Note**: This infrastructure follows production-ready practices with security, reliability, and operability in mind. Always test changes in staging before applying to production.

@@ -1,218 +1,208 @@
# Pragmatic Production Deployment Setup

## Architecture Overview

This deployment setup uses separate Docker Compose stacks for better maintainability and clear separation of concerns.

### Infrastructure Components

```
Production Server (94.16.110.151)
├── Stack 1: Traefik (Reverse Proxy & SSL)
├── Stack 2: Gitea (Git Server + MySQL + Redis)
├── Stack 3: Docker Registry (Private Registry)
├── Stack 4: Application (PHP + Nginx + Redis + Queue Workers)
├── Stack 5: PostgreSQL (Database)
└── Stack 6: Monitoring (Portainer + Grafana + Prometheus)

Development Machine
└── Gitea Actions Runner (local, Docker-in-Docker)
```

## Deployment Flow

```
Developer → git push
    ↓
Gitea (Production)
    ↓
Gitea Actions (Dev Machine)
    ↓
Build Docker Image
    ↓
Push to Private Registry
    ↓
SSH/Ansible → Production Server
    ↓
docker compose pull
    ↓
docker compose up -d
```

## Directory Structure

```
deployment/
├── stacks/                  # Docker Compose stacks
│   ├── traefik/             # Reverse proxy with SSL
│   ├── gitea/               # Git server
│   ├── registry/            # Private Docker registry
│   ├── application/         # Main PHP application
│   ├── postgres/            # Database
│   └── monitoring/          # Portainer + Grafana + Prometheus
├── ansible/                 # Automation playbooks
│   ├── playbooks/           # Deployment automation
│   ├── inventory/           # Server inventory
│   └── secrets/             # Ansible Vault secrets
├── runner/                  # Gitea Actions runner (dev machine)
├── scripts/                 # Helper scripts
└── docs/                    # Deployment documentation
```

## Getting Started

### Prerequisites

**Production Server:**
- Docker & Docker Compose installed
- Firewall configured (ports 80, 443, 2222)
- User `deploy` with Docker permissions
- SSH access configured

**Development Machine:**
- Docker & Docker Compose installed
- Ansible installed
- SSH key configured for production server

### Initial Setup

1. **Deploy Infrastructure Stacks (Production)**
   ```bash
   cd deployment/stacks/traefik && docker compose up -d
   cd ../postgres && docker compose up -d
   cd ../registry && docker compose up -d
   cd ../gitea && docker compose up -d
   cd ../monitoring && docker compose up -d
   ```

2. **Setup Gitea Runner (Development)**
   ```bash
   cd deployment/runner
   docker compose up -d
   ```

3. **Deploy Application**
   ```bash
   cd deployment/ansible
   ansible-playbook -i inventory/production.yml playbooks/deploy-application.yml
   ```

## Stack Documentation

Each stack has its own README with detailed configuration:

- [Traefik](stacks/traefik/README.md) - Reverse proxy setup
- [Gitea](stacks/gitea/README.md) - Git server configuration
- [Registry](stacks/registry/README.md) - Private registry setup
- [Application](stacks/application/README.md) - Application deployment
- [PostgreSQL](stacks/postgres/README.md) - Database configuration
- [Monitoring](stacks/monitoring/README.md) - Monitoring stack

## Deployment Commands

### Manual Deployment
```bash
./scripts/deploy.sh
```

### Rollback to Previous Version
```bash
./scripts/rollback.sh
```

### Update Specific Stack
```bash
cd stacks/<stack-name>
docker compose pull
docker compose up -d
```

## CI/CD Pipeline

The CI/CD pipeline is defined in `.gitea/workflows/deploy.yml` and runs on push to the main branch:

1. **Build Stage**: Build Docker image
2. **Push Stage**: Push to private registry
3. **Deploy Stage**: Deploy to production via Ansible

## Monitoring

Access monitoring tools:

- **Portainer**: https://portainer.yourdomain.com
- **Grafana**: https://grafana.yourdomain.com
- **Prometheus**: https://prometheus.yourdomain.com

## Backup & Recovery

### Automated Backups

- **PostgreSQL**: Daily backups with 7-day retention
- **Gitea Data**: Weekly backups
- **Registry Images**: On-demand backups

### Manual Backup
```bash
ansible-playbook -i inventory/production.yml playbooks/backup.yml
```

### Restore from Backup
```bash
ansible-playbook -i inventory/production.yml playbooks/restore.yml
```

## Security

- All external services behind Traefik with HTTPS
- Private registry with BasicAuth
- Secrets managed via Ansible Vault
- Regular security updates via Watchtower

## Troubleshooting

### Check Stack Health
```bash
cd stacks/<stack-name>
docker compose ps
docker compose logs -f
```

### Check Service Connectivity
```bash
curl -I https://app.yourdomain.com
docker network inspect traefik-public
```

### View Logs
```bash
# Application logs
docker compose -f stacks/application/docker-compose.yml logs -f app-php

# Traefik logs
docker compose -f stacks/traefik/docker-compose.yml logs -f
```

## Support

For issues and questions, see:
- [Troubleshooting Guide](docs/troubleshooting.md)
- [FAQ](docs/faq.md)
- [Migration Guide](docs/migration.md)

---

## Migration from Docker Swarm

See [Migration Guide](docs/migration-from-swarm.md) for detailed instructions on migrating from the old Docker Swarm setup.

## License

This deployment configuration is part of the Custom PHP Framework project.

707
deployment/SETUP-GUIDE.md
Normal file
@@ -0,0 +1,707 @@
# Production Deployment - Complete Setup Guide

**Status**: 🚧 In Progress
**Last Updated**: 2025-10-30
**Target Server**: 94.16.110.151 (Netcup)

---

## Overview

This guide walks through the complete setup of production deployment from scratch, covering:
1. Gitea Runner (Development Machine)
2. Ansible Vault Secrets
3. Production Server Initial Setup
4. CI/CD Pipeline Testing
5. Monitoring & Health Checks

---

## Prerequisites

**Development Machine:**
- ✅ Docker & Docker Compose installed
- ✅ Ansible installed (`pip install ansible`)
- ✅ SSH key for production server
- ✅ Access to Gitea admin panel

**Production Server (94.16.110.151):**
- ✅ Docker & Docker Compose installed
- ✅ User `deploy` created with Docker permissions
- ✅ SSH access configured
- ✅ Firewall configured (ports 80, 443, 2222)

---

## Phase 1: Gitea Runner Setup (Development Machine)

### Step 1.1: Get Gitea Registration Token

1. Navigate to Gitea admin panel:
   ```
   https://git.michaelschiemer.de/admin/actions/runners
   ```

2. Click **"Create New Runner"**

3. Copy the registration token (format: `<long-random-string>`)

### Step 1.2: Configure Runner Environment

```bash
cd deployment/gitea-runner

# Copy environment template
cp .env.example .env

# Edit configuration
nano .env
```

**Required Configuration in `.env`:**
```bash
# Gitea Instance URL
GITEA_INSTANCE_URL=https://git.michaelschiemer.de

# Registration Token (from Step 1.1)
GITEA_RUNNER_REGISTRATION_TOKEN=<your-token-from-gitea>

# Runner Name (appears in Gitea UI)
GITEA_RUNNER_NAME=dev-runner-01

# Runner Labels (environments this runner supports)
GITEA_RUNNER_LABELS=ubuntu-latest:docker://node:20-bullseye,ubuntu-22.04:docker://catthehacker/ubuntu:act-22.04

# Runner Capacity (concurrent jobs)
GITEA_RUNNER_CAPACITY=1

# Docker-in-Docker settings
DOCKER_HOST=tcp://docker-dind:2376
DOCKER_TLS_VERIFY=1
```

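Before registering, it can help to verify that the template was filled in completely. A minimal sketch; the variable names come from the template above, the helper itself is hypothetical:

```shell
# Hypothetical helper: fail fast if a required variable is missing from .env
check_env_file() {
  local file="$1" missing=0 var
  for var in GITEA_INSTANCE_URL GITEA_RUNNER_REGISTRATION_TOKEN GITEA_RUNNER_NAME; do
    grep -q "^${var}=" "$file" || { echo "Missing: $var"; missing=1; }
  done
  return "$missing"
}

# Usage:
# check_env_file deployment/gitea-runner/.env && echo "looks complete"
```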
### Step 1.3: Register and Start Runner

```bash
# Register runner with Gitea
./register.sh

# Expected output:
# ✅ Starting Gitea Runner services...
# ✅ Runner registered successfully
# ✅ Runner is now active

# Verify runner is running
docker compose ps

# Check logs
docker compose logs -f gitea-runner
```

### Step 1.4: Verify Runner in Gitea

1. Go to: https://git.michaelschiemer.de/admin/actions/runners
2. You should see `dev-runner-01` listed as **"Idle"** or **"Active"**
3. Status should be green/online

**✅ Checkpoint**: Runner visible in Gitea UI and showing as "Idle"

---

## Phase 2: Ansible Vault Secrets Setup

### Step 2.1: Create Vault Password

```bash
cd deployment/ansible/secrets

# Create vault password file (gitignored)
echo "your-secure-vault-password-here" > .vault_pass

# Secure the file
chmod 600 .vault_pass
```

**⚠️ IMPORTANT**: Store this vault password in your password manager! You'll need it for all Ansible operations.

### Step 2.2: Create Production Secrets File

```bash
# Copy example template
cp production.vault.yml.example production.vault.yml

# Edit with your actual secrets
nano production.vault.yml
```

**Required Secrets in `production.vault.yml`:**
```yaml
---
# Docker Registry Credentials
docker_registry_user: "admin"
docker_registry_password: "your-registry-password"

# Application Environment Variables
app_key: "base64:generated-32-character-key"
app_env: "production"
app_debug: "false"

# Database Credentials
db_host: "postgres"
db_port: "5432"
db_name: "framework_production"
db_user: "framework_user"
db_password: "your-secure-db-password"

# Redis Configuration
redis_host: "redis"
redis_port: "6379"
redis_password: "your-secure-redis-password"

# Cache Configuration
cache_driver: "redis"
cache_prefix: "framework"

# Queue Configuration
queue_connection: "redis"
queue_name: "default"

# Session Configuration
session_driver: "redis"
session_lifetime: "120"

# Encryption Keys
encryption_key: "base64:your-32-byte-encryption-key"
state_encryption_key: "base64:your-32-byte-state-encryption-key"

# SMTP Configuration (Optional)
mail_mailer: "smtp"
mail_host: "smtp.example.com"
mail_port: "587"
mail_username: "noreply@michaelschiemer.de"
mail_password: "your-smtp-password"
mail_encryption: "tls"
mail_from_address: "noreply@michaelschiemer.de"
mail_from_name: "Framework"

# Admin IPs (comma-separated)
admin_allowed_ips: "127.0.0.1,::1"

# Rate Limiting
rate_limit_enabled: "true"
rate_limit_default: "60"
rate_limit_window: "60"
```

### Step 2.3: Generate Encryption Keys

```bash
# Generate app_key (32 bytes base64)
php -r "echo 'base64:' . base64_encode(random_bytes(32)) . PHP_EOL;"

# Generate encryption_key (32 bytes base64)
php -r "echo 'base64:' . base64_encode(random_bytes(32)) . PHP_EOL;"

# Generate state_encryption_key (32 bytes base64)
php -r "echo 'base64:' . base64_encode(random_bytes(32)) . PHP_EOL;"

# Copy these values into production.vault.yml
```

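If PHP is not installed on the development machine, the same keys can be produced with openssl (assumed available); the output format matches the `base64:` prefix used above:

```shell
# Generate the three 32-byte keys with openssl instead of PHP
for key in app_key encryption_key state_encryption_key; do
  printf '%s: "base64:%s"\n' "$key" "$(openssl rand -base64 32)"
done
```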
### Step 2.4: Encrypt Secrets File

```bash
# Encrypt the secrets file
ansible-vault encrypt production.vault.yml \
  --vault-password-file .vault_pass

# Verify encryption worked: the first line must be the vault header
head -1 production.vault.yml
# Should output: $ANSIBLE_VAULT;1.1;AES256

# Test decryption (view content)
ansible-vault view production.vault.yml \
  --vault-password-file .vault_pass
```

**✅ Checkpoint**: `production.vault.yml` is encrypted and can be decrypted with vault password

---

## Phase 3: Production Server Initial Setup

### Step 3.1: Deploy Infrastructure Stacks

**On Production Server (SSH as deploy user):**

```bash
# SSH to production server
ssh deploy@94.16.110.151

# Navigate to stacks directory
cd ~/deployment/stacks

# Deploy stacks in order

# 1. Traefik (Reverse Proxy & SSL)
cd traefik
docker compose up -d
docker compose logs -f
# Wait for "Configuration loaded" message
# Ctrl+C to exit logs

# 2. PostgreSQL (Database)
cd ../postgresql
docker compose up -d
docker compose logs -f
# Wait for "database system is ready to accept connections"
# Ctrl+C to exit logs

# 3. Docker Registry (Private Registry)
cd ../registry
docker compose up -d
docker compose logs -f
# Wait for "listening on [::]:5000"
# Ctrl+C to exit logs

# 4. Gitea (Git Server + MySQL + Redis)
cd ../gitea
docker compose up -d
docker compose logs -f
# Wait for "Listen: http://0.0.0.0:3000"
# Ctrl+C to exit logs

# 5. Monitoring (Portainer + Grafana + Prometheus)
cd ../monitoring
docker compose up -d
docker compose logs -f
# Wait for all services to start
# Ctrl+C to exit logs

# Verify all stacks are running
docker ps
```

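The five start-up blocks above can also be run as one loop. A sketch assuming the stack directory names used in this guide; the helper itself is hypothetical and not part of the repo:

```shell
# Hypothetical helper: bring up all stacks in dependency order
deploy_stacks() {
  local base="$1" stack
  for stack in traefik postgresql registry gitea monitoring; do
    ( cd "$base/$stack" && docker compose up -d ) || return 1
  done
}

# Usage (on the production server):
# deploy_stacks ~/deployment/stacks
```

Note that this skips the interactive log-watching between stacks, so check `docker ps` afterwards.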
### Step 3.2: Configure Gitea

1. Access Gitea: https://git.michaelschiemer.de
2. Complete initial setup wizard:
   - Database: Use MySQL from stack
   - Admin account: Create admin user
   - Repository root: `/data/git/repositories`
   - Enable Actions in admin settings

### Step 3.3: Create Docker Registry User

```bash
# SSH to production server
ssh deploy@94.16.110.151

# Create registry htpasswd entry
cd ~/deployment/stacks/registry
docker compose exec registry htpasswd -Bbn admin your-registry-password >> auth/htpasswd

# Test login
docker login git.michaelschiemer.de:5000
# Username: admin
# Password: your-registry-password
```

### Step 3.4: Setup SSH Keys for Ansible

**On Development Machine:**

```bash
# Generate SSH key if not exists
ssh-keygen -t ed25519 -f ~/.ssh/production -C "ansible-deploy"

# Copy public key to production server
ssh-copy-id -i ~/.ssh/production.pub deploy@94.16.110.151

# Test SSH connection
ssh -i ~/.ssh/production deploy@94.16.110.151 "echo 'SSH works!'"
```

**✅ Checkpoint**: All infrastructure stacks running, SSH access configured

---

## Phase 4: Deploy Application Secrets

### Step 4.1: Deploy Secrets to Production

**On Development Machine:**

```bash
cd deployment/ansible

# Test Ansible connectivity
ansible production -m ping

# Deploy secrets to production server
ansible-playbook playbooks/setup-production-secrets.yml \
  --vault-password-file secrets/.vault_pass

# Expected output:
# PLAY [Deploy Production Secrets] ***
# TASK [Ensure secrets directory exists] *** ok
# TASK [Deploy environment file] *** changed
# PLAY RECAP *** production: ok=2 changed=1
```

### Step 4.2: Verify Secrets Deployed

```bash
# SSH to production server
ssh deploy@94.16.110.151

# Check secrets directory
ls -la ~/secrets/

# Verify .env.production exists (do NOT cat it - it contains secrets!)
file ~/secrets/.env.production
# Should output: .env.production: ASCII text

# Check file permissions
stat -c '%a %U' ~/secrets/.env.production
# Should be 600 (readable only by the deploy user)
```

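The permission check can be turned into a pass/fail assertion. A minimal sketch; the helper name is made up for illustration, and `stat -c` is GNU coreutils:

```shell
# Hypothetical helper: succeed only if a file is owner-only (mode 600)
check_secret_perms() {
  [ "$(stat -c '%a' "$1")" = "600" ]
}

# Usage (on the production server):
# check_secret_perms ~/secrets/.env.production && echo "permissions OK"
```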
**✅ Checkpoint**: Secrets deployed to production server in `~/secrets/.env.production`

---

## Phase 5: Setup Gitea Secrets for CI/CD

### Step 5.1: Configure Repository Secrets

1. Go to repository settings in Gitea:
   ```
   https://git.michaelschiemer.de/<username>/michaelschiemer/settings/secrets
   ```

2. Add the following secrets:

**REGISTRY_USER**
```
admin
```

**REGISTRY_PASSWORD**
```
<your-registry-password>
```

**SSH_PRIVATE_KEY**
```
<content-of-~/.ssh/production>
```

**ANSIBLE_VAULT_PASSWORD**
```
<your-vault-password-from-step-2.1>
```

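Instead of clicking through the UI, the secrets can also be pushed via Gitea's REST API. A hedged sketch: it assumes Gitea ≥ 1.21 (which added an Actions-secrets endpoint) and a personal access token; the helper name, token handling, and owner variable are illustrative:

```shell
# Hypothetical helper: create/update a repository Actions secret via the API
set_gitea_secret() {
  local name="$1" value="$2"
  curl -fsS -X PUT \
    -H "Authorization: token ${GITEA_TOKEN}" \
    -H "Content-Type: application/json" \
    -d "{\"data\": \"${value}\"}" \
    "https://git.michaelschiemer.de/api/v1/repos/${GITEA_OWNER}/michaelschiemer/actions/secrets/${name}"
}

# Usage (not run here; requires network access and a valid token):
# GITEA_TOKEN=... GITEA_OWNER=<username> set_gitea_secret REGISTRY_USER admin
```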
### Step 5.2: Verify Secrets in Gitea

1. Check that the secrets are visible in repository settings
2. Each secret should show a "Hidden" value with a green checkmark

**✅ Checkpoint**: All required secrets configured in Gitea repository

---

## Phase 6: First Deployment Test

### Step 6.1: Manual Deployment Dry-Run

**On Development Machine:**

```bash
cd deployment/ansible

# Test deployment (check mode - no changes)
ansible-playbook -i inventory/production.yml \
  playbooks/deploy-update.yml \
  -e "image_tag=test-$(date +%s)" \
  -e "git_commit_sha=test123" \
  -e "deployment_timestamp=$(date -u +%Y-%m-%dT%H:%M:%SZ)" \
  -e "docker_registry_username=admin" \
  -e "docker_registry_password=your-registry-password" \
  --check

# Expected: Should show what would be changed
```

### Step 6.2: Trigger CI/CD Pipeline

**Option A: Push to main branch**
```bash
# Make a small change (add comment to file)
echo "# Deployment test $(date)" >> deployment/DEPLOYMENT_TEST.txt

# Commit and push to main
git add deployment/DEPLOYMENT_TEST.txt
git commit -m "test(deployment): trigger CI/CD pipeline"
git push origin main
```

**Option B: Manual trigger**
1. Go to Gitea repository: Actions tab
2. Select workflow: "Production Deployment Pipeline"
3. Click "Run workflow"
4. Select branch: main
5. Click "Run"

### Step 6.3: Monitor Pipeline Execution

1. Go to: https://git.michaelschiemer.de/<username>/michaelschiemer/actions
2. Find the running workflow
3. Click to view details
4. Monitor each job:
   - ✅ Test: Tests & quality checks pass
   - ✅ Build: Docker image built and pushed
   - ✅ Deploy: Application deployed to production

### Step 6.4: Verify Deployment

```bash
# Test health endpoint
curl -k https://michaelschiemer.de/health

# Expected response:
# {"status":"healthy","timestamp":"2025-10-30T14:30:00Z"}

# Check application logs
ssh deploy@94.16.110.151 "docker compose -f ~/application/docker-compose.yml logs -f app-php"
```

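The expected JSON above can be checked mechanically instead of by eye. A minimal sketch; the helper is illustrative, not part of the repo:

```shell
# Hypothetical helper: does a health response report "healthy"?
is_healthy() {
  printf '%s' "$1" | grep -q '"status":"healthy"'
}

# Usage:
# is_healthy "$(curl -fsS https://michaelschiemer.de/health)" && echo "healthy"
```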
**✅ Checkpoint**: CI/CD pipeline executed successfully, application running on production

---

## Phase 7: Monitoring & Health Checks
|
||||
|
||||
### Step 7.1: Access Monitoring Tools
|
||||
|
||||
**Portainer**
|
||||
```
|
||||
https://portainer.michaelschiemer.de
|
||||
```
|
||||
- View all running containers
|
||||
- Monitor resource usage
|
||||
- Check logs
|
||||
|
||||
**Grafana**
|
||||
```
|
||||
https://grafana.michaelschiemer.de
|
||||
```
|
||||
- Username: admin
|
||||
- Password: (set during setup)
|
||||
- View application metrics
|
||||
- Setup alerts
|
||||
|
||||
**Prometheus**
|
||||
```
|
||||
https://prometheus.michaelschiemer.de
|
||||
```
|
||||
- Query metrics
|
||||
- Check targets
|
||||
- Verify scraping
|
||||
|
||||
### Step 7.2: Configure Alerting
|
||||
|
||||
**In Grafana:**
|
||||
|
||||
1. Go to Alerting > Contact points
|
||||
2. Add email notification channel
|
||||
3. Create alert rules:
|
||||
- High CPU usage (>80% for 5 minutes)
|
||||
- High memory usage (>80%)
|
||||
- Application down (health check fails)
|
||||
- Database connection failures
|
||||
|
||||
### Step 7.3: Setup Health Check Monitoring

```bash
# Create cron job on production server
ssh deploy@94.16.110.151

# Edit the crontab
crontab -e

# Add line:
*/5 * * * * curl -f https://michaelschiemer.de/health || echo "Health check failed" | mail -s "Production Health Check Failed" admin@michaelschiemer.de
```
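A single `curl -f` in cron alerts on every transient blip; a small retry wrapper only raises an alert after several consecutive failures. This is a minimal sketch, not part of the repository; the `retry` helper name and its arguments are illustrative.

```shell
# retry <attempts> <delay-seconds> <command...>
# Runs the command until it succeeds or attempts are exhausted.
retry() {
    attempts=$1; delay=$2; shift 2
    i=1
    while [ "$i" -le "$attempts" ]; do
        "$@" && return 0
        i=$((i + 1))
        sleep "$delay"
    done
    return 1
}

# Demonstration with stand-in commands:
retry 3 0 true && echo "ok"
retry 2 0 false || echo "gave up"
```

In the cron job above, the `curl -f ... || echo ... | mail ...` one-liner could then become `retry 3 10 curl -fsS https://michaelschiemer.de/health > /dev/null || ... | mail ...`.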
**✅ Checkpoint**: Monitoring tools accessible, alerts configured

---

## Phase 8: Backup & Rollback Testing

### Step 8.1: Verify Backups

```bash
# SSH to production server
ssh deploy@94.16.110.151

# Check backup directory
ls -lh ~/backups/

# Should see backup folders with timestamps
# Example: 2025-10-30T14-30-00/
```
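The timestamped folder names shown above follow an ISO-like pattern with colons replaced by hyphens so they are safe as directory names. A sketch of producing one (the path under `TMPDIR` is illustrative):

```shell
# Build a filesystem-safe UTC timestamp, e.g. 2025-10-30T14-30-00
ts=$(date -u +%Y-%m-%dT%H-%M-%S)
mkdir -p "${TMPDIR:-/tmp}/backups/$ts"
echo "$ts"
```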
### Step 8.2: Test Rollback

```bash
# On development machine
cd deployment/ansible

# Rollback to previous version
ansible-playbook -i inventory/production.yml \
  playbooks/rollback.yml

# Verify rollback worked
curl -k https://michaelschiemer.de/health
```

**✅ Checkpoint**: Backups created, rollback mechanism tested
---

## Verification Checklist

### Infrastructure
- [ ] Traefik running and routing HTTPS
- [ ] PostgreSQL accessible and accepting connections
- [ ] Docker Registry accessible at git.michaelschiemer.de:5000
- [ ] Gitea accessible at git.michaelschiemer.de
- [ ] Monitoring stack (Portainer, Grafana, Prometheus) running

### Deployment
- [ ] Gitea Runner registered and showing "Idle" in UI
- [ ] Ansible Vault secrets encrypted and deployable
- [ ] SSH access configured for Ansible
- [ ] Repository secrets configured in Gitea
- [ ] CI/CD pipeline runs successfully end-to-end

### Application
- [ ] Application accessible at https://michaelschiemer.de
- [ ] Health endpoint returns 200 OK
- [ ] Database migrations ran successfully
- [ ] Queue workers processing jobs
- [ ] Logs showing no errors

### Monitoring
- [ ] Portainer shows all containers running
- [ ] Grafana dashboards displaying metrics
- [ ] Prometheus scraping all targets
- [ ] Alerts configured and sending notifications

### Security
- [ ] All secrets encrypted with Ansible Vault
- [ ] SSH keys secured (600 permissions)
- [ ] Registry requires authentication
- [ ] HTTPS enforced on all public endpoints
- [ ] Firewall configured correctly

---
## Troubleshooting

### Gitea Runner Not Registering

**Symptoms**: Runner not appearing in Gitea UI after running `./register.sh`

**Solutions**:
```bash
# Check runner logs
docker compose logs gitea-runner

# Verify registration token is correct
nano .env
# Check GITEA_RUNNER_REGISTRATION_TOKEN

# Unregister and re-register
./unregister.sh
./register.sh
```
### Ansible Connection Failed

**Symptoms**: `Failed to connect to the host via ssh`

**Solutions**:
```bash
# Test SSH manually
ssh -i ~/.ssh/production deploy@94.16.110.151

# Check SSH key permissions
chmod 600 ~/.ssh/production

# Verify SSH key is added to server
ssh-copy-id -i ~/.ssh/production.pub deploy@94.16.110.151
```
### Docker Registry Authentication Failed

**Symptoms**: `unauthorized: authentication required`

**Solutions**:
```bash
# Verify credentials
docker login git.michaelschiemer.de:5000
# Username: admin
# Password: <your-registry-password>

# Check htpasswd file on server
ssh deploy@94.16.110.151 "cat ~/deployment/stacks/registry/auth/htpasswd"
```
### Deployment Health Check Failed

**Symptoms**: Health check returns 404 or times out

**Solutions**:
```bash
# Check application logs
ssh deploy@94.16.110.151 "docker compose -f ~/application/docker-compose.yml logs app-php"

# Verify application stack is running
ssh deploy@94.16.110.151 "docker ps"

# Check Traefik routing
ssh deploy@94.16.110.151 "docker compose -f ~/deployment/stacks/traefik/docker-compose.yml logs"
```
---

## Next Steps

After successful deployment:

1. **Configure DNS**: Point michaelschiemer.de to 94.16.110.151
2. **SSL Certificates**: Traefik will automatically request Let's Encrypt certificates
3. **Monitoring**: Review Grafana dashboards and set up additional alerts
4. **Backups**: Configure automated database backups
5. **Performance**: Review application performance and optimize
6. **Documentation**: Update team documentation with production procedures

---

## Support Contacts

- **Infrastructure Issues**: Check Portainer logs
- **Deployment Issues**: Review Gitea Actions logs
- **Application Issues**: Check application logs in Portainer
- **Emergency Rollback**: Run `ansible-playbook playbooks/rollback.yml`

---

**Setup Status**: 🚧 In Progress
**Next Action**: Start with Phase 1 - Gitea Runner Setup
312
deployment/ansible/README.md
Normal file
@@ -0,0 +1,312 @@
# Ansible Deployment Configuration

This directory contains Ansible playbooks and configuration for deploying the Custom PHP Framework to production.

## Directory Structure

```
deployment/ansible/
├── ansible.cfg                        # Ansible configuration
├── inventory/
│   └── production.yml                 # Production server inventory
├── playbooks/
│   ├── setup-production-secrets.yml   # Deploy secrets
│   ├── deploy-update.yml              # Deploy application updates
│   └── rollback.yml                   # Rollback deployments
├── secrets/
│   ├── .gitignore                     # Prevent committing secrets
│   └── production.vault.yml.example   # Example vault file
└── templates/
    └── .env.production.j2             # Environment file template
```
## Prerequisites

1. **Ansible Installed**:
   ```bash
   pip install ansible
   ```

2. **SSH Access**:
   - SSH key configured at `~/.ssh/production`
   - Key added to production server's authorized_keys for `deploy` user

3. **Ansible Vault Password**:
   - Create `.vault_pass` file in `secrets/` directory
   - Add vault password to this file (one line)
   - File is gitignored for security
## Setup Instructions

### 1. Create Production Secrets

```bash
cd deployment/ansible/secrets

# Copy example file
cp production.vault.yml.example production.vault.yml

# Edit with your actual secrets
nano production.vault.yml

# Encrypt the file
ansible-vault encrypt production.vault.yml
# Enter vault password when prompted
```

### 2. Store Vault Password

```bash
# Create vault password file
echo "your-vault-password-here" > secrets/.vault_pass

# Secure the file
chmod 600 secrets/.vault_pass
```
### 3. Configure SSH Key

```bash
# Generate SSH key if needed
ssh-keygen -t ed25519 -f ~/.ssh/production -C "ansible-deploy"

# Copy public key to production server
ssh-copy-id -i ~/.ssh/production.pub deploy@94.16.110.151
```
## Running Playbooks

### Deploy Production Secrets

**First-time setup** - Deploy secrets to production server:

```bash
ansible-playbook playbooks/setup-production-secrets.yml \
  --vault-password-file secrets/.vault_pass
```

### Deploy Application Update

**Automated via Gitea Actions** - or run manually:

```bash
ansible-playbook playbooks/deploy-update.yml \
  -e "image_tag=sha-abc123" \
  -e "git_commit_sha=abc123" \
  -e "deployment_timestamp=$(date -u +%Y-%m-%dT%H:%M:%SZ)" \
  -e "docker_registry_username=gitea-user" \
  -e "docker_registry_password=your-registry-password"
```
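When running the update manually, the `image_tag` and `git_commit_sha` values passed above can be derived from the checked-out commit instead of being typed by hand. A sketch (the `sha-` tag prefix follows the example above; it is an assumption that the CI pipeline tags images the same way):

```shell
# Derive deployment variables from the current git checkout
sha=$(git rev-parse --short HEAD 2>/dev/null || echo "unknown")
ts=$(date -u +%Y-%m-%dT%H:%M:%SZ)
echo "image_tag=sha-${sha}"
echo "git_commit_sha=${sha}"
echo "deployment_timestamp=${ts}"
```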
### Rollback Deployment

**Rollback to previous version**:

```bash
ansible-playbook playbooks/rollback.yml
```

**Rollback to specific version**:

```bash
ansible-playbook playbooks/rollback.yml \
  -e "rollback_to_version=2025-01-28T15-30-00"
```
## Ansible Vault Operations

### View Encrypted File

```bash
ansible-vault view secrets/production.vault.yml \
  --vault-password-file secrets/.vault_pass
```

### Edit Encrypted File

```bash
ansible-vault edit secrets/production.vault.yml \
  --vault-password-file secrets/.vault_pass
```

### Change Vault Password

```bash
ansible-vault rekey secrets/production.vault.yml \
  --vault-password-file secrets/.vault_pass
```

### Decrypt File (Temporarily)

```bash
ansible-vault decrypt secrets/production.vault.yml \
  --vault-password-file secrets/.vault_pass

# DO NOT COMMIT DECRYPTED FILE!

# Re-encrypt when done
ansible-vault encrypt secrets/production.vault.yml \
  --vault-password-file secrets/.vault_pass
```
## Testing Playbooks

### Test with Check Mode (Dry Run)

```bash
ansible-playbook playbooks/deploy-update.yml \
  --check \
  -e "image_tag=test"
```

### Test Connection

```bash
ansible production -m ping
```

### Verify Inventory

```bash
ansible-inventory --list -y
```
## Security Best Practices

1. **Never commit unencrypted secrets**
   - `production.vault.yml` must be encrypted
   - `.vault_pass` is gitignored
   - Use `.example` files for documentation

2. **Rotate secrets regularly**
   - Update vault file
   - Re-run `setup-production-secrets.yml`
   - Restart affected services

3. **Limit SSH key access**
   - Use separate SSH key for Ansible
   - Limit key to `deploy` user only
   - Consider IP restrictions

4. **Vault password security**
   - Store vault password in secure password manager
   - Don't share via insecure channels
   - Use different passwords for dev/staging/prod
## Troubleshooting

### Vault Decryption Failed

**Error**: `Decryption failed (no vault secrets were found)`

**Solution**:
```bash
# Verify vault password is correct
ansible-vault view secrets/production.vault.yml \
  --vault-password-file secrets/.vault_pass

# If password is wrong, you'll need the original password to decrypt
```

### SSH Connection Failed

**Error**: `Failed to connect to the host`

**Solutions**:
```bash
# Test SSH connection manually
ssh -i ~/.ssh/production deploy@94.16.110.151

# Check SSH key permissions
chmod 600 ~/.ssh/production
chmod 644 ~/.ssh/production.pub

# Verify SSH key is added to server
ssh-copy-id -i ~/.ssh/production.pub deploy@94.16.110.151
```

### Docker Registry Authentication Failed

**Error**: `unauthorized: authentication required`

**Solution**:
```bash
# Verify registry credentials in vault file
ansible-vault view secrets/production.vault.yml \
  --vault-password-file secrets/.vault_pass

# Test registry login manually on production server
docker login git.michaelschiemer.de:5000
```

### Service Not Starting

**Check service logs**:
```bash
# SSH to production server
ssh -i ~/.ssh/production deploy@94.16.110.151

# Check Docker service logs
docker service logs app_app

# Check stack status
docker stack ps app
```
## CI/CD Integration

These playbooks are automatically executed by Gitea Actions workflows:

- **`.gitea/workflows/production-deploy.yml`** - Calls `deploy-update.yml` on push to main
- **`.gitea/workflows/update-production-secrets.yml`** - Calls `setup-production-secrets.yml` on manual trigger

Vault password is stored as Gitea Actions secret: `ANSIBLE_VAULT_PASSWORD`
## Inventory Variables

All deployment variables are defined in `inventory/production.yml`:

| Variable | Description | Default |
|----------|-------------|---------|
| `docker_registry` | Docker registry URL | git.michaelschiemer.de:5000 |
| `app_name` | Application name | framework |
| `app_domain` | Production domain | michaelschiemer.de |
| `stack_name` | Docker stack name | app |
| `compose_file` | Docker Compose file path | /home/deploy/docker-compose.prod.yml |
| `secrets_path` | Secrets directory | /home/deploy/secrets |
| `backups_path` | Backups directory | /home/deploy/backups |
| `max_rollback_versions` | Backup retention | 5 |
| `health_check_url` | Health check endpoint | https://michaelschiemer.de/health |
## Backup Management

Backups are automatically created before each deployment:

- **Location**: `/home/deploy/backups/`
- **Retention**: Last 5 versions kept
- **Contents**:
  - `current_image.txt` - Previously deployed image
  - `stack_status.txt` - Stack status before deployment
  - `deployment_metadata.txt` - Deployment details

### List Available Backups

```bash
ssh -i ~/.ssh/production deploy@94.16.110.151 \
  "ls -lh /home/deploy/backups/"
```

### Manual Backup

```bash
ansible-playbook playbooks/deploy-update.yml \
  --tags backup \
  -e "image_tag=current"
```
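The 5-version retention described above amounts to deleting all but the newest timestamped directories. A minimal sketch of that pruning step (the playbook's actual implementation may differ; this assumes directory names without spaces and GNU `xargs` with `-r`):

```shell
# Keep only the 5 newest backup directories (sorted by modification time)
backups_path="${backups_path:-/home/deploy/backups}"
ls -1dt "$backups_path"/*/ 2>/dev/null | tail -n +6 | xargs -r rm -rf
```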

## Support

For issues with:
- **Playbooks**: Check this README and playbook comments
- **Vault**: See Ansible Vault documentation
- **Deployment**: Review Gitea Actions logs
- **Production**: SSH to server and check Docker logs
15
deployment/ansible/ansible.cfg
Normal file
@@ -0,0 +1,15 @@
[defaults]
inventory = inventory/production.yml
host_key_checking = False
remote_user = deploy
private_key_file = ~/.ssh/production
timeout = 30
retry_files_enabled = False
gathering = smart
fact_caching = jsonfile
fact_caching_connection = /tmp/ansible_facts
fact_caching_timeout = 3600

[ssh_connection]
pipelining = True
control_path = /tmp/ansible-ssh-%%h-%%p-%%r
40
deployment/ansible/inventory/production.yml
Normal file
@@ -0,0 +1,40 @@
---
all:
  hosts:
    production:
      ansible_host: 94.16.110.151
      ansible_user: deploy
      ansible_python_interpreter: /usr/bin/python3
      ansible_ssh_private_key_file: ~/.ssh/production

  vars:
    # Docker Registry
    docker_registry: git.michaelschiemer.de:5000
    docker_registry_url: git.michaelschiemer.de:5000
    # Registry credentials (inventory defaults; -e and vault values take
    # precedence, so no self-referencing default() expression is needed)
    docker_registry_username: admin
    docker_registry_password: registry-secure-password-2025

    # Application Configuration
    app_name: framework
    app_domain: michaelschiemer.de
    app_image: "{{ docker_registry }}/{{ app_name }}"

    # Docker Stack
    stack_name: app
    compose_file: /home/deploy/docker-compose.prod.yml

    # Deployment Paths
    deploy_user_home: /home/deploy
    app_base_path: "{{ deploy_user_home }}/app"
    secrets_path: "{{ deploy_user_home }}/secrets"
    backups_path: "{{ deploy_user_home }}/backups"

    # Health Check
    health_check_url: "https://{{ app_domain }}/health"
    health_check_retries: 10
    health_check_delay: 10

    # Rollback Configuration
    max_rollback_versions: 5
    rollback_timeout: 300
187
deployment/ansible/scripts/init-secrets.sh
Executable file
@@ -0,0 +1,187 @@
#!/bin/bash
set -e

# Initialize Ansible Secrets
# This script helps set up the Ansible vault file for the first time

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
ANSIBLE_DIR="$(dirname "$SCRIPT_DIR")"
SECRETS_DIR="$ANSIBLE_DIR/secrets"

echo "🔐 Ansible Secrets Initialization"
echo "=================================="
echo ""

# Check if running from correct directory
if [ ! -f "$ANSIBLE_DIR/ansible.cfg" ]; then
    echo "❌ Error: Must run from deployment/ansible directory"
    exit 1
fi

# Step 1: Create vault password file
echo "Step 1: Vault Password"
echo "----------------------"

if [ -f "$SECRETS_DIR/.vault_pass" ]; then
    echo "⚠️  Vault password file already exists: $SECRETS_DIR/.vault_pass"
    read -p "Do you want to replace it? (y/N): " -n 1 -r
    echo
    if [[ ! $REPLY =~ ^[Yy]$ ]]; then
        echo "Keeping existing vault password file."
    else
        rm "$SECRETS_DIR/.vault_pass"
        read -sp "Enter new vault password: " VAULT_PASS
        echo
        read -sp "Confirm vault password: " VAULT_PASS_CONFIRM
        echo

        if [ "$VAULT_PASS" != "$VAULT_PASS_CONFIRM" ]; then
            echo "❌ Passwords don't match!"
            exit 1
        fi

        echo "$VAULT_PASS" > "$SECRETS_DIR/.vault_pass"
        chmod 600 "$SECRETS_DIR/.vault_pass"
        echo "✅ Vault password file created"
    fi
else
    read -sp "Enter vault password: " VAULT_PASS
    echo
    read -sp "Confirm vault password: " VAULT_PASS_CONFIRM
    echo

    if [ "$VAULT_PASS" != "$VAULT_PASS_CONFIRM" ]; then
        echo "❌ Passwords don't match!"
        exit 1
    fi

    echo "$VAULT_PASS" > "$SECRETS_DIR/.vault_pass"
    chmod 600 "$SECRETS_DIR/.vault_pass"
    echo "✅ Vault password file created"
fi
echo ""

# Step 2: Create production vault file
echo "Step 2: Production Vault File"
echo "-----------------------------"

if [ -f "$SECRETS_DIR/production.vault.yml" ]; then
    echo "⚠️  Production vault file already exists"
    read -p "Do you want to decrypt and edit it? (y/N): " -n 1 -r
    echo
    if [[ $REPLY =~ ^[Yy]$ ]]; then
        ansible-vault edit "$SECRETS_DIR/production.vault.yml" \
            --vault-password-file "$SECRETS_DIR/.vault_pass"
        echo "✅ Vault file updated"
    fi
else
    echo "Creating new vault file from example..."
    cp "$SECRETS_DIR/production.vault.yml.example" "$SECRETS_DIR/production.vault.yml"

    echo ""
    echo "⚠️  IMPORTANT: You must edit the vault file and replace all 'change-me' values!"
    echo ""
    read -p "Press ENTER to edit the vault file now..."

    ${EDITOR:-nano} "$SECRETS_DIR/production.vault.yml"

    echo ""
    echo "Encrypting vault file..."
    ansible-vault encrypt "$SECRETS_DIR/production.vault.yml" \
        --vault-password-file "$SECRETS_DIR/.vault_pass"

    echo "✅ Production vault file created and encrypted"
fi
echo ""

# Step 3: Verify vault file
echo "Step 3: Verification"
echo "-------------------"

echo "Testing vault decryption..."
if ansible-vault view "$SECRETS_DIR/production.vault.yml" \
    --vault-password-file "$SECRETS_DIR/.vault_pass" > /dev/null 2>&1; then
    echo "✅ Vault file can be decrypted successfully"
else
    echo "❌ Failed to decrypt vault file!"
    exit 1
fi

# Check for example values
echo "Checking for unchanged example values..."
EXAMPLE_VALUES=$(ansible-vault view "$SECRETS_DIR/production.vault.yml" \
    --vault-password-file "$SECRETS_DIR/.vault_pass" | grep -c "change-me" || true)

if [ "$EXAMPLE_VALUES" -gt 0 ]; then
    echo "⚠️  WARNING: Found $EXAMPLE_VALUES 'change-me' placeholder values!"
    echo "   You must replace these before deploying to production."
    echo ""
    read -p "Do you want to edit the vault file now? (y/N): " -n 1 -r
    echo
    if [[ $REPLY =~ ^[Yy]$ ]]; then
        ansible-vault edit "$SECRETS_DIR/production.vault.yml" \
            --vault-password-file "$SECRETS_DIR/.vault_pass"
    fi
else
    echo "✅ No placeholder values found"
fi

echo ""
# Step 4: Setup SSH key
echo "Step 4: SSH Key Setup"
echo "--------------------"

SSH_KEY="$HOME/.ssh/production"

if [ -f "$SSH_KEY" ]; then
    echo "✅ SSH key already exists: $SSH_KEY"
else
    echo "SSH key not found. Creating new key..."
    ssh-keygen -t ed25519 -f "$SSH_KEY" -C "ansible-deploy" -N ""
    chmod 600 "$SSH_KEY"
    chmod 644 "$SSH_KEY.pub"
    echo "✅ SSH key created"
    echo ""
    echo "📋 Public key:"
    cat "$SSH_KEY.pub"
    echo ""
    echo "⚠️  You must add this public key to the production server:"
    echo "   ssh-copy-id -i $SSH_KEY.pub deploy@94.16.110.151"
    echo ""
    read -p "Press ENTER to continue..."
fi

echo ""

# Step 5: Test connection
echo "Step 5: Connection Test"
echo "----------------------"

read -p "Do you want to test the connection to production? (y/N): " -n 1 -r
echo
if [[ $REPLY =~ ^[Yy]$ ]]; then
    echo "Testing Ansible connection..."
    if ansible production -m ping 2>&1 | grep -q "SUCCESS"; then
        echo "✅ Connection successful!"
    else
        echo "❌ Connection failed!"
        echo ""
        echo "Troubleshooting steps:"
        echo "1. Verify SSH key is added to server: ssh-copy-id -i $SSH_KEY.pub deploy@94.16.110.151"
        echo "2. Test SSH manually: ssh -i $SSH_KEY deploy@94.16.110.151"
        echo "3. Check inventory file: cat $ANSIBLE_DIR/inventory/production.yml"
    fi
fi

echo ""
echo "✅ Setup complete!"
echo ""
echo "Next steps:"
echo "1. Review vault file: ansible-vault view secrets/production.vault.yml --vault-password-file secrets/.vault_pass"
echo "2. Deploy secrets: ansible-playbook playbooks/setup-production-secrets.yml --vault-password-file secrets/.vault_pass"
echo "3. Deploy application: See README.md for deployment instructions"
echo ""
echo "📖 For more information, see: deployment/ansible/README.md"
6
deployment/ansible/secrets/.gitignore
vendored
Normal file
@@ -0,0 +1,6 @@
# Ansible Vault Files
*.vault.yml
.vault_pass

# Decrypted secrets (if any)
*.decrypted.yml
26
deployment/ansible/secrets/production.vault.yml.example
Normal file
@@ -0,0 +1,26 @@
---
# Ansible Vault Example
# Copy this file to production.vault.yml and encrypt with:
#   ansible-vault encrypt production.vault.yml

# Database Credentials
vault_db_password: "change-me-secure-db-password"
vault_db_root_password: "change-me-secure-root-password"

# Redis Credentials
vault_redis_password: "change-me-secure-redis-password"

# Application Secrets
vault_app_key: "change-me-base64-encoded-32-byte-key"
vault_jwt_secret: "change-me-jwt-signing-secret"

# Mail Configuration
vault_mail_password: "change-me-mail-password"

# Docker Registry Credentials
vault_docker_registry_username: "gitea-user"
vault_docker_registry_password: "change-me-registry-password"

# Optional: Additional Secrets
vault_encryption_key: "change-me-encryption-key"
vault_session_secret: "change-me-session-secret"
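Strong replacements for the `change-me` placeholders above can be generated rather than invented. A sketch using `openssl` (any CSPRNG-backed tool works):

```shell
# 32 random bytes, base64-encoded: suitable for vault_app_key
openssl rand -base64 32

# 64-character hex string: suitable for passwords and signing secrets
openssl rand -hex 32
```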
@@ -1,428 +0,0 @@
#!/bin/bash

# Application Deployment Script for Custom PHP Framework
# Integrates Docker Compose with Ansible infrastructure deployment
# Usage: ./deploy-app.sh [staging|production] [options]

set -euo pipefail

# Script directory and project root
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "${SCRIPT_DIR}/../../../" && pwd)"
DEPLOYMENT_DIR="${PROJECT_ROOT}/deployment"
APPLICATIONS_DIR="${DEPLOYMENT_DIR}/applications"

# Default configuration
DEFAULT_ENV="staging"
DRY_RUN=false
SKIP_TESTS=false
SKIP_BACKUP=false
FORCE_DEPLOY=false
VERBOSE=false

# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color

# Logging functions
log() {
    echo -e "${GREEN}[$(date +'%Y-%m-%d %H:%M:%S')] INFO: $1${NC}"
}

warn() {
    echo -e "${YELLOW}[$(date +'%Y-%m-%d %H:%M:%S')] WARN: $1${NC}"
}

error() {
    echo -e "${RED}[$(date +'%Y-%m-%d %H:%M:%S')] ERROR: $1${NC}"
}

debug() {
    if [ "$VERBOSE" = true ]; then
        echo -e "${BLUE}[$(date +'%Y-%m-%d %H:%M:%S')] DEBUG: $1${NC}"
    fi
}
# Usage information
show_usage() {
    cat << EOF
Usage: $0 [environment] [options]

Environment:
    staging        Deploy to staging environment (default)
    production     Deploy to production environment

Options:
    --dry-run      Show what would be done without making changes
    --skip-tests   Skip running tests before deployment
    --skip-backup  Skip database backup (not recommended for production)
    --force        Force deployment even if validation fails
    --verbose      Enable verbose output
    -h, --help     Show this help message

Examples:
    $0 staging                      # Deploy to staging
    $0 production --skip-tests      # Deploy to production without tests
    $0 staging --dry-run --verbose  # Dry run with detailed output

EOF
}
# Parse command line arguments
parse_arguments() {
    local environment=""

    while [[ $# -gt 0 ]]; do
        case $1 in
            staging|production)
                environment="$1"
                shift
                ;;
            --dry-run)
                DRY_RUN=true
                shift
                ;;
            --skip-tests)
                SKIP_TESTS=true
                shift
                ;;
            --skip-backup)
                SKIP_BACKUP=true
                shift
                ;;
            --force)
                FORCE_DEPLOY=true
                shift
                ;;
            --verbose)
                VERBOSE=true
                shift
                ;;
            -h|--help)
                show_usage
                exit 0
                ;;
            *)
                error "Unknown argument: $1"
                show_usage
                exit 1
                ;;
        esac
    done

    # Set environment, defaulting to staging
    DEPLOY_ENV="${environment:-$DEFAULT_ENV}"
}
# Validate environment and prerequisites
validate_environment() {
    log "Validating deployment environment: $DEPLOY_ENV"

    # Check if we're in the project root
    if [[ ! -f "${PROJECT_ROOT}/docker-compose.yml" ]]; then
        error "Project root not found. Please run from the correct directory."
        exit 1
    fi

    # Validate environment-specific files exist
    local compose_file="${APPLICATIONS_DIR}/docker-compose.${DEPLOY_ENV}.yml"
    local env_file="${APPLICATIONS_DIR}/environments/.env.${DEPLOY_ENV}"

    if [[ ! -f "$compose_file" ]]; then
        error "Docker Compose overlay not found: $compose_file"
        exit 1
    fi

    if [[ ! -f "$env_file" ]]; then
        error "Environment file not found: $env_file"
        error "Copy from template: cp ${env_file}.template $env_file"
        exit 1
    fi

    # Check for required tools
    local required_tools=("docker" "docker-compose" "ansible-playbook")
    for tool in "${required_tools[@]}"; do
        if ! command -v "$tool" &> /dev/null; then
            error "Required tool not found: $tool"
            exit 1
        fi
    done

    debug "Environment validation completed"
}
# Validate environment configuration
validate_configuration() {
    log "Validating environment configuration"

    local env_file="${APPLICATIONS_DIR}/environments/.env.${DEPLOY_ENV}"

    # Check for required placeholder values
    local required_placeholders=(
        "*** REQUIRED"
    )

    for placeholder in "${required_placeholders[@]}"; do
        if grep -q "$placeholder" "$env_file"; then
            error "Environment file contains unfilled templates:"
            grep "$placeholder" "$env_file" || true
            if [ "$FORCE_DEPLOY" != true ]; then
                error "Fix configuration or use --force to proceed"
                exit 1
            else
                warn "Proceeding with incomplete configuration due to --force flag"
            fi
        fi
    done

    debug "Configuration validation completed"
}
# Run pre-deployment tests
run_tests() {
    if [ "$SKIP_TESTS" = true ]; then
        warn "Skipping tests as requested"
        return 0
    fi

    log "Running pre-deployment tests"

    cd "$PROJECT_ROOT"

    # PHP tests
    if [[ -f "vendor/bin/pest" ]]; then
        log "Running PHP tests with Pest"
        ./vendor/bin/pest --bail
    elif [[ -f "vendor/bin/phpunit" ]]; then
        log "Running PHP tests with PHPUnit"
        ./vendor/bin/phpunit --stop-on-failure
    else
        warn "No PHP test framework found"
    fi

    # JavaScript tests
    if [[ -f "package.json" ]] && command -v npm &> /dev/null; then
        log "Running JavaScript tests"
        npm test
    fi

    # Code quality checks
    if [[ -f "composer.json" ]]; then
        log "Running code style checks"
        composer cs || {
            error "Code style checks failed"
            if [ "$FORCE_DEPLOY" != true ]; then
                exit 1
            else
                warn "Proceeding despite code style issues due to --force flag"
            fi
        }
    fi

    debug "Tests completed successfully"
}
# Create database backup
create_backup() {
    if [ "$SKIP_BACKUP" = true ]; then
        warn "Skipping database backup as requested"
        return 0
    fi

    log "Creating database backup for $DEPLOY_ENV"

    local backup_dir="${PROJECT_ROOT}/storage/backups"
    local timestamp=$(date +%Y%m%d_%H%M%S)
    local backup_file="${backup_dir}/db_backup_${DEPLOY_ENV}_${timestamp}.sql"

    mkdir -p "$backup_dir"

    # Load environment variables
    set -a
    source "${APPLICATIONS_DIR}/environments/.env.${DEPLOY_ENV}"
    set +a

    if [ "$DRY_RUN" != true ]; then
        # Create backup using the running database container
        docker-compose -f "${PROJECT_ROOT}/docker-compose.yml" \
            -f "${APPLICATIONS_DIR}/docker-compose.${DEPLOY_ENV}.yml" \
            exec -T db \
            mariadb-dump -u root -p"${DB_ROOT_PASSWORD}" --all-databases > "$backup_file"

        log "Database backup created: $backup_file"
    else
        debug "DRY RUN: Would create backup at $backup_file"
    fi
}
# Build application assets
|
||||
build_assets() {
|
||||
log "Building application assets"
|
||||
|
||||
cd "$PROJECT_ROOT"
|
||||
|
||||
if [[ -f "package.json" ]] && command -v npm &> /dev/null; then
|
||||
log "Installing Node.js dependencies"
|
||||
if [ "$DRY_RUN" != true ]; then
|
||||
npm ci
|
||||
else
|
||||
debug "DRY RUN: Would run npm ci"
|
||||
fi
|
||||
|
||||
log "Building production assets"
|
||||
if [ "$DRY_RUN" != true ]; then
|
||||
npm run build
|
||||
else
|
||||
debug "DRY RUN: Would run npm run build"
|
||||
fi
|
||||
else
|
||||
warn "No package.json found, skipping asset build"
|
||||
fi
|
||||
|
||||
debug "Asset building completed"
|
||||
}
|
||||
|
||||
# Deploy infrastructure using Ansible
|
||||
deploy_infrastructure() {
|
||||
log "Deploying infrastructure with Ansible"
|
||||
|
||||
local ansible_dir="${DEPLOYMENT_DIR}/infrastructure"
|
||||
local inventory="${ansible_dir}/inventories/${DEPLOY_ENV}/hosts.yml"
|
||||
local site_playbook="${ansible_dir}/site.yml"
|
||||
|
||||
if [[ ! -f "$inventory" ]]; then
|
||||
warn "Ansible inventory not found: $inventory"
|
||||
warn "Skipping infrastructure deployment"
|
||||
return 0
|
||||
fi
|
||||
|
||||
cd "$ansible_dir"
|
||||
|
||||
# First run the main site playbook for infrastructure setup
|
||||
local ansible_cmd="ansible-playbook -i $inventory $site_playbook"
|
||||
|
||||
if [ "$DRY_RUN" = true ]; then
|
||||
ansible_cmd="$ansible_cmd --check"
|
||||
fi
|
||||
|
||||
if [ "$VERBOSE" = true ]; then
|
||||
ansible_cmd="$ansible_cmd -v"
|
||||
fi
|
||||
|
||||
debug "Running infrastructure setup: $ansible_cmd"
|
||||
|
||||
if [ "$DRY_RUN" != true ]; then
|
||||
$ansible_cmd
|
||||
log "Infrastructure setup completed"
|
||||
else
|
||||
debug "DRY RUN: Would run Ansible infrastructure setup"
|
||||
fi
|
||||
}
|
||||
|
||||
# Deploy application using Ansible
|
||||
deploy_application() {
|
||||
log "Deploying application with Ansible"
|
||||
|
||||
local ansible_dir="${DEPLOYMENT_DIR}/infrastructure"
|
||||
local inventory="${ansible_dir}/inventories/${DEPLOY_ENV}/hosts.yml"
|
||||
local app_playbook="${ansible_dir}/playbooks/deploy-application.yml"
|
||||
|
||||
if [[ ! -f "$inventory" ]]; then
|
||||
warn "Ansible inventory not found: $inventory"
|
||||
warn "Skipping application deployment"
|
||||
return 0
|
||||
fi
|
||||
|
||||
if [[ ! -f "$app_playbook" ]]; then
|
||||
error "Application deployment playbook not found: $app_playbook"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
cd "$ansible_dir"
|
||||
|
||||
# Run the application deployment playbook with proper environment variable
|
||||
local ansible_cmd="ansible-playbook -i $inventory $app_playbook -e environment=$DEPLOY_ENV"
|
||||
|
||||
if [ "$DRY_RUN" = true ]; then
|
||||
ansible_cmd="$ansible_cmd --check"
|
||||
fi
|
||||
|
||||
if [ "$VERBOSE" = true ]; then
|
||||
ansible_cmd="$ansible_cmd -v"
|
||||
fi
|
||||
|
||||
debug "Running application deployment: $ansible_cmd"
|
||||
|
||||
if [ "$DRY_RUN" != true ]; then
|
||||
$ansible_cmd
|
||||
log "Application deployment completed"
|
||||
else
|
||||
debug "DRY RUN: Would run Ansible application deployment"
|
||||
fi
|
||||
}
|
||||
|
||||
# Run post-deployment tasks
|
||||
post_deployment() {
|
||||
log "Running post-deployment tasks"
|
||||
|
||||
cd "$PROJECT_ROOT"
|
||||
|
||||
local compose_files="-f docker-compose.yml -f ${APPLICATIONS_DIR}/docker-compose.${DEPLOY_ENV}.yml"
|
||||
local env_file="--env-file ${APPLICATIONS_DIR}/environments/.env.${DEPLOY_ENV}"
|
||||
|
||||
if [ "$DRY_RUN" != true ]; then
|
||||
# Run database migrations
|
||||
log "Running database migrations"
|
||||
docker-compose $compose_files $env_file exec -T php php console.php db:migrate
|
||||
|
||||
# Clear application cache
|
||||
log "Clearing application cache"
|
||||
docker-compose $compose_files $env_file exec -T php php console.php cache:clear || true
|
||||
|
||||
# Warm up application
|
||||
log "Warming up application"
|
||||
sleep 5
|
||||
|
||||
# Run health checks
|
||||
"${SCRIPT_DIR}/health-check.sh" "$DEPLOY_ENV"
|
||||
|
||||
else
|
||||
debug "DRY RUN: Would run post-deployment tasks"
|
||||
fi
|
||||
|
||||
debug "Post-deployment tasks completed"
|
||||
}
|
||||
|
||||
# Main deployment function
|
||||
main() {
|
||||
log "Starting deployment to $DEPLOY_ENV environment"
|
||||
|
||||
if [ "$DRY_RUN" = true ]; then
|
||||
log "DRY RUN MODE - No actual changes will be made"
|
||||
fi
|
||||
|
||||
# Deployment steps
|
||||
validate_environment
|
||||
validate_configuration
|
||||
run_tests
|
||||
create_backup
|
||||
build_assets
|
||||
deploy_infrastructure
|
||||
deploy_application
|
||||
post_deployment
|
||||
|
||||
log "Deployment to $DEPLOY_ENV completed successfully!"
|
||||
|
||||
if [ "$DEPLOY_ENV" = "production" ]; then
|
||||
log "Production deployment complete. Monitor application health and performance."
|
||||
else
|
||||
log "Staging deployment complete. Ready for testing."
|
||||
fi
|
||||
}
|
||||
|
||||
# Parse arguments and run main function
|
||||
parse_arguments "$@"
|
||||
main
|
||||
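A minimal, self-contained sketch (with hypothetical inventory and playbook names) of the pattern the deploy functions above use to assemble their `ansible-playbook` command: start from a base command string and append `--check` and `-v` only when the corresponding option variable is set.

```shell
# Hypothetical stand-in values; in deploy.sh these come from parse_arguments
DRY_RUN=true
VERBOSE=true

cmd="ansible-playbook -i hosts.yml site.yml"
if [ "$DRY_RUN" = true ]; then
    cmd="$cmd --check"    # Ansible check mode: report changes without applying
fi
if [ "$VERBOSE" = true ]; then
    cmd="$cmd -v"         # verbose output
fi
echo "$cmd"
```

Because the command is built as a plain string and later expanded unquoted (`$ansible_cmd`), paths with spaces would break; the scripts above accept that trade-off for simplicity.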
@@ -1,463 +0,0 @@
#!/bin/bash

# Health Check Script for Custom PHP Framework
# Verifies application health after deployment
# Usage: ./health-check.sh [environment] [options]

set -euo pipefail

# Script configuration
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "${SCRIPT_DIR}/../../../" && pwd)"
DEPLOYMENT_DIR="${PROJECT_ROOT}/deployment"
APPLICATIONS_DIR="${DEPLOYMENT_DIR}/applications"

# Default settings
DEFAULT_ENV="staging"
VERBOSE=false
MAX_RETRIES=10
RETRY_INTERVAL=30

# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color

# Logging functions
log() {
    echo -e "${GREEN}[$(date +'%Y-%m-%d %H:%M:%S')] INFO: $1${NC}"
}

warn() {
    echo -e "${YELLOW}[$(date +'%Y-%m-%d %H:%M:%S')] WARN: $1${NC}"
}

error() {
    echo -e "${RED}[$(date +'%Y-%m-%d %H:%M:%S')] ERROR: $1${NC}"
}

debug() {
    if [ "$VERBOSE" = true ]; then
        echo -e "${BLUE}[$(date +'%Y-%m-%d %H:%M:%S')] DEBUG: $1${NC}"
    fi
}

success() {
    echo -e "${GREEN}[$(date +'%Y-%m-%d %H:%M:%S')] SUCCESS: $1${NC}"
}

# Usage information
show_usage() {
    cat << EOF
Usage: $0 [environment] [options]

Environment:
  staging              Check staging environment (default)
  production           Check production environment

Options:
  --verbose            Enable verbose output
  --max-retries N      Maximum number of health check retries (default: 10)
  --retry-interval N   Seconds between retries (default: 30)
  -h, --help           Show this help message

Examples:
  $0 staging                    # Check staging environment
  $0 production --verbose       # Check production with detailed output
  $0 staging --max-retries 5    # Check with custom retry limit

EOF
}

# Parse command line arguments
parse_arguments() {
    local environment=""

    while [[ $# -gt 0 ]]; do
        case $1 in
            staging|production)
                environment="$1"
                shift
                ;;
            --verbose)
                VERBOSE=true
                shift
                ;;
            --max-retries)
                MAX_RETRIES="$2"
                shift 2
                ;;
            --retry-interval)
                RETRY_INTERVAL="$2"
                shift 2
                ;;
            -h|--help)
                show_usage
                exit 0
                ;;
            *)
                error "Unknown argument: $1"
                show_usage
                exit 1
                ;;
        esac
    done

    # Set environment, defaulting to staging
    CHECK_ENV="${environment:-$DEFAULT_ENV}"
}

# Load environment configuration
load_environment() {
    local env_file="${APPLICATIONS_DIR}/environments/.env.${CHECK_ENV}"

    if [[ ! -f "$env_file" ]]; then
        error "Environment file not found: $env_file"
        exit 1
    fi

    debug "Loading environment from: $env_file"
    set -a
    source "$env_file"
    set +a
}

# Check Docker container health
check_container_health() {
    log "Checking Docker container health"

    local compose_files="-f ${PROJECT_ROOT}/docker-compose.yml -f ${APPLICATIONS_DIR}/docker-compose.${CHECK_ENV}.yml"
    local env_file="--env-file ${APPLICATIONS_DIR}/environments/.env.${CHECK_ENV}"

    cd "$PROJECT_ROOT"

    # Get container status
    local containers
    containers=$(docker-compose $compose_files $env_file ps --services)

    local all_healthy=true

    while IFS= read -r service; do
        if [[ -n "$service" ]]; then
            local container_status
            container_status=$(docker-compose $compose_files $env_file ps "$service" | tail -n +3 | awk '{print $4}')

            case "$container_status" in
                "Up"|"Up (healthy)")
                    success "Container $service: $container_status"
                    ;;
                *)
                    error "Container $service: $container_status"
                    all_healthy=false
                    ;;
            esac
        fi
    done <<< "$containers"

    if [ "$all_healthy" = false ]; then
        return 1
    fi

    debug "All containers are healthy"
    return 0
}

# Check HTTP endpoints
check_http_endpoints() {
    log "Checking HTTP endpoints"

    # Determine base URL based on environment
    local base_url
    if [ "$CHECK_ENV" = "production" ]; then
        base_url="https://michaelschiemer.de"
    else
        base_url="https://localhost:${APP_SSL_PORT:-443}"
    fi

    # Test endpoints
    local endpoints=(
        "/"
        "/health"
        "/api/health"
    )

    local all_ok=true

    for endpoint in "${endpoints[@]}"; do
        local url="${base_url}${endpoint}"
        debug "Testing endpoint: $url"

        local http_code
        http_code=$(curl -s -o /dev/null -w "%{http_code}" -k \
            -H "User-Agent: Mozilla/5.0 (HealthCheck)" \
            --max-time 30 \
            "$url" || echo "000")

        case "$http_code" in
            200|301|302)
                success "Endpoint $endpoint: HTTP $http_code"
                ;;
            000)
                error "Endpoint $endpoint: Connection failed"
                all_ok=false
                ;;
            *)
                error "Endpoint $endpoint: HTTP $http_code"
                all_ok=false
                ;;
        esac
    done

    if [ "$all_ok" = false ]; then
        return 1
    fi

    debug "All HTTP endpoints are responding correctly"
    return 0
}

# Check database connectivity
check_database() {
    log "Checking database connectivity"

    local compose_files="-f ${PROJECT_ROOT}/docker-compose.yml -f ${APPLICATIONS_DIR}/docker-compose.${CHECK_ENV}.yml"
    local env_file="--env-file ${APPLICATIONS_DIR}/environments/.env.${CHECK_ENV}"

    cd "$PROJECT_ROOT"

    # Test database connection through application
    local db_check
    if db_check=$(docker-compose $compose_files $env_file exec -T php \
        php console.php db:ping 2>&1); then
        success "Database connection: OK"
        debug "Database response: $db_check"
        return 0
    else
        error "Database connection: FAILED"
        error "Database error: $db_check"
        return 1
    fi
}

# Check Redis connectivity
check_redis() {
    log "Checking Redis connectivity"

    local compose_files="-f ${PROJECT_ROOT}/docker-compose.yml -f ${APPLICATIONS_DIR}/docker-compose.${CHECK_ENV}.yml"
    local env_file="--env-file ${APPLICATIONS_DIR}/environments/.env.${CHECK_ENV}"

    cd "$PROJECT_ROOT"

    # Test Redis connection
    local redis_check
    if redis_check=$(docker-compose $compose_files $env_file exec -T redis \
        redis-cli ping 2>&1); then
        if [[ "$redis_check" == *"PONG"* ]]; then
            success "Redis connection: OK"
            debug "Redis response: $redis_check"
            return 0
        else
            error "Redis connection: Unexpected response - $redis_check"
            return 1
        fi
    else
        error "Redis connection: FAILED"
        error "Redis error: $redis_check"
        return 1
    fi
}

# Check queue worker status
check_queue_worker() {
    log "Checking queue worker status"

    local compose_files="-f ${PROJECT_ROOT}/docker-compose.yml -f ${APPLICATIONS_DIR}/docker-compose.${CHECK_ENV}.yml"
    local env_file="--env-file ${APPLICATIONS_DIR}/environments/.env.${CHECK_ENV}"

    cd "$PROJECT_ROOT"

    # Check if worker container is running
    local worker_status
    worker_status=$(docker-compose $compose_files $env_file ps queue-worker | tail -n +3 | awk '{print $4}')

    case "$worker_status" in
        "Up")
            success "Queue worker: Running"

            # Check worker process inside container
            local worker_process
            if worker_process=$(docker-compose $compose_files $env_file exec -T queue-worker \
                ps aux | grep -v grep | grep worker 2>&1); then
                debug "Worker process found: $worker_process"
                return 0
            else
                warn "Queue worker container running but no worker process found"
                return 1
            fi
            ;;
        *)
            error "Queue worker: $worker_status"
            return 1
            ;;
    esac
}

# Check application performance
check_performance() {
    log "Checking application performance"

    local base_url
    if [ "$CHECK_ENV" = "production" ]; then
        base_url="https://michaelschiemer.de"
    else
        base_url="https://localhost:${APP_SSL_PORT:-443}"
    fi

    # Test response times
    local response_time
    response_time=$(curl -s -o /dev/null -w "%{time_total}" -k \
        -H "User-Agent: Mozilla/5.0 (HealthCheck)" \
        --max-time 30 \
        "${base_url}/" || echo "timeout")

    if [[ "$response_time" == "timeout" ]]; then
        error "Performance check: Request timed out"
        return 1
    fi

    local response_ms
    response_ms=$(echo "$response_time * 1000" | bc -l | cut -d. -f1)

    # Performance thresholds (milliseconds)
    local warning_threshold=2000
    local error_threshold=5000

    if [ "$response_ms" -lt "$warning_threshold" ]; then
        success "Performance: ${response_ms}ms (Good)"
        return 0
    elif [ "$response_ms" -lt "$error_threshold" ]; then
        warn "Performance: ${response_ms}ms (Slow)"
        return 0
    else
        error "Performance: ${response_ms}ms (Too slow)"
        return 1
    fi
}

# Check SSL certificate
check_ssl() {
    if [ "$CHECK_ENV" != "production" ]; then
        debug "Skipping SSL check for non-production environment"
        return 0
    fi

    log "Checking SSL certificate"

    local domain="michaelschiemer.de"
    local ssl_info

    if ssl_info=$(echo | openssl s_client -connect "${domain}:443" -servername "$domain" 2>/dev/null | \
        openssl x509 -noout -dates 2>/dev/null); then

        local expiry_date
        expiry_date=$(echo "$ssl_info" | grep "notAfter" | cut -d= -f2)

        local expiry_timestamp
        expiry_timestamp=$(date -d "$expiry_date" +%s 2>/dev/null || echo "0")

        local current_timestamp
        current_timestamp=$(date +%s)

        local days_until_expiry
        days_until_expiry=$(( (expiry_timestamp - current_timestamp) / 86400 ))

        if [ "$days_until_expiry" -gt 30 ]; then
            success "SSL certificate: Valid (expires in $days_until_expiry days)"
            return 0
        elif [ "$days_until_expiry" -gt 7 ]; then
            warn "SSL certificate: Expires soon (in $days_until_expiry days)"
            return 0
        else
            error "SSL certificate: Critical expiry (in $days_until_expiry days)"
            return 1
        fi
    else
        error "SSL certificate: Could not retrieve certificate information"
        return 1
    fi
}

# Comprehensive health check with retries
run_health_checks() {
    log "Starting comprehensive health check for $CHECK_ENV environment"

    local checks=(
        "check_container_health"
        "check_database"
        "check_redis"
        "check_queue_worker"
        "check_http_endpoints"
        "check_performance"
        "check_ssl"
    )

    local attempt=1
    local all_passed=false

    while [ $attempt -le $MAX_RETRIES ] && [ "$all_passed" = false ]; do
        log "Health check attempt $attempt of $MAX_RETRIES"

        local failed_checks=()

        for check in "${checks[@]}"; do
            debug "Running check: $check"
            if ! $check; then
                failed_checks+=("$check")
            fi
        done

        if [ ${#failed_checks[@]} -eq 0 ]; then
            all_passed=true
            success "All health checks passed!"
        else
            warn "Failed checks: ${failed_checks[*]}"

            if [ $attempt -lt $MAX_RETRIES ]; then
                log "Waiting $RETRY_INTERVAL seconds before retry..."
                sleep $RETRY_INTERVAL
            fi
        fi

        ((attempt++))
    done

    if [ "$all_passed" = false ]; then
        error "Health checks failed after $MAX_RETRIES attempts"
        return 1
    fi

    return 0
}

# Main function
main() {
    log "Starting health check for $CHECK_ENV environment"

    load_environment

    if run_health_checks; then
        success "Health check completed successfully for $CHECK_ENV environment"
        log "Application is healthy and ready to serve traffic"
        return 0
    else
        error "Health check failed for $CHECK_ENV environment"
        error "Please review the application status and fix any issues"
        return 1
    fi
}

# Parse arguments and run main function
parse_arguments "$@"
main
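The retry logic in `run_health_checks` above boils down to a simple bounded loop with an early exit. A minimal standalone sketch, using a stand-in check that succeeds on the second attempt instead of the real check functions:

```shell
MAX_RETRIES=3
attempt=1
all_passed=false

while [ "$attempt" -le "$MAX_RETRIES" ] && [ "$all_passed" = false ]; do
    # Stand-in for running the real checks and collecting failures:
    # here we simply pretend everything passes from attempt 2 onward.
    if [ "$attempt" -ge 2 ]; then
        all_passed=true
    fi
    attempt=$((attempt + 1))
done

echo "passed=$all_passed after $((attempt - 1)) attempts"
```

The loop condition checks both the attempt counter and the success flag, so a passing run exits immediately instead of burning the remaining retries.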
@@ -1,459 +0,0 @@
#!/bin/bash

# Deployment CLI - Unified Interface for Custom PHP Framework Deployment
# Provides easy access to all deployment tools and workflows
# Domain: michaelschiemer.de | Email: kontakt@michaelschiemer.de | PHP: 8.4

set -euo pipefail

# Script configuration
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "${SCRIPT_DIR}/../" && pwd)"

# Colors
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
PURPLE='\033[0;35m'
CYAN='\033[0;36m'
WHITE='\033[1;37m'
NC='\033[0m'

# Logging
log()     { echo -e "${GREEN}[CLI] ✅ $1${NC}"; }
warn()    { echo -e "${YELLOW}[CLI] ⚠️ $1${NC}"; }
error()   { echo -e "${RED}[CLI] ❌ $1${NC}"; }
info()    { echo -e "${BLUE}[CLI] ℹ️ $1${NC}"; }
success() { echo -e "${WHITE}[CLI] 🎉 $1${NC}"; }

# Show main help
show_help() {
    cat << 'EOF'

██████╗ ███████╗██████╗ ██╗ ██████╗ ██╗ ██╗ ██████╗██╗ ██╗
██╔══██╗██╔════╝██╔══██╗██║ ██╔═══██╗╚██╗ ██╔╝ ██╔════╝██║ ██║
██║ ██║█████╗ ██████╔╝██║ ██║ ██║ ╚████╔╝ ██║ ██║ ██║
██║ ██║██╔══╝ ██╔═══╝ ██║ ██║ ██║ ╚██╔╝ ██║ ██║ ██║
██████╔╝███████╗██║ ███████╗╚██████╔╝ ██║ ╚██████╗███████╗██║
╚═════╝ ╚══════╝╚═╝ ╚══════╝ ╚═════╝ ╚═╝ ╚═════╝╚══════╝╚═╝

EOF
    cat << EOF
${WHITE}Custom PHP Framework Deployment CLI${NC}
${CYAN}Domain: michaelschiemer.de | Email: kontakt@michaelschiemer.de | PHP: 8.4${NC}

${WHITE}Usage:${NC} $0 <command> [options]

${YELLOW}🚀 QUICK START COMMANDS${NC}
  ${GREEN}wizard${NC}                Interactive setup wizard for new deployments
  ${GREEN}production${NC}            One-command production setup and deployment
  ${GREEN}deploy <env>${NC}          Deploy to specific environment (dev/staging/prod)
  ${GREEN}status <env>${NC}          Check deployment status and health

${YELLOW}📋 SETUP & CONFIGURATION${NC}
  ${GREEN}setup${NC}                 Run initial project setup (dependencies, configs)
  ${GREEN}config <env>${NC}          Configure environment settings
  ${GREEN}validate <env>${NC}        Validate environment configuration
  ${GREEN}credentials <env>${NC}     Generate secure credentials for environment

${YELLOW}🔐 SECURITY TOOLS${NC}
  ${GREEN}password [length]${NC}     Generate secure password
  ${GREEN}ssh-key [name]${NC}        Generate SSH key pair for deployment
  ${GREEN}security-scan${NC}         Run security vulnerability scan
  ${GREEN}security-report <env>${NC} Generate comprehensive security report

${YELLOW}🐳 DOCKER & INFRASTRUCTURE${NC}
  ${GREEN}build <env>${NC}           Build Docker containers for environment
  ${GREEN}up <env>${NC}              Start Docker containers
  ${GREEN}down <env>${NC}            Stop Docker containers
  ${GREEN}logs <env> [service] [--follow]${NC} Show Docker container logs
    ${CYAN}$0 logs production${NC}                # All services (last 50 lines)
    ${CYAN}$0 logs production php${NC}            # PHP service (last 100 lines)
    ${CYAN}$0 logs production nginx --follow${NC} # Follow nginx logs live
  ${GREEN}shell <env>${NC}           Open shell in PHP container

${YELLOW}🗄️ DATABASE OPERATIONS${NC}
  ${GREEN}db:migrate <env>${NC}      Run database migrations
  ${GREEN}db:status <env>${NC}       Show migration status
  ${GREEN}db:backup <env>${NC}       Create database backup
  ${GREEN}db:restore <env> <file>${NC} Restore from backup

${YELLOW}🔄 MAINTENANCE${NC}
  ${GREEN}update <env>${NC}          Update deployment to latest code
  ${GREEN}rollback <env>${NC}        Rollback to previous deployment
  ${GREEN}cleanup${NC}               Clean up old files and containers
  ${GREEN}health <env>${NC}          Run comprehensive health check

${YELLOW}ℹ️ INFORMATION${NC}
  ${GREEN}info <env>${NC}            Show environment information
  ${GREEN}list${NC}                  List all available environments
  ${GREEN}version${NC}               Show version information
  ${GREEN}help [command]${NC}        Show help for specific command

${WHITE}Examples:${NC}
  ${CYAN}$0 wizard${NC}                        # Interactive setup for new deployment
  ${CYAN}$0 production --auto-yes${NC}         # Automated production setup
  ${CYAN}$0 deploy staging --verbose${NC}      # Deploy to staging with verbose output
  ${CYAN}$0 security-scan${NC}                 # Run security vulnerability scan
  ${CYAN}$0 credentials production${NC}        # Generate production credentials
  ${CYAN}$0 logs production php --follow${NC}  # Follow PHP logs in production

${WHITE}Environment Variables:${NC}
  ${YELLOW}DEPLOY_ENV${NC}       Default environment (development/staging/production)
  ${YELLOW}DEPLOY_VERBOSE${NC}   Enable verbose output (true/false)
  ${YELLOW}DEPLOY_DRY_RUN${NC}   Enable dry-run mode (true/false)

${WHITE}Configuration Files:${NC}
  ${YELLOW}deployment/applications/environments/${NC}   Environment configurations
  ${YELLOW}deployment/infrastructure/inventories/${NC}  Ansible inventory files
  ${YELLOW}deployment/.credentials/${NC}                Secure credential storage

EOF
}

# Execute command
execute_command() {
    local command="$1"
    shift
    local args=("$@")

    case $command in
        # Quick start commands
        "wizard")
            info "Starting setup wizard..."
            "${SCRIPT_DIR}/setup-wizard.sh" "${args[@]}"
            ;;

        "production")
            info "Starting one-command production setup..."
            "${SCRIPT_DIR}/setup-production.sh" "${args[@]}"
            ;;

        "deploy")
            local env="${args[0]:-staging}"
            info "Deploying to $env environment..."
            "${SCRIPT_DIR}/deploy.sh" "$env" "${args[@]:1}"
            ;;

        "status")
            local env="${args[0]:-staging}"
            info "Checking status for $env environment..."
            "${SCRIPT_DIR}/deploy.sh" "$env" --dry-run --verbose "${args[@]:1}"
            ;;

        # Setup & configuration
        "setup")
            info "Running initial project setup..."
            "${SCRIPT_DIR}/setup.sh" "${args[@]}"
            ;;

        "config")
            local env="${args[0]:-production}"
            info "Configuring $env environment..."
            "${SCRIPT_DIR}/lib/config-manager.sh" apply-config "$env" \
                "${args[1]:-$env.michaelschiemer.de}" \
                "${args[2]:-kontakt@michaelschiemer.de}" \
                "${args[3]:-}"
            ;;

        "validate")
            local env="${args[0]:-production}"
            info "Validating $env configuration..."
            "${SCRIPT_DIR}/lib/config-manager.sh" validate "$env"
            ;;

        "credentials")
            local env="${args[0]:-production}"
            info "Generating credentials for $env environment..."
            "${SCRIPT_DIR}/lib/config-manager.sh" generate-credentials "$env"
            ;;

        # Security tools
        "password")
            local length="${args[0]:-32}"
            "${SCRIPT_DIR}/lib/security-tools.sh" generate-password "$length"
            ;;

        "ssh-key")
            local name="${args[0]:-production}"
            "${SCRIPT_DIR}/lib/security-tools.sh" generate-ssh "$name"
            ;;

        "security-scan")
            "${SCRIPT_DIR}/lib/security-tools.sh" scan "${args[0]:-$SCRIPT_DIR}"
            ;;

        "security-report")
            local env="${args[0]:-production}"
            "${SCRIPT_DIR}/lib/security-tools.sh" report "$env"
            ;;

        # Docker & infrastructure
        "build")
            local env="${args[0]:-development}"
            info "Building Docker containers for $env..."
            cd "$PROJECT_ROOT"
            docker-compose -f docker-compose.yml \
                -f "deployment/applications/docker-compose.${env}.yml" \
                build "${args[@]:1}"
            ;;

        "up")
            local env="${args[0]:-development}"
            info "Starting Docker containers for $env..."
            cd "$PROJECT_ROOT"
            docker-compose -f docker-compose.yml \
                -f "deployment/applications/docker-compose.${env}.yml" \
                up -d "${args[@]:1}"
            ;;

        "down")
            local env="${args[0]:-development}"
            info "Stopping Docker containers for $env..."
            cd "$PROJECT_ROOT"
            docker-compose -f docker-compose.yml \
                -f "deployment/applications/docker-compose.${env}.yml" \
                down "${args[@]:1}"
            ;;

        "logs")
            local env="${args[0]:-development}"
            local service="${args[1]:-}"
            local follow="${args[2]:-}"

            cd "$PROJECT_ROOT"

            if [[ -z "$service" ]]; then
                info "Showing logs for all services in $env environment..."
                echo -e "${YELLOW}Available services: web, php, db, redis, queue-worker${NC}"
                echo -e "${YELLOW}Usage: $0 logs $env [service] [--follow]${NC}"
                echo -e "${YELLOW}Example: $0 logs production php --follow${NC}"
                echo ""

                docker-compose -f docker-compose.yml \
                    -f "deployment/applications/docker-compose.${env}.yml" \
                    logs --tail=50
            else
                if [[ "$follow" == "--follow" || "$follow" == "-f" ]]; then
                    info "Following logs for $service in $env environment (Press Ctrl+C to exit)..."
                    docker-compose -f docker-compose.yml \
                        -f "deployment/applications/docker-compose.${env}.yml" \
                        logs -f "$service"
                else
                    info "Showing recent logs for $service in $env environment..."
                    docker-compose -f docker-compose.yml \
                        -f "deployment/applications/docker-compose.${env}.yml" \
                        logs --tail=100 "$service"
                fi
            fi
            ;;

        "shell")
            local env="${args[0]:-development}"
            local service="${args[1]:-php}"
            info "Opening shell in $service container ($env)..."
            cd "$PROJECT_ROOT"
            docker-compose -f docker-compose.yml \
                -f "deployment/applications/docker-compose.${env}.yml" \
                exec "$service" bash
            ;;

        # Database operations
        "db:migrate")
            local env="${args[0]:-development}"
            info "Running database migrations for $env..."
            cd "$PROJECT_ROOT"
            docker-compose -f docker-compose.yml \
                -f "deployment/applications/docker-compose.${env}.yml" \
                exec -T php php console.php db:migrate
            ;;

        "db:status")
            local env="${args[0]:-development}"
            info "Showing migration status for $env..."
            cd "$PROJECT_ROOT"
            docker-compose -f docker-compose.yml \
                -f "deployment/applications/docker-compose.${env}.yml" \
                exec -T php php console.php db:status
            ;;

        "db:backup")
            local env="${args[0]:-development}"
            local backup_file="backup_${env}_$(date +%Y%m%d_%H%M%S).sql"
            info "Creating database backup for $env: $backup_file"
            cd "$PROJECT_ROOT"
            # Run mysqldump through a shell inside the container so that
            # DB_ROOT_PASSWORD is expanded from the container environment
            # (without "sh -c", the variable reference is passed literally).
            docker-compose -f docker-compose.yml \
                -f "deployment/applications/docker-compose.${env}.yml" \
                exec -T db sh -c 'mysqldump -u root -p"$DB_ROOT_PASSWORD" --all-databases' > "$backup_file"
            success "Database backup created: $backup_file"
            ;;

        "db:restore")
            local env="${args[0]:-development}"
            local backup_file="${args[1]:-}"
            if [[ -z "$backup_file" ]]; then
                error "Backup file required for restore"
                exit 1
            fi
            warn "Restoring database from: $backup_file"
            printf "Are you sure? [y/N]: "
            read -r confirm
            if [[ $confirm =~ ^[Yy]$ ]]; then
                cd "$PROJECT_ROOT"
                # As with db:backup, expand the password inside the container
                docker-compose -f docker-compose.yml \
                    -f "deployment/applications/docker-compose.${env}.yml" \
                    exec -T db sh -c 'mysql -u root -p"$DB_ROOT_PASSWORD"' < "$backup_file"
                success "Database restored from: $backup_file"
            fi
            ;;

        # Maintenance
        "update")
            local env="${args[0]:-staging}"
            info "Updating $env deployment to latest code..."
            "${SCRIPT_DIR}/deploy.sh" "$env" --skip-backup "${args[@]:1}"
            ;;

        "rollback")
            local env="${args[0]:-staging}"
            warn "Rolling back $env deployment..."
            printf "Are you sure you want to rollback? [y/N]: "
            read -r confirm
            if [[ $confirm =~ ^[Yy]$ ]]; then
                # Implementation depends on your rollback strategy
                info "Rollback functionality - implementation needed"
            fi
            ;;

        "cleanup")
            info "Cleaning up old Docker images and containers..."
            docker system prune -f
            docker image prune -f
            "${SCRIPT_DIR}/lib/security-tools.sh" cleanup "${args[0]:-30}"
            success "Cleanup completed"
            ;;

        "health")
            local env="${args[0]:-development}"
            info "Running health check for $env environment..."

            # Check Docker containers
            cd "$PROJECT_ROOT"
            echo -e "\n${CYAN}Docker Container Status:${NC}"
            docker-compose -f docker-compose.yml \
                -f "deployment/applications/docker-compose.${env}.yml" ps

            # Check application health
            echo -e "\n${CYAN}Application Health:${NC}"
            if command -v curl >/dev/null 2>&1; then
                local domain
                case $env in
                    production) domain="https://michaelschiemer.de" ;;
                    staging)    domain="https://staging.michaelschiemer.de" ;;
                    *)          domain="https://localhost" ;;
                esac

                if curl -sf "$domain/api/health" >/dev/null 2>&1; then
                    success "Application health check passed"
                else
                    warn "Application health check failed"
                fi
            fi
            ;;

        # Information
        "info")
            local env="${args[0]:-production}"
            "${SCRIPT_DIR}/lib/config-manager.sh" info "$env"
            ;;

        "list")
            "${SCRIPT_DIR}/lib/config-manager.sh" list
            ;;

        "version")
            cat << EOF
${WHITE}Custom PHP Framework Deployment CLI${NC}
Version: 1.0.0
Domain: michaelschiemer.de
Email: kontakt@michaelschiemer.de
PHP Version: 8.4
Built: $(date +'%Y-%m-%d')
EOF
            ;;

        "help")
            if [[ ${#args[@]} -gt 0 ]]; then
                # Show specific command help
                local help_command="${args[0]}"
                case $help_command in
                    "wizard")
                        "${SCRIPT_DIR}/setup-wizard.sh" --help || true
                        ;;
                    "production")
                        "${SCRIPT_DIR}/setup-production.sh" --help || true
                        ;;
                    "deploy")
                        "${SCRIPT_DIR}/deploy.sh" --help || true
                        ;;
                    *)
                        echo "No specific help available for: $help_command"
                        echo "Run '$0 help' for general help"
                        ;;
                esac
            else
                show_help
            fi
            ;;

        *)
            error "Unknown command: $command"
            echo
            echo "Available commands:"
            echo "  wizard, production, deploy, status, setup, config, validate"
            echo "  credentials, password, ssh-key, security-scan, security-report"
            echo "  build, up, down, logs, shell, db:migrate, db:status, db:backup"
|
||||
echo " update, rollback, cleanup, health, info, list, version, help"
|
||||
echo
|
||||
echo "Run '$0 help' for detailed usage information."
|
||||
exit 1
|
||||
;;
|
||||
esac
|
||||
}
|
||||
|
||||
# Main execution
|
||||
main() {
|
||||
if [[ $# -eq 0 ]]; then
|
||||
show_help
|
||||
exit 0
|
||||
fi
|
||||
|
||||
local command="$1"
|
||||
shift
|
||||
|
||||
# Apply environment variables
|
||||
if [[ -n "${DEPLOY_VERBOSE:-}" ]]; then
|
||||
set -x
|
||||
fi
|
||||
|
||||
execute_command "$command" "$@"
|
||||
}
|
||||
|
||||
# Error handling
|
||||
cleanup() {
|
||||
local exit_code=$?
|
||||
if [[ $exit_code -ne 0 ]]; then
|
||||
error "Command failed with exit code: $exit_code"
|
||||
echo
|
||||
echo -e "${CYAN}For help, run: $0 help${NC}"
|
||||
echo -e "${CYAN}For verbose output, set: DEPLOY_VERBOSE=true${NC}"
|
||||
fi
|
||||
}
|
||||
|
||||
trap cleanup EXIT
|
||||
|
||||
# Execute if run directly
|
||||
if [[ "${BASH_SOURCE[0]}" == "${0}" ]]; then
|
||||
main "$@"
|
||||
fi
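The dispatcher above routes every subcommand through a single `case` inside `execute_command`, with the remaining CLI words collected into an `args` array. A minimal standalone sketch of that pattern (the `demo`/`greet` names here are hypothetical, not part of the deploy CLI):

```shell
#!/usr/bin/env bash
# Minimal sketch of the case-based subcommand dispatcher pattern above.
set -euo pipefail

execute_command() {
    local command="$1"; shift
    local args=("$@")    # remaining words become positional args for the arm
    case $command in
        version) echo "demo 1.0.0" ;;
        greet)   echo "Hello, ${args[0]:-world}" ;;
        *)       echo "Unknown command: $command" >&2; return 1 ;;
    esac
}

execute_command version        # → demo 1.0.0
execute_command greet deploy   # → Hello, deploy
```

Defaulting with `${args[0]:-world}` is what lets arms like `"db:backup"` accept optional arguments while running under `set -u`.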
@@ -1,325 +0,0 @@
#!/bin/bash
#
# Production Deployment Script
# Deploys pre-built container images to the production environment
#
# Usage: ./deploy-production.sh <IMAGE_TAG> [OPTIONS]
#
# Options:
#   --cdn-update                 Update CDN configuration after deployment
#   --no-backup                  Skip backup creation
#   --retention-days N           Set backup retention days (default: 30)
#   --domain DOMAIN              Override domain name (default: michaelschiemer.de)
#   --vault-password-file FILE   Specify vault password file
#   --help                       Show this help message

set -euo pipefail

# Script directory
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
INFRA_DIR="${SCRIPT_DIR}/infrastructure"

# Default values
DEFAULT_DOMAIN="michaelschiemer.de"
DEFAULT_RETENTION_DAYS="30"
ENVIRONMENT="production"

# Initialize variables (honor an IMAGE_TAG environment override if set)
IMAGE_TAG="${IMAGE_TAG:-}"
DOMAIN_NAME="$DEFAULT_DOMAIN"
CDN_UPDATE="false"
BACKUP_ENABLED="true"
BACKUP_RETENTION_DAYS="$DEFAULT_RETENTION_DAYS"
VAULT_PASSWORD_FILE=""
EXTRA_VARS=""

# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color

# Logging functions
log_info() {
    echo -e "${BLUE}[INFO]${NC} $1" >&2
}

log_success() {
    echo -e "${GREEN}[SUCCESS]${NC} $1" >&2
}

log_warning() {
    echo -e "${YELLOW}[WARNING]${NC} $1" >&2
}

log_error() {
    echo -e "${RED}[ERROR]${NC} $1" >&2
}

# Help function
show_help() {
    cat << EOF
Production Deployment Script for Custom PHP Framework

USAGE:
    $0 <IMAGE_TAG> [OPTIONS]

ARGUMENTS:
    IMAGE_TAG                    Container image tag to deploy (required)
                                 Must NOT be 'latest' for production deployments

OPTIONS:
    --cdn-update                 Update CDN configuration after deployment
    --no-backup                  Skip backup creation before deployment
    --retention-days N           Set backup retention days (default: $DEFAULT_RETENTION_DAYS)
    --domain DOMAIN              Override domain name (default: $DEFAULT_DOMAIN)
    --vault-password-file FILE   Specify vault password file path
    --help                       Show this help message

EXAMPLES:
    # Deploy version 1.2.3 to production
    $0 1.2.3

    # Deploy with CDN update
    $0 1.2.3 --cdn-update

    # Deploy without backup
    $0 1.2.3 --no-backup

    # Deploy with custom retention period
    $0 1.2.3 --retention-days 60

ENVIRONMENT VARIABLES:
    ANSIBLE_VAULT_PASSWORD_FILE  Vault password file (overrides --vault-password-file)
    IMAGE_TAG                    Image tag to deploy (overrides first argument)
    DOMAIN_NAME                  Domain name (overrides --domain)
    CDN_UPDATE                   Enable CDN update (overrides --cdn-update)
    BACKUP_ENABLED               Enable/disable backup (overrides --no-backup)
    BACKUP_RETENTION_DAYS        Backup retention days (overrides --retention-days)

REQUIREMENTS:
    - Ansible 2.9+
    - community.docker collection
    - SSH access to production server
    - Vault password file or ANSIBLE_VAULT_PASSWORD_FILE environment variable
EOF
}

# Parse command line arguments
parse_args() {
    if [[ $# -eq 0 ]]; then
        log_error "No arguments provided"
        show_help
        exit 1
    fi

    while [[ $# -gt 0 ]]; do
        case $1 in
            --help|-h)
                show_help
                exit 0
                ;;
            --cdn-update)
                CDN_UPDATE="true"
                shift
                ;;
            --no-backup)
                BACKUP_ENABLED="false"
                shift
                ;;
            --retention-days)
                if [[ -z "${2:-}" ]] || [[ "$2" =~ ^-- ]]; then
                    log_error "--retention-days requires a number"
                    exit 1
                fi
                BACKUP_RETENTION_DAYS="$2"
                shift 2
                ;;
            --domain)
                if [[ -z "${2:-}" ]] || [[ "$2" =~ ^-- ]]; then
                    log_error "--domain requires a domain name"
                    exit 1
                fi
                DOMAIN_NAME="$2"
                shift 2
                ;;
            --vault-password-file)
                if [[ -z "${2:-}" ]] || [[ "$2" =~ ^-- ]]; then
                    log_error "--vault-password-file requires a file path"
                    exit 1
                fi
                VAULT_PASSWORD_FILE="$2"
                shift 2
                ;;
            -*)
                log_error "Unknown option: $1"
                show_help
                exit 1
                ;;
            *)
                if [[ -z "$IMAGE_TAG" ]]; then
                    IMAGE_TAG="$1"
                else
                    log_error "Multiple positional arguments provided. Only IMAGE_TAG is expected."
                    show_help
                    exit 1
                fi
                shift
                ;;
        esac
    done
}

# Validate environment and requirements
validate_environment() {
    log_info "Validating deployment environment..."

    # Check for required IMAGE_TAG (set via argument or the IMAGE_TAG
    # environment variable, which the initialization above preserves)
    if [[ -z "$IMAGE_TAG" ]]; then
        log_error "IMAGE_TAG is required"
        show_help
        exit 1
    fi

    # Validate image tag for production
    if [[ "$IMAGE_TAG" == "latest" ]]; then
        log_error "Production deployments cannot use 'latest' tag"
        exit 1
    fi

    # Override with environment variables if set
    DOMAIN_NAME="${DOMAIN_NAME:-$DEFAULT_DOMAIN}"
    CDN_UPDATE="${CDN_UPDATE:-false}"
    BACKUP_ENABLED="${BACKUP_ENABLED:-true}"
    BACKUP_RETENTION_DAYS="${BACKUP_RETENTION_DAYS:-$DEFAULT_RETENTION_DAYS}"

    # Check if ansible is available
    if ! command -v ansible-playbook &> /dev/null; then
        log_error "ansible-playbook not found. Please install Ansible."
        exit 1
    fi

    # Check vault password file
    if [[ -n "${ANSIBLE_VAULT_PASSWORD_FILE:-}" ]]; then
        VAULT_PASSWORD_FILE="$ANSIBLE_VAULT_PASSWORD_FILE"
    fi

    if [[ -z "$VAULT_PASSWORD_FILE" ]]; then
        log_warning "No vault password file specified. Ansible will prompt for vault password."
    elif [[ ! -f "$VAULT_PASSWORD_FILE" ]]; then
        log_error "Vault password file not found: $VAULT_PASSWORD_FILE"
        exit 1
    fi

    # Check infrastructure directory
    if [[ ! -d "$INFRA_DIR" ]]; then
        log_error "Infrastructure directory not found: $INFRA_DIR"
        exit 1
    fi

    # Check inventory file
    local inventory_file="${INFRA_DIR}/inventories/production/hosts.yml"
    if [[ ! -f "$inventory_file" ]]; then
        log_error "Production inventory not found: $inventory_file"
        exit 1
    fi

    # Check playbook file
    local playbook_file="${INFRA_DIR}/playbooks/deploy-application.yml"
    if [[ ! -f "$playbook_file" ]]; then
        log_error "Deployment playbook not found: $playbook_file"
        exit 1
    fi

    log_success "Environment validation complete"
}

# Build extra variables for ansible
build_extra_vars() {
    EXTRA_VARS="-e IMAGE_TAG=$IMAGE_TAG"
    EXTRA_VARS+=" -e DOMAIN_NAME=$DOMAIN_NAME"
    EXTRA_VARS+=" -e CDN_UPDATE=$CDN_UPDATE"
    EXTRA_VARS+=" -e BACKUP_ENABLED=$BACKUP_ENABLED"
    EXTRA_VARS+=" -e BACKUP_RETENTION_DAYS=$BACKUP_RETENTION_DAYS"
    EXTRA_VARS+=" -e deploy_environment=$ENVIRONMENT"

    log_info "Deployment configuration:"
    log_info "  Image Tag: $IMAGE_TAG"
    log_info "  Domain: $DOMAIN_NAME"
    log_info "  CDN Update: $CDN_UPDATE"
    log_info "  Backup Enabled: $BACKUP_ENABLED"
    log_info "  Backup Retention: $BACKUP_RETENTION_DAYS days"
}

# Execute deployment
run_deployment() {
    log_info "Starting production deployment..."

    local ansible_cmd="ansible-playbook"
    local inventory="${INFRA_DIR}/inventories/production/hosts.yml"
    local playbook="${INFRA_DIR}/playbooks/deploy-application.yml"

    # Build ansible command
    local cmd="$ansible_cmd -i $inventory $playbook $EXTRA_VARS"

    # Add vault password file if specified
    if [[ -n "$VAULT_PASSWORD_FILE" ]]; then
        cmd+=" --vault-password-file $VAULT_PASSWORD_FILE"
    fi

    # Change to infrastructure directory
    cd "$INFRA_DIR"

    log_info "Executing: $cmd"

    # Run deployment
    if eval "$cmd"; then
        log_success "Deployment completed successfully!"
        log_success "Application is available at: https://$DOMAIN_NAME"
        return 0
    else
        log_error "Deployment failed!"
        return 1
    fi
}

# Cleanup function
cleanup() {
    local exit_code=$?
    if [[ $exit_code -ne 0 ]]; then
        log_error "Deployment failed with exit code: $exit_code"
        log_info "Check the logs above for details"
        log_info "You may need to run rollback if the deployment was partially successful"
    fi
    exit $exit_code
}

# Main execution
main() {
    # Set trap for cleanup
    trap cleanup EXIT

    # Parse command line arguments
    parse_args "$@"

    # Validate environment
    validate_environment

    # Build extra variables
    build_extra_vars

    # Run deployment
    run_deployment

    log_success "Production deployment completed successfully!"
}

# Execute main function if script is run directly
if [[ "${BASH_SOURCE[0]}" == "${0}" ]]; then
    main "$@"
fi
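`parse_args` above guards every value-taking option by rejecting a missing or flag-shaped second word before consuming it with `shift 2`. That pattern can be exercised in isolation; a minimal sketch (the `parse` function and `RETENTION_DAYS` variable here are illustrative, not part of this script):

```shell
#!/usr/bin/env bash
# Standalone sketch of the "--option VALUE" validation pattern from parse_args.
set -euo pipefail

RETENTION_DAYS=30

parse() {
    while [[ $# -gt 0 ]]; do
        case $1 in
            --retention-days)
                # Reject a missing value or a following flag mistaken for a value
                if [[ -z "${2:-}" ]] || [[ "$2" =~ ^-- ]]; then
                    echo "--retention-days requires a number" >&2
                    return 1
                fi
                RETENTION_DAYS="$2"
                shift 2
                ;;
            *)
                shift
                ;;
        esac
    done
}

parse --retention-days 60
echo "RETENTION_DAYS=$RETENTION_DAYS"   # → RETENTION_DAYS=60
```

The `"${2:-}"` expansion is what keeps the check safe under `set -u` when the option is the last word on the command line.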
@@ -1,898 +0,0 @@
#!/bin/bash
|
||||
|
||||
# Main Deployment Orchestration Script for Custom PHP Framework
|
||||
# Coordinates infrastructure (Ansible) and application (Docker Compose) deployment
|
||||
# Domain: michaelschiemer.de | Email: kontakt@michaelschiemer.de | PHP: 8.4
|
||||
# Usage: ./deploy.sh [environment] [options]
|
||||
|
||||
set -euo pipefail
|
||||
|
||||
# Script configuration
|
||||
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
|
||||
PROJECT_ROOT="$(cd "${SCRIPT_DIR}/../" && pwd)"
|
||||
DEPLOYMENT_DIR="${SCRIPT_DIR}"
|
||||
INFRASTRUCTURE_DIR="${DEPLOYMENT_DIR}/infrastructure"
|
||||
APPLICATIONS_DIR="${DEPLOYMENT_DIR}/applications"
|
||||
LIB_DIR="${DEPLOYMENT_DIR}/lib"
|
||||
|
||||
# Load deployment libraries
|
||||
if [[ -f "${LIB_DIR}/config-manager.sh" ]]; then
|
||||
source "${LIB_DIR}/config-manager.sh"
|
||||
fi
|
||||
|
||||
if [[ -f "${LIB_DIR}/security-tools.sh" ]]; then
|
||||
source "${LIB_DIR}/security-tools.sh"
|
||||
fi
|
||||
|
||||
# Default configuration
|
||||
DEFAULT_ENV="staging"
|
||||
INFRASTRUCTURE_ONLY=false
|
||||
APPLICATION_ONLY=false
|
||||
DRY_RUN=false
|
||||
SKIP_TESTS=false
|
||||
SKIP_BACKUP=false
|
||||
FORCE_DEPLOY=false
|
||||
VERBOSE=false
|
||||
INTERACTIVE=true
|
||||
|
||||
# Colors for output
|
||||
RED='\033[0;31m'
|
||||
GREEN='\033[0;32m'
|
||||
YELLOW='\033[1;33m'
|
||||
BLUE='\033[0;34m'
|
||||
PURPLE='\033[0;35m'
|
||||
CYAN='\033[0;36m'
|
||||
WHITE='\033[1;37m'
|
||||
NC='\033[0m' # No Color
|
||||
|
||||
# Logging functions with improved formatting
|
||||
log() {
|
||||
echo -e "${GREEN}[$(date +'%Y-%m-%d %H:%M:%S')] ✅ INFO: $1${NC}"
|
||||
}
|
||||
|
||||
warn() {
|
||||
echo -e "${YELLOW}[$(date +'%Y-%m-%d %H:%M:%S')] ⚠️ WARN: $1${NC}"
|
||||
}
|
||||
|
||||
error() {
|
||||
echo -e "${RED}[$(date +'%Y-%m-%d %H:%M:%S')] ❌ ERROR: $1${NC}"
|
||||
}
|
||||
|
||||
debug() {
|
||||
if [ "$VERBOSE" = true ]; then
|
||||
echo -e "${BLUE}[$(date +'%Y-%m-%d %H:%M:%S')] 🔍 DEBUG: $1${NC}"
|
||||
fi
|
||||
}
|
||||
|
||||
success() {
|
||||
echo -e "${WHITE}[$(date +'%Y-%m-%d %H:%M:%S')] 🎉 SUCCESS: $1${NC}"
|
||||
}
|
||||
|
||||
section() {
|
||||
echo -e "\n${PURPLE}================================${NC}"
|
||||
echo -e "${PURPLE}$1${NC}"
|
||||
echo -e "${PURPLE}================================${NC}\n"
|
||||
}
|
||||
|
||||
# Usage information
|
||||
show_usage() {
|
||||
cat << EOF
|
||||
${WHITE}Custom PHP Framework Deployment Orchestrator${NC}
|
||||
${CYAN}Domain: michaelschiemer.de | Email: kontakt@michaelschiemer.de | PHP: 8.4${NC}
|
||||
|
||||
${WHITE}Usage:${NC} $0 [environment] [options]
|
||||
|
||||
${WHITE}Environments:${NC}
|
||||
${GREEN}development${NC} Deploy to development environment
|
||||
${GREEN}staging${NC} Deploy to staging environment (default)
|
||||
${GREEN}production${NC} Deploy to production environment
|
||||
|
||||
${WHITE}Deployment Options:${NC}
|
||||
${YELLOW}--infrastructure-only${NC} Deploy only infrastructure (Ansible)
|
||||
${YELLOW}--application-only${NC} Deploy only application (Docker Compose)
|
||||
${YELLOW}--dry-run${NC} Show what would be done without making changes
|
||||
${YELLOW}--skip-tests${NC} Skip running tests before deployment
|
||||
${YELLOW}--skip-backup${NC} Skip database backup (not recommended for production)
|
||||
${YELLOW}--force${NC} Force deployment even if validation fails
|
||||
${YELLOW}--non-interactive${NC} Skip confirmation prompts
|
||||
${YELLOW}--verbose${NC} Enable verbose output
|
||||
|
||||
${WHITE}General Options:${NC}
|
||||
${YELLOW}-h, --help${NC} Show this help message
|
||||
${YELLOW}--version${NC} Show version information
|
||||
|
||||
${WHITE}Examples:${NC}
|
||||
${CYAN}$0 staging${NC} # Deploy to staging
|
||||
${CYAN}$0 production --infrastructure-only${NC} # Deploy only infrastructure to production
|
||||
${CYAN}$0 staging --application-only --skip-tests${NC} # Deploy only app to staging without tests
|
||||
${CYAN}$0 production --dry-run --verbose${NC} # Dry run with detailed output
|
||||
${CYAN}$0 development --non-interactive${NC} # Development deploy without prompts
|
||||
|
||||
${WHITE}Safety Features:${NC}
|
||||
• Production deployments require confirmation
|
||||
• Pre-flight validation checks
|
||||
• Database backups before deployment
|
||||
• Health checks after deployment
|
||||
• Rollback capability on failures
|
||||
|
||||
EOF
|
||||
}
|
||||
|
||||
# Version information
|
||||
show_version() {
|
||||
cat << EOF
|
||||
${WHITE}Custom PHP Framework Deployment System${NC}
|
||||
Version: 1.0.0
|
||||
Domain: michaelschiemer.de
|
||||
Email: kontakt@michaelschiemer.de
|
||||
PHP Version: 8.4
|
||||
Build Date: $(date +'%Y-%m-%d')
|
||||
EOF
|
||||
}
|
||||
|
||||
# Parse command line arguments
|
||||
parse_arguments() {
|
||||
local environment=""
|
||||
|
||||
while [[ $# -gt 0 ]]; do
|
||||
case $1 in
|
||||
development|staging|production)
|
||||
environment="$1"
|
||||
shift
|
||||
;;
|
||||
--infrastructure-only)
|
||||
INFRASTRUCTURE_ONLY=true
|
||||
APPLICATION_ONLY=false
|
||||
shift
|
||||
;;
|
||||
--application-only)
|
||||
APPLICATION_ONLY=true
|
||||
INFRASTRUCTURE_ONLY=false
|
||||
shift
|
||||
;;
|
||||
--dry-run)
|
||||
DRY_RUN=true
|
||||
shift
|
||||
;;
|
||||
--skip-tests)
|
||||
SKIP_TESTS=true
|
||||
shift
|
||||
;;
|
||||
--skip-backup)
|
||||
SKIP_BACKUP=true
|
||||
shift
|
||||
;;
|
||||
--force)
|
||||
FORCE_DEPLOY=true
|
||||
shift
|
||||
;;
|
||||
--non-interactive)
|
||||
INTERACTIVE=false
|
||||
shift
|
||||
;;
|
||||
--verbose)
|
||||
VERBOSE=true
|
||||
shift
|
||||
;;
|
||||
--version)
|
||||
show_version
|
||||
exit 0
|
||||
;;
|
||||
-h|--help)
|
||||
show_usage
|
||||
exit 0
|
||||
;;
|
||||
*)
|
||||
error "Unknown argument: $1"
|
||||
echo
|
||||
show_usage
|
||||
exit 1
|
||||
;;
|
||||
esac
|
||||
done
|
||||
|
||||
# Set environment, defaulting to staging
|
||||
DEPLOY_ENV="${environment:-$DEFAULT_ENV}"
|
||||
}
|
||||
|
||||
# Confirmation prompt for production deployments
|
||||
confirm_deployment() {
|
||||
if [ "$INTERACTIVE" = false ]; then
|
||||
debug "Non-interactive mode, skipping confirmation"
|
||||
return 0
|
||||
fi
|
||||
|
||||
if [ "$DEPLOY_ENV" = "production" ]; then
|
||||
section "PRODUCTION DEPLOYMENT CONFIRMATION"
|
||||
echo -e "${RED}⚠️ You are about to deploy to PRODUCTION environment!${NC}"
|
||||
echo -e "${YELLOW}Domain: michaelschiemer.de${NC}"
|
||||
echo -e "${YELLOW}This will affect the live website.${NC}"
|
||||
echo
|
||||
|
||||
if [ "$DRY_RUN" = true ]; then
|
||||
echo -e "${BLUE}This is a DRY RUN - no actual changes will be made.${NC}"
|
||||
else
|
||||
echo -e "${RED}This is a LIVE DEPLOYMENT - changes will be applied immediately.${NC}"
|
||||
fi
|
||||
|
||||
echo
|
||||
read -p "Are you sure you want to continue? [y/N]: " -n 1 -r
|
||||
echo
|
||||
|
||||
if [[ ! $REPLY =~ ^[Yy]$ ]]; then
|
||||
log "Deployment cancelled by user"
|
||||
exit 0
|
||||
fi
|
||||
|
||||
# Second confirmation for non-dry-run production deployments
|
||||
if [ "$DRY_RUN" != true ]; then
|
||||
echo
|
||||
echo -e "${RED}FINAL CONFIRMATION: This will deploy to PRODUCTION.${NC}"
|
||||
read -p "Type 'DEPLOY' to confirm: " -r
|
||||
echo
|
||||
|
||||
if [[ $REPLY != "DEPLOY" ]]; then
|
||||
log "Deployment cancelled - confirmation not received"
|
||||
exit 0
|
||||
fi
|
||||
fi
|
||||
else
|
||||
section "DEPLOYMENT CONFIRMATION"
|
||||
echo -e "${GREEN}Deploying to: ${DEPLOY_ENV} environment${NC}"
|
||||
echo -e "${YELLOW}Domain: michaelschiemer.de${NC}"
|
||||
|
||||
if [ "$DRY_RUN" = true ]; then
|
||||
echo -e "${BLUE}Mode: DRY RUN (no actual changes)${NC}"
|
||||
fi
|
||||
|
||||
echo
|
||||
read -p "Continue with deployment? [Y/n]: " -n 1 -r
|
||||
echo
|
||||
|
||||
if [[ $REPLY =~ ^[Nn]$ ]]; then
|
||||
log "Deployment cancelled by user"
|
||||
exit 0
|
||||
fi
|
||||
fi
|
||||
}
|
||||
|
||||
# Enhanced environment detection
|
||||
detect_environment() {
|
||||
section "DETECTING DEPLOYMENT ENVIRONMENT"
|
||||
|
||||
log "Analyzing deployment environment: $DEPLOY_ENV"
|
||||
|
||||
# Detect if this is a fresh setup
|
||||
local env_file="${APPLICATIONS_DIR}/environments/.env.${DEPLOY_ENV}"
|
||||
if [[ ! -f "$env_file" ]]; then
|
||||
warn "Environment configuration not found: .env.${DEPLOY_ENV}"
|
||||
info "Consider running setup wizard: ./setup-wizard.sh"
|
||||
|
||||
# Check if template exists
|
||||
local template_file="${env_file}.template"
|
||||
if [[ -f "$template_file" ]]; then
|
||||
printf "${CYAN}Create environment configuration now? [Y/n]: ${NC}"
|
||||
read -r create_env
|
||||
if [[ ! $create_env =~ ^[Nn]$ ]]; then
|
||||
info "Running configuration setup..."
|
||||
if command -v "${LIB_DIR}/config-manager.sh" >/dev/null 2>&1; then
|
||||
"${LIB_DIR}/config-manager.sh" apply-config "$DEPLOY_ENV" \
|
||||
"${DOMAIN:-$DEPLOY_ENV.michaelschiemer.de}" \
|
||||
"${EMAIL:-kontakt@michaelschiemer.de}"
|
||||
fi
|
||||
fi
|
||||
fi
|
||||
fi
|
||||
|
||||
# Environment-specific warnings
|
||||
case $DEPLOY_ENV in
|
||||
production)
|
||||
if [[ "$INTERACTIVE" == "true" && "$FORCE_DEPLOY" != "true" ]]; then
|
||||
warn "Production deployment detected"
|
||||
warn "This will affect the live website: michaelschiemer.de"
|
||||
fi
|
||||
;;
|
||||
staging)
|
||||
info "Staging deployment - safe for testing"
|
||||
;;
|
||||
development)
|
||||
info "Development deployment - local development environment"
|
||||
;;
|
||||
*)
|
||||
warn "Unknown environment: $DEPLOY_ENV"
|
||||
warn "Proceeding with custom environment configuration"
|
||||
;;
|
||||
esac
|
||||
|
||||
success "Environment detection completed"
|
||||
}
|
||||
|
||||
# Validate prerequisites and environment
|
||||
validate_prerequisites() {
|
||||
section "VALIDATING PREREQUISITES"
|
||||
|
||||
log "Checking deployment environment: $DEPLOY_ENV"
|
||||
|
||||
# Check if we're in the project root
|
||||
if [[ ! -f "${PROJECT_ROOT}/docker-compose.yml" ]]; then
|
||||
error "Project root not found. Please run from the correct directory."
|
||||
error "Expected file: ${PROJECT_ROOT}/docker-compose.yml"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
# Check for required tools
|
||||
local required_tools=()
|
||||
|
||||
if [ "$INFRASTRUCTURE_ONLY" = true ] || [ "$APPLICATION_ONLY" = false ]; then
|
||||
required_tools+=("ansible-playbook")
|
||||
fi
|
||||
|
||||
if [ "$APPLICATION_ONLY" = true ] || [ "$INFRASTRUCTURE_ONLY" = false ]; then
|
||||
required_tools+=("docker" "docker-compose")
|
||||
fi
|
||||
|
||||
for tool in "${required_tools[@]}"; do
|
||||
if ! command -v "$tool" &> /dev/null; then
|
||||
error "Required tool not found: $tool"
|
||||
case $tool in
|
||||
ansible-playbook)
|
||||
error "Install with: sudo apt-get install ansible"
|
||||
;;
|
||||
docker)
|
||||
error "Install with: curl -fsSL https://get.docker.com -o get-docker.sh && sh get-docker.sh"
|
||||
;;
|
||||
docker-compose)
|
||||
error "Install with: sudo apt-get install docker-compose"
|
||||
;;
|
||||
esac
|
||||
exit 1
|
||||
else
|
||||
debug "✓ $tool found"
|
||||
fi
|
||||
done
|
||||
|
||||
# Validate environment-specific files
|
||||
if [ "$APPLICATION_ONLY" = true ] || [ "$INFRASTRUCTURE_ONLY" = false ]; then
|
||||
local compose_file="${APPLICATIONS_DIR}/docker-compose.${DEPLOY_ENV}.yml"
|
||||
local env_file="${APPLICATIONS_DIR}/environments/.env.${DEPLOY_ENV}"
|
||||
|
||||
if [[ ! -f "$compose_file" ]]; then
|
||||
error "Docker Compose overlay not found: $compose_file"
|
||||
exit 1
|
||||
else
|
||||
debug "✓ Docker Compose overlay found"
|
||||
fi
|
||||
|
||||
if [[ ! -f "$env_file" ]]; then
|
||||
error "Environment file not found: $env_file"
|
||||
error "Copy from template: cp ${env_file}.template $env_file"
|
||||
exit 1
|
||||
else
|
||||
debug "✓ Environment file found"
|
||||
fi
|
||||
fi
|
||||
|
||||
# Validate Ansible inventory
|
||||
if [ "$INFRASTRUCTURE_ONLY" = true ] || [ "$APPLICATION_ONLY" = false ]; then
|
||||
local inventory="${INFRASTRUCTURE_DIR}/inventories/${DEPLOY_ENV}/hosts.yml"
|
||||
|
||||
if [[ ! -f "$inventory" ]]; then
|
||||
warn "Ansible inventory not found: $inventory"
|
||||
if [ "$INFRASTRUCTURE_ONLY" = true ]; then
|
||||
error "Infrastructure deployment requires inventory file"
|
||||
exit 1
|
||||
else
|
||||
warn "Skipping infrastructure deployment"
|
||||
INFRASTRUCTURE_ONLY=false
|
||||
APPLICATION_ONLY=true
|
||||
fi
|
||||
else
|
||||
debug "✓ Ansible inventory found"
|
||||
|
||||
# Check if this is a fresh server setup
|
||||
if grep -q "fresh_server_setup: true" "$inventory" 2>/dev/null; then
|
||||
warn "Fresh server setup detected in inventory"
|
||||
warn "Run initial setup first: ansible-playbook -i $inventory setup-fresh-server.yml"
|
||||
if [ "$FORCE_DEPLOY" != true ]; then
|
||||
error "Use --force to skip initial setup check"
|
||||
exit 1
|
||||
fi
|
||||
fi
|
||||
fi
|
||||
fi
|
||||
|
||||
success "Prerequisites validation completed"
|
||||
}
|
||||
|
||||
# Validate configuration files
|
||||
validate_configuration() {
|
||||
section "VALIDATING CONFIGURATION"
|
||||
|
||||
if [ "$APPLICATION_ONLY" = true ] || [ "$INFRASTRUCTURE_ONLY" = false ]; then
|
||||
log "Validating application environment configuration"
|
||||
|
||||
local env_file="${APPLICATIONS_DIR}/environments/.env.${DEPLOY_ENV}"
|
||||
|
||||
# Check for required placeholder values
|
||||
local placeholder_found=false
|
||||
local required_placeholders=(
|
||||
"*** REQUIRED"
|
||||
"your-domain.com"
|
||||
"your-email@example.com"
|
||||
)
|
||||
|
||||
for placeholder in "${required_placeholders[@]}"; do
|
||||
if grep -q "$placeholder" "$env_file" 2>/dev/null; then
|
||||
error "Environment file contains unfilled templates:"
|
||||
grep "$placeholder" "$env_file" || true
|
||||
placeholder_found=true
|
||||
fi
|
||||
done
|
||||
|
||||
if [ "$placeholder_found" = true ]; then
|
||||
if [ "$FORCE_DEPLOY" != true ]; then
|
||||
error "Fix configuration placeholders or use --force to proceed"
|
||||
exit 1
|
||||
else
|
||||
warn "Proceeding with incomplete configuration due to --force flag"
|
||||
fi
|
||||
else
|
||||
debug "✓ No placeholder values found"
|
||||
fi
|
||||
|
||||
# Validate critical environment variables
|
||||
source "$env_file"
|
||||
|
||||
if [[ "$DEPLOY_ENV" = "production" ]]; then
|
||||
if [[ -z "${DB_PASSWORD:-}" || "${DB_PASSWORD}" = "changeme" ]]; then
|
||||
error "Production deployment requires a secure database password"
|
||||
if [ "$FORCE_DEPLOY" != true ]; then
|
||||
exit 1
|
||||
fi
|
||||
fi
|
||||
|
||||
if [[ "${APP_DEBUG:-false}" = "true" ]]; then
|
||||
warn "Debug mode is enabled in production environment"
|
||||
fi
|
||||
fi
|
||||
fi
|
||||
|
||||
success "Configuration validation completed"
|
||||
}
|
||||
|
||||
# Run deployment tests
|
||||
run_deployment_tests() {
|
||||
if [ "$SKIP_TESTS" = true ]; then
|
||||
warn "Skipping tests as requested"
|
||||
return 0
|
||||
fi
|
||||
|
||||
section "RUNNING DEPLOYMENT TESTS"
|
||||
|
||||
if [ "$APPLICATION_ONLY" = true ] || [ "$INFRASTRUCTURE_ONLY" = false ]; then
|
||||
cd "$PROJECT_ROOT"
|
||||
|
||||
# PHP tests
|
||||
if [[ -f "vendor/bin/pest" ]]; then
|
||||
log "Running PHP tests with Pest"
|
||||
if [ "$DRY_RUN" != true ]; then
|
||||
./vendor/bin/pest --bail
|
||||
else
|
||||
debug "DRY RUN: Would run PHP tests"
|
||||
fi
|
||||
elif [[ -f "vendor/bin/phpunit" ]]; then
|
||||
log "Running PHP tests with PHPUnit"
|
||||
if [ "$DRY_RUN" != true ]; then
|
||||
./vendor/bin/phpunit --stop-on-failure
|
||||
else
|
||||
debug "DRY RUN: Would run PHPUnit tests"
|
||||
fi
|
||||
else
|
||||
warn "No PHP test framework found"
|
||||
fi
|
||||
|
||||
# JavaScript tests
|
||||
if [[ -f "package.json" ]] && command -v npm &> /dev/null; then
|
||||
log "Running JavaScript tests"
|
||||
if [ "$DRY_RUN" != true ]; then
|
||||
npm test
|
||||
else
|
||||
debug "DRY RUN: Would run JavaScript tests"
|
||||
fi
|
||||
fi
|
||||
|
||||
# Code quality checks
|
||||
if [[ -f "composer.json" ]]; then
|
||||
log "Running code style checks"
|
||||
if [ "$DRY_RUN" != true ]; then
|
||||
composer cs || {
|
||||
error "Code style checks failed"
|
||||
if [ "$FORCE_DEPLOY" != true ]; then
|
||||
exit 1
|
||||
else
|
||||
warn "Proceeding despite code style issues due to --force flag"
|
||||
fi
|
||||
}
|
||||
else
|
||||
debug "DRY RUN: Would run code style checks"
|
||||
fi
|
||||
fi
|
||||
fi
|
||||
|
||||
# Ansible syntax check
|
||||
if [ "$INFRASTRUCTURE_ONLY" = true ] || [ "$APPLICATION_ONLY" = false ]; then
|
||||
log "Validating Ansible playbook syntax"
|
||||
|
||||
local inventory="${INFRASTRUCTURE_DIR}/inventories/${DEPLOY_ENV}/hosts.yml"
|
||||
local playbook="${INFRASTRUCTURE_DIR}/site.yml"
|
||||
|
||||
if [[ -f "$inventory" && -f "$playbook" ]]; then
|
||||
cd "$INFRASTRUCTURE_DIR"
|
||||
|
||||
if [ "$DRY_RUN" != true ]; then
|
||||
ansible-playbook -i "$inventory" "$playbook" --syntax-check
|
||||
else
|
||||
debug "DRY RUN: Would validate Ansible syntax"
|
||||
fi
|
||||
fi
|
||||
fi
|
||||
|
||||
success "All tests passed"
|
||||
}
|
||||
|
||||
# Deploy infrastructure using Ansible
|
||||
deploy_infrastructure() {
|
||||
if [ "$APPLICATION_ONLY" = true ]; then
|
||||
debug "Skipping infrastructure deployment (application-only mode)"
|
||||
return 0
|
||||
fi
|
||||
|
||||
section "DEPLOYING INFRASTRUCTURE"
|
||||
|
||||
local inventory="${INFRASTRUCTURE_DIR}/inventories/${DEPLOY_ENV}/hosts.yml"
|
||||
local playbook="${INFRASTRUCTURE_DIR}/site.yml"
|
||||
|
||||
if [[ ! -f "$inventory" ]]; then
|
||||
warn "Ansible inventory not found: $inventory"
|
||||
warn "Skipping infrastructure deployment"
|
||||
return 0
|
||||
fi
|
||||
|
||||
log "Deploying infrastructure with Ansible for $DEPLOY_ENV environment"
|
||||
|
||||
cd "$INFRASTRUCTURE_DIR"
|
||||
|
||||
local ansible_cmd="ansible-playbook -i $inventory $playbook"
|
||||
|
||||
if [ "$DRY_RUN" = true ]; then
|
||||
ansible_cmd="$ansible_cmd --check"
|
||||
fi
|
||||
|
||||
if [ "$VERBOSE" = true ]; then
|
||||
ansible_cmd="$ansible_cmd -v"
|
||||
fi
|
||||
|
||||
debug "Running: $ansible_cmd"
|
||||
|
||||
if [ "$DRY_RUN" != true ]; then
|
||||
$ansible_cmd
|
||||
success "Infrastructure deployment completed"
|
||||
else
|
||||
debug "DRY RUN: Would run Ansible infrastructure deployment"
|
||||
success "Infrastructure deployment dry run completed"
|
||||
fi
|
||||
}

# Deploy application using the existing script
deploy_application() {
    if [ "$INFRASTRUCTURE_ONLY" = true ]; then
        debug "Skipping application deployment (infrastructure-only mode)"
        return 0
    fi

    section "DEPLOYING APPLICATION"

    log "Deploying application with Docker Compose for $DEPLOY_ENV environment"

    local app_deploy_script="${APPLICATIONS_DIR}/scripts/deploy-app.sh"

    if [[ ! -f "$app_deploy_script" ]]; then
        error "Application deployment script not found: $app_deploy_script"
        exit 1
    fi

    # Build command arguments
    local app_args=("$DEPLOY_ENV")

    if [ "$DRY_RUN" = true ]; then
        app_args+=("--dry-run")
    fi

    if [ "$SKIP_TESTS" = true ]; then
        app_args+=("--skip-tests")
    fi

    if [ "$SKIP_BACKUP" = true ]; then
        app_args+=("--skip-backup")
    fi

    if [ "$FORCE_DEPLOY" = true ]; then
        app_args+=("--force")
    fi

    if [ "$VERBOSE" = true ]; then
        app_args+=("--verbose")
    fi

    debug "Running: $app_deploy_script ${app_args[*]}"

    # Execute application deployment
    "$app_deploy_script" "${app_args[@]}"

    success "Application deployment completed"
}

# Perform comprehensive post-deployment validation
post_deployment_validation() {
    section "POST-DEPLOYMENT VALIDATION"

    log "Performing comprehensive deployment validation"

    # Service health checks
    if [ "$INFRASTRUCTURE_ONLY" = true ] || [ "$APPLICATION_ONLY" = false ]; then
        log "Validating infrastructure services"

        # This would typically involve SSH connections to verify services
        # For now, we'll do basic connectivity tests

        debug "Infrastructure validation completed"
    fi

    # Application health checks
    if [ "$APPLICATION_ONLY" = true ] || [ "$INFRASTRUCTURE_ONLY" = false ]; then
        log "Validating application deployment"

        local health_check_script="${APPLICATIONS_DIR}/scripts/health-check.sh"

        if [[ -f "$health_check_script" ]]; then
            if [ "$DRY_RUN" != true ]; then
                "$health_check_script" "$DEPLOY_ENV"
            else
                debug "DRY RUN: Would run health checks"
            fi
        else
            warn "Health check script not found, performing basic validation"

            # Basic Docker Compose health check
            if [ "$DRY_RUN" != true ]; then
                cd "$PROJECT_ROOT"
                local compose_files="-f docker-compose.yml -f ${APPLICATIONS_DIR}/docker-compose.${DEPLOY_ENV}.yml"
                docker-compose $compose_files ps
            fi
        fi
    fi

    success "Post-deployment validation completed"
}

# Display deployment summary
show_deployment_summary() {
    section "DEPLOYMENT SUMMARY"

    local deployment_type=""
    if [ "$INFRASTRUCTURE_ONLY" = true ]; then
        deployment_type="Infrastructure Only"
    elif [ "$APPLICATION_ONLY" = true ]; then
        deployment_type="Application Only"
    else
        deployment_type="Full Stack (Infrastructure + Application)"
    fi

    cat << EOF
${WHITE}🎉 DEPLOYMENT COMPLETED SUCCESSFULLY! 🎉${NC}

${CYAN}Deployment Details:${NC}
• Environment: ${WHITE}${DEPLOY_ENV^^}${NC}
• Type: ${WHITE}${deployment_type}${NC}
• Domain: ${WHITE}michaelschiemer.de${NC}
• PHP Version: ${WHITE}8.4${NC}
• Mode: ${WHITE}$([ "$DRY_RUN" = true ] && echo "DRY RUN" || echo "LIVE DEPLOYMENT")${NC}

${CYAN}What was deployed:${NC}
EOF

    if [ "$INFRASTRUCTURE_ONLY" = true ] || [ "$APPLICATION_ONLY" = false ]; then
        echo "• ✅ Infrastructure (Ansible)"
        echo "  - Base security hardening"
        echo "  - Docker runtime environment"
        echo "  - Nginx reverse proxy with SSL"
        echo "  - System monitoring and health checks"
    fi

    if [ "$APPLICATION_ONLY" = true ] || [ "$INFRASTRUCTURE_ONLY" = false ]; then
        echo "• ✅ Application (Docker Compose)"
        echo "  - PHP 8.4 application container"
        echo "  - Database with migrations"
        echo "  - Frontend assets built and deployed"
        echo "  - Health checks configured"
    fi

    echo

    if [ "$DEPLOY_ENV" = "production" ]; then
        cat << EOF
${GREEN}🌟 Production Deployment Complete!${NC}
Your Custom PHP Framework is now live at: ${WHITE}https://michaelschiemer.de${NC}

${YELLOW}Next Steps:${NC}
• Monitor application performance and logs
• Verify all functionality is working correctly
• Update DNS records if this is a new deployment
• Consider setting up automated monitoring alerts

EOF
    else
        cat << EOF
${GREEN}🚀 ${DEPLOY_ENV^} Deployment Complete!${NC}

${YELLOW}Next Steps:${NC}
• Test all application functionality
• Run integration tests
• Verify performance and security
• Prepare for production deployment when ready

EOF
    fi

    if [ "$DRY_RUN" = true ]; then
        echo -e "${BLUE}Note: This was a dry run. No actual changes were made.${NC}"
        echo -e "${BLUE}Remove the --dry-run flag to perform the actual deployment.${NC}"
        echo
    fi
}
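
`${DEPLOY_ENV^^}` and `${DEPLOY_ENV^}` in the summary above are bash 4+ case-modification expansions (uppercase the whole value vs. only the first character):

```shell
# Bash case-modification parameter expansions, as used in the summary.
env_name="staging"
echo "${env_name^^}"   # STAGING
echo "${env_name^}"    # Staging
```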

# Error handling and cleanup
cleanup() {
    local exit_code=$?

    if [ $exit_code -ne 0 ]; then
        error "Deployment failed with exit code: $exit_code"

        if [ "$DEPLOY_ENV" = "production" ] && [ "$DRY_RUN" != true ]; then
            error "PRODUCTION DEPLOYMENT FAILED!"
            error "Immediate action required. Check logs and consider rollback."
        fi

        echo
        echo -e "${RED}Troubleshooting Tips:${NC}"
        echo "• Check the error messages above for specific issues"
        echo "• Review configuration files for missing or incorrect values"
        echo "• Verify all required services are running"
        echo "• Check network connectivity to deployment targets"
        echo "• Review the deployment documentation in deployment/docs/"

        # Offer to run with verbose mode if not already enabled
        if [ "$VERBOSE" != true ]; then
            echo "• Try running with --verbose flag for more detailed output"
        fi

        # Offer dry run option if this was a live deployment
        if [ "$DRY_RUN" != true ]; then
            echo "• Use --dry-run flag to test deployment without making changes"
        fi
    fi
}

# Set up error handling
trap cleanup EXIT
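
The EXIT trap above fires on any script termination; inside the handler, `$?` still holds the script's exit code, which is what `cleanup` inspects. A small self-contained demonstration of that pattern:

```shell
# Run a child bash that installs an EXIT trap and exits non-zero;
# the handler observes the original exit code via $?.
output=$(bash -c 'on_exit() { echo "exit code: $?"; }; trap on_exit EXIT; exit 3')
echo "$output"   # exit code: 3
```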

# Enhanced deployment health check
deployment_health_check() {
    section "DEPLOYMENT HEALTH CHECK"

    log "Performing comprehensive pre-deployment health check"

    local health_score=0
    local max_score=100

    # Check environment configuration (25 points)
    local env_file="${APPLICATIONS_DIR}/environments/.env.${DEPLOY_ENV}"
    if [[ -f "$env_file" ]]; then
        if ! grep -q "\*\*\* REQUIRED \*\*\*" "$env_file" 2>/dev/null; then
            health_score=$((health_score + 25))
            debug "✓ Environment configuration complete"
        else
            warn "Environment configuration incomplete"
        fi
    else
        warn "Environment configuration missing"
    fi

    # Check Docker availability (25 points)
    if docker info >/dev/null 2>&1; then
        health_score=$((health_score + 25))
        debug "✓ Docker daemon accessible"
    else
        warn "Docker daemon not accessible"
    fi

    # Check network connectivity (25 points)
    if [[ "$DEPLOY_ENV" != "development" ]]; then
        if ping -c 1 8.8.8.8 >/dev/null 2>&1; then
            health_score=$((health_score + 25))
            debug "✓ Internet connectivity available"
        else
            warn "Internet connectivity issues detected"
        fi
    else
        health_score=$((health_score + 25))  # Skip for development
    fi

    # Check project files (25 points)
    local required_files=("docker-compose.yml" "composer.json")
    local files_found=0
    for file in "${required_files[@]}"; do
        if [[ -f "${PROJECT_ROOT}/${file}" ]]; then
            ((files_found++))
        fi
    done

    if [[ $files_found -eq ${#required_files[@]} ]]; then
        health_score=$((health_score + 25))
        debug "✓ All required project files found"
    else
        warn "Some project files missing"
    fi

    # Health score summary
    local health_percentage=$((health_score * 100 / max_score))

    if [[ $health_percentage -ge 90 ]]; then
        success "Deployment health check: EXCELLENT ($health_percentage%)"
    elif [[ $health_percentage -ge 75 ]]; then
        log "Deployment health check: GOOD ($health_percentage%)"
    elif [[ $health_percentage -ge 50 ]]; then
        warn "Deployment health check: FAIR ($health_percentage%)"
    else
        error "Deployment health check: POOR ($health_percentage%)"
        if [[ "$FORCE_DEPLOY" != "true" ]]; then
            error "Health check failed. Use --force to proceed anyway."
            exit 1
        fi
    fi

    return 0
}
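
The scoring above is a simple additive model: four probes worth 25 points each, reported as a percentage. The arithmetic in isolation, with probe results hard-coded for illustration:

```shell
# Additive health-score model: four probes, 25 points each.
health_score=0
max_score=100

# Three of four hypothetical probes pass (env file, Docker, project files);
# the connectivity probe fails, so its 25 points are not added.
for passed in 1 1 0 1; do
    if [ "$passed" -eq 1 ]; then
        health_score=$((health_score + 25))
    fi
done

echo $((health_score * 100 / max_score))   # 75, i.e. the "GOOD" tier
```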

# Main deployment orchestration function
main() {
    log "Starting Custom PHP Framework deployment orchestration"

    if [ "$DRY_RUN" = true ]; then
        log "🧪 DRY RUN MODE - No actual changes will be made"
    fi

    # Pre-deployment steps
    detect_environment
    deployment_health_check
    confirm_deployment
    validate_prerequisites
    validate_configuration
    run_deployment_tests

    # Deployment execution
    deploy_infrastructure
    deploy_application

    # Post-deployment validation
    post_deployment_validation

    # Success summary
    show_deployment_summary

    success "Deployment orchestration completed successfully!"
}

# Script execution
parse_arguments "$@"
main
@@ -1,388 +0,0 @@

# Environment Configuration Guide

This guide covers how to configure and manage different deployment environments for the Custom PHP Framework.

## Project Configuration

- **Domain**: michaelschiemer.de
- **Email**: kontakt@michaelschiemer.de
- **PHP Version**: 8.4

## Available Environments

### Development
- **Purpose**: Local development and testing
- **Domain**: development.michaelschiemer.de (or localhost)
- **SSL**: Self-signed certificates
- **Debug**: Enabled
- **Database**: Local container

### Staging
- **Purpose**: Pre-production testing and validation
- **Domain**: staging.michaelschiemer.de
- **SSL**: Let's Encrypt or provided certificates
- **Debug**: Limited debugging
- **Database**: Staging database with production-like data

### Production
- **Purpose**: Live production environment
- **Domain**: michaelschiemer.de
- **SSL**: Let's Encrypt with strict security
- **Debug**: Disabled
- **Database**: Production database with backups

## Environment Files Structure

```
deployment/applications/environments/
├── .env.development
├── .env.staging
├── .env.production
├── .env.development.template
├── .env.staging.template
└── .env.production.template
```

## Configuration Variables

### Application Settings

```bash
# Application Environment
APP_ENV=production                  # Environment name
APP_DEBUG=false                     # Debug mode (true only for development)
APP_URL=https://michaelschiemer.de  # Application URL

# Framework Settings
FRAMEWORK_VERSION=1.0.0             # Framework version
FRAMEWORK_ENV=production            # Framework environment
```

### Database Configuration

```bash
# Database Connection
DB_CONNECTION=mysql
DB_HOST=db                                # Docker service name
DB_PORT=3306
DB_DATABASE=michaelschiemer
DB_USERNAME=app_user
DB_PASSWORD=*** SECURE PASSWORD ***       # Generate strong password
DB_ROOT_PASSWORD=*** SECURE PASSWORD ***  # Generate strong password
```

### SSL and Security

```bash
# SSL Configuration
SSL_EMAIL=kontakt@michaelschiemer.de  # Let's Encrypt email
DOMAIN_NAME=michaelschiemer.de        # Primary domain

# Security Settings
SECURITY_LEVEL=high                   # Security hardening level
FIREWALL_STRICT_MODE=true             # Enable strict firewall rules
FAIL2BAN_ENABLED=true                 # Enable fail2ban protection
```

### Performance and Caching

```bash
# Performance Settings
PHP_MEMORY_LIMIT=512M
PHP_MAX_EXECUTION_TIME=60
OPCACHE_ENABLED=true

# Caching
CACHE_DRIVER=redis
REDIS_HOST=redis
REDIS_PORT=6379
```

### Email Configuration

```bash
# Email Settings
MAIL_MAILER=smtp
MAIL_HOST=smtp.mailgun.org
MAIL_PORT=587
MAIL_USERNAME=*** REQUIRED ***
MAIL_PASSWORD=*** REQUIRED ***
MAIL_FROM_ADDRESS=noreply@michaelschiemer.de
MAIL_FROM_NAME="Michael Schiemer"
```

## Environment-Specific Configurations

### Development Environment

```bash
# Development-specific settings
APP_ENV=development
APP_DEBUG=true
APP_URL=https://localhost

# Relaxed security for development
SECURITY_LEVEL=standard
FIREWALL_STRICT_MODE=false

# Development database
DB_DATABASE=michaelschiemer_dev
DB_PASSWORD=dev_password   # Simple password for dev

# Development mail (log emails instead of sending)
MAIL_MAILER=log
```

### Staging Environment

```bash
# Staging-specific settings
APP_ENV=staging
APP_DEBUG=false
APP_URL=https://staging.michaelschiemer.de

# Production-like security
SECURITY_LEVEL=high
FIREWALL_STRICT_MODE=true

# Staging database
DB_DATABASE=michaelschiemer_staging
DB_PASSWORD=*** SECURE STAGING PASSWORD ***

# Email testing
MAIL_MAILER=smtp
MAIL_HOST=smtp.mailtrap.io   # Testing service
```

### Production Environment

```bash
# Production settings
APP_ENV=production
APP_DEBUG=false
APP_URL=https://michaelschiemer.de

# Maximum security
SECURITY_LEVEL=high
FIREWALL_STRICT_MODE=true
FAIL2BAN_ENABLED=true

# Production database
DB_DATABASE=michaelschiemer_prod
DB_PASSWORD=*** VERY SECURE PRODUCTION PASSWORD ***

# Production email
MAIL_MAILER=smtp
MAIL_HOST=smtp.mailgun.org
MAIL_USERNAME=*** PRODUCTION MAIL USERNAME ***
MAIL_PASSWORD=*** PRODUCTION MAIL PASSWORD ***
```

## Security Best Practices

### Password Generation

Generate secure passwords using:

```bash
# Generate random password
openssl rand -base64 32 | tr -d "=+/" | cut -c1-25

# Generate application key
openssl rand -base64 32
```

### Environment File Security

```bash
# Set restrictive permissions
chmod 600 .env.*

# Never commit to version control
# (Already in .gitignore)

# Use different passwords for each environment
# Never reuse production passwords in staging/dev
```

### SSL Certificate Management

```bash
# Let's Encrypt (recommended for production)
SSL_PROVIDER=letsencrypt
SSL_EMAIL=kontakt@michaelschiemer.de

# Self-signed (development only)
SSL_PROVIDER=self-signed

# Custom certificates
SSL_PROVIDER=custom
SSL_CERT_FILE=/path/to/cert.pem
SSL_KEY_FILE=/path/to/key.pem
```

## Database Configuration

### Connection Settings

```bash
# MySQL/MariaDB settings
DB_CONNECTION=mysql
DB_CHARSET=utf8mb4
DB_COLLATION=utf8mb4_unicode_ci
DB_TIMEZONE=+00:00

# Connection pooling
DB_POOL_MIN=5
DB_POOL_MAX=20
DB_POOL_TIMEOUT=30
```

### Backup Configuration

```bash
# Backup settings
BACKUP_ENABLED=true
BACKUP_FREQUENCY=daily
BACKUP_RETENTION_DAYS=30
BACKUP_STORAGE=local   # or s3, gcs, etc.
```
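
How `BACKUP_FREQUENCY` maps onto an actual schedule is deployment-specific; one plausible sketch (the cron strings and function name are assumptions, not part of the framework):

```shell
# Translate a BACKUP_FREQUENCY value into a cron schedule string (assumed mapping).
backup_schedule() {
    case "$1" in
        hourly) echo "0 * * * *" ;;
        daily)  echo "0 3 * * *" ;;   # 03:00 every night
        weekly) echo "0 3 * * 0" ;;   # 03:00 on Sundays
        *)      echo "unsupported frequency: $1" >&2; return 1 ;;
    esac
}

backup_schedule daily   # prints: 0 3 * * *
```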

## Monitoring and Logging

### Monitoring Configuration

```bash
# Monitoring settings
MONITORING_ENABLED=true
HEALTH_CHECK_ENDPOINT=/health
METRICS_ENDPOINT=/metrics

# Log levels
LOG_LEVEL=info    # debug, info, warning, error
LOG_CHANNEL=stack
```

### Performance Monitoring

```bash
# Performance settings
PERFORMANCE_MONITORING=true
SLOW_QUERY_LOG=true
QUERY_CACHE_ENABLED=true

# Memory and execution limits
PHP_MEMORY_LIMIT=512M
PHP_MAX_EXECUTION_TIME=60
NGINX_CLIENT_MAX_BODY_SIZE=50M
```

## Configuration Management Commands

### Using Make Commands

```bash
# Initialize configuration files
make init-config

# Edit environment configuration
make edit-config ENV=staging

# Validate configuration
make validate-config ENV=production

# Show safe configuration values
make show-config ENV=staging
```

### Using Deploy Script

```bash
# Validate configuration during deployment
./deploy.sh staging --dry-run

# Force deployment with incomplete config
./deploy.sh staging --force
```

## Environment Switching

### Quick Environment Changes

```bash
# Deploy to different environments
make deploy ENV=development
make deploy ENV=staging
make deploy ENV=production

# Environment-specific shortcuts
make deploy-development
make deploy-staging
make deploy-production
```

### Configuration Validation

```bash
# Check configuration before deployment
make validate-config ENV=production

# Test deployment without changes
make deploy-dry ENV=production
```

## Troubleshooting Configuration

### Common Issues

1. **Missing Template Values**
   ```bash
   # Check for unfilled templates (literal match, hence -F)
   grep -F '*** REQUIRED' .env.production
   ```

2. **Permission Issues**
   ```bash
   # Fix permissions
   chmod 600 .env.*
   ```

3. **Database Connection**
   ```bash
   # Test database connection
   docker-compose exec php php console.php db:ping
   ```

4. **SSL Certificate Issues**
   ```bash
   # Check SSL configuration
   make deploy-dry ENV=production
   ```

### Configuration Validation

The deployment system automatically validates:
- Required variables are set
- No template placeholders remain
- Secure passwords in production
- SSL configuration is valid
- Database connection settings
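
The placeholder check behind this validation can be sketched in a few lines (the `*** REQUIRED ***` marker comes from the environment templates; the function name is illustrative):

```shell
# Reject an env file that is missing or still contains template placeholders.
validate_env_file() {
    local file=$1
    [ -f "$file" ] || { echo "missing: $file"; return 1; }
    # -F searches for the literal marker string, not a regex
    if grep -qF '*** REQUIRED ***' "$file"; then
        echo "unfilled templates in: $file"
        return 1
    fi
    echo "ok: $file"
}

# Demonstration against a throwaway file:
tmp=$(mktemp)
echo 'DB_PASSWORD=*** REQUIRED ***' > "$tmp"
validate_env_file "$tmp"   # reports unfilled templates
```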

### Getting Help

```bash
# Show deployment information
make info

# Display all available commands
make help

# Check deployment status
make status ENV=production
```

## Next Steps

- Review the [Quick Start Guide](QUICKSTART.md) for deployment steps
- Check [Troubleshooting Guide](TROUBLESHOOTING.md) for common issues
- Test your configuration with dry-run deployments
- Set up monitoring and alerting for production environments
@@ -1,190 +0,0 @@

# Quick Start Guide

Get your Custom PHP Framework deployed quickly with this step-by-step guide.

## Project Information

- **Domain**: michaelschiemer.de
- **Email**: kontakt@michaelschiemer.de
- **PHP Version**: 8.4
- **Framework**: Custom PHP Framework

## Prerequisites

- Linux/macOS/WSL environment
- Internet connection
- Sudo privileges (for dependency installation)

## 1. First-Time Setup

Run the setup script to install dependencies and configure the deployment environment:

```bash
cd deployment/
./setup.sh
```

This will:
- Install Docker, Docker Compose, and Ansible
- Create configuration files from templates
- Generate SSH keys for deployment
- Validate the environment

### Non-Interactive Setup

For automated/CI environments:

```bash
./setup.sh --skip-prompts
```

## 2. Configure Your Environments

Edit the environment files created during setup:

```bash
# Development environment
nano applications/environments/.env.development

# Staging environment
nano applications/environments/.env.staging

# Production environment
nano applications/environments/.env.production
```

**Important**: Replace all template values, especially:
- Database passwords
- SSL email addresses
- API keys and secrets

## 3. Test Your Configuration

Validate your configuration without making changes:

```bash
# Using the deploy script
./deploy.sh staging --dry-run

# Using make commands
make deploy-dry ENV=staging
make validate-config ENV=staging
```

## 4. Deploy to Staging

Deploy to staging environment for testing:

```bash
# Using the deploy script
./deploy.sh staging

# Using make command
make deploy-staging
```

## 5. Deploy to Production

When ready for production:

```bash
# Test production deployment first
./deploy.sh production --dry-run

# Deploy to production (requires confirmation)
./deploy.sh production

# Or using make
make deploy-production
```

## Quick Commands Reference

### Main Deployment Commands

```bash
# Deploy full stack to staging
make deploy-staging

# Deploy full stack to production
make deploy-production

# Dry run for any environment
make deploy-dry ENV=production

# Deploy only infrastructure
make infrastructure ENV=staging

# Deploy only application
make application ENV=staging
```

### Status and Health Checks

```bash
# Check deployment status
make status ENV=staging

# Run health checks
make health ENV=production

# View application logs
make logs ENV=staging
```

### Configuration Management

```bash
# Show deployment info
make info

# Validate configuration
make validate-config ENV=production

# Edit configuration
make edit-config ENV=staging
```

### Emergency Commands

```bash
# Emergency stop all services
make emergency-stop ENV=staging

# Emergency restart all services
make emergency-restart ENV=production

# Create backup
make backup ENV=production
```

## Deployment Flow

1. **Validation**: Prerequisites, configuration, and tests
2. **Infrastructure**: Ansible deploys security, Docker, Nginx, SSL
3. **Application**: Docker Compose deploys PHP app, database, assets
4. **Health Checks**: Validates deployment success

## Safety Features

- Production deployments require double confirmation
- Database backups are created automatically
- Dry run mode for testing without changes
- Health checks verify deployment success
- Emergency stop/restart commands available
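
The double-confirmation guard can be sketched like this (the prompt wording and function name are illustrative, not the script's actual prompts):

```shell
# Require two explicit answers before a production deployment proceeds.
confirm_production() {
    local answer env_name
    read -r -p "Deploy to PRODUCTION? (yes/no) " answer
    [ "$answer" = "yes" ] || return 1
    read -r -p "Type 'production' to confirm: " env_name
    [ "$env_name" = "production" ]
}

# Non-interactive demonstration: feed both answers on stdin.
printf 'yes\nproduction\n' | confirm_production && echo "confirmed"
```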

## Next Steps

- Review [Environment Configuration](ENVIRONMENTS.md) for detailed setup
- Check [Troubleshooting Guide](TROUBLESHOOTING.md) if issues arise
- Customize Ansible playbooks for your specific needs
- Set up monitoring and alerting for production

## Support

For issues or questions:
- Check the troubleshooting guide
- Review deployment logs
- Verify configuration files
- Test with dry-run mode first

Happy deploying! 🚀
@@ -1,606 +0,0 @@

# Troubleshooting Guide

This guide helps you diagnose and fix common deployment issues for the Custom PHP Framework.

## Project Information

- **Domain**: michaelschiemer.de
- **Email**: kontakt@michaelschiemer.de
- **PHP Version**: 8.4

## Quick Diagnostics

### System Status Check

```bash
# Check overall deployment status
make status ENV=staging

# Run health checks
make health ENV=production

# Check prerequisites
make check-prerequisites

# Validate configuration
make validate-config ENV=production
```

### Log Investigation

```bash
# View application logs
make logs ENV=staging

# Infrastructure logs
tail -f deployment/infrastructure/logs/ansible.log

# Docker container logs
docker-compose logs --tail=100 -f php
docker-compose logs --tail=100 -f nginx
docker-compose logs --tail=100 -f db
```

## Common Issues and Solutions

### 1. Setup and Prerequisites

#### Issue: Dependencies Not Installed

**Symptoms:**
```bash
command not found: docker
command not found: ansible-playbook
```

**Solution:**
```bash
# Run setup script
./setup.sh

# Or install manually
sudo apt-get install docker.io docker-compose ansible  # Ubuntu/Debian
brew install docker ansible                            # macOS
```

#### Issue: Docker Permission Denied

**Symptoms:**
```bash
Got permission denied while trying to connect to the Docker daemon socket
```

**Solution:**
```bash
# Add user to docker group
sudo usermod -aG docker $USER

# Log out and back in, or start new shell
newgrp docker

# Test Docker access
docker ps
```

#### Issue: SSH Key Authentication

**Symptoms:**
```bash
Permission denied (publickey)
```

**Solution:**
```bash
# Generate SSH keys if not exists
ssh-keygen -t ed25519 -C "deployment@michaelschiemer.de"

# Add public key to target server
ssh-copy-id user@your-server.com

# Or manually copy key
cat ~/.ssh/id_ed25519.pub
# Copy output to server's ~/.ssh/authorized_keys
```

### 2. Configuration Issues

#### Issue: Environment File Not Found

**Symptoms:**
```bash
ERROR: Environment file not found: .env.production
```

**Solution:**
```bash
# Create from template
cp applications/environments/.env.production.template applications/environments/.env.production

# Or initialize all configs
make init-config

# Edit the configuration
make edit-config ENV=production
```

#### Issue: Template Values Not Replaced

**Symptoms:**
```bash
ERROR: Environment file contains unfilled templates
```

**Solution:**
```bash
# Find unfilled templates (literal match, hence -F)
grep -F '*** REQUIRED' applications/environments/.env.production

# Replace with actual values
nano applications/environments/.env.production

# Generate secure passwords
openssl rand -base64 32 | tr -d "=+/" | cut -c1-25
```

#### Issue: SSL Certificate Problems

**Symptoms:**
```bash
SSL certificate error
nginx: [emerg] cannot load certificate
```

**Solutions:**
```bash
# For Let's Encrypt issues:
# check that the domain's DNS points to the server
dig +short michaelschiemer.de

# Verify SSL email is correct
grep SSL_EMAIL applications/environments/.env.production

# For self-signed certificates (development):
# regenerate certificates
./scripts/generate_ssl_certificates.sh

# Check certificate validity
openssl x509 -in /path/to/cert.pem -text -noout
```

### 3. Deployment Failures

#### Issue: Ansible Connection Failed

**Symptoms:**
```bash
UNREACHABLE! => {"msg": "Failed to connect to the host via ssh"}
```

**Solutions:**
```bash
# Test SSH connection manually
ssh user@your-server.com

# Check Ansible inventory
cat deployment/infrastructure/inventories/production/hosts.yml

# Test Ansible connectivity
ansible all -i deployment/infrastructure/inventories/production/hosts.yml -m ping

# Common fixes:
# 1. Update server IP address in inventory
# 2. Ensure SSH key is added to server
# 3. Check firewall allows SSH (port 22)
# 4. Verify username in inventory file
```

#### Issue: Docker Compose Build Failed

**Symptoms:**
```bash
ERROR: Failed to build custom-php-framework
```

**Solutions:**
```bash
# Check Docker Compose syntax
docker-compose config

# Rebuild without cache
docker-compose build --no-cache

# Check for disk space
df -h

# Clear Docker build cache
docker system prune -a

# Check specific service logs
docker-compose logs php
```

#### Issue: Database Connection Failed

**Symptoms:**
```bash
SQLSTATE[HY000] [2002] Connection refused
```

**Solutions:**
```bash
# Check database container status
docker-compose ps db

# Check database logs
docker-compose logs db

# Test database connection
docker-compose exec php php console.php db:ping

# Verify database credentials in .env file
grep DB_ applications/environments/.env.production

# Reset database container
docker-compose down
docker volume rm michaelschiemer_db_data   # WARNING: This removes all data
docker-compose up -d db
```

### 4. Application Issues

#### Issue: 502 Bad Gateway

**Symptoms:**
- Nginx shows 502 error
- Application not responding

**Solutions:**
```bash
# Check if PHP-FPM container is running
docker-compose ps php

# Check PHP-FPM logs
docker-compose logs php

# Restart PHP container
docker-compose restart php

# Check nginx upstream configuration
docker-compose exec nginx nginx -t

# Verify PHP-FPM is listening on correct port
docker-compose exec php netstat -ln | grep 9000
```

#### Issue: 404 Not Found

**Symptoms:**
- All routes return 404
- Static files not found

**Solutions:**
```bash
# Check nginx configuration
docker-compose exec nginx nginx -t

# Check document root
docker-compose exec php ls -la /var/www/html/public/

# Verify file permissions
docker-compose exec php chmod -R 755 /var/www/html/public
docker-compose exec php chown -R www-data:www-data /var/www/html

# Check nginx routing
docker-compose logs nginx | grep 404
```

#### Issue: PHP Fatal Errors

**Symptoms:**
```bash
PHP Fatal error: Class not found
```

**Solutions:**
```bash
# Check composer autoloader
docker-compose exec php composer dump-autoload -o

# Verify dependencies installed
docker-compose exec php composer install --no-dev --optimize-autoloader

# Check PHP configuration
docker-compose exec php php -i | grep -E "(memory_limit|max_execution_time)"

# Check application logs
docker-compose logs php | grep "FATAL"
```
|
||||
|
||||
### 5. Performance Issues

#### Issue: Slow Response Times

**Symptoms:**
- Pages load slowly
- Timeouts occur

**Solutions:**
```bash
# Check resource usage
docker stats

# Monitor PHP-FPM processes
docker-compose exec php ps aux | grep php-fpm

# Check database queries
docker-compose logs db | grep "Query_time"

# Optimize PHP-FPM configuration
# Edit: deployment/applications/dockerfiles/php-fpm/php-fpm.conf

# Enable OPcache
docker-compose exec php php -m | grep OPcache
```

#### Issue: High Memory Usage

**Symptoms:**
```bash
Fatal error: Allowed memory size exhausted
```

**Solutions:**
```bash
# Increase PHP memory limit
# Edit .env file: PHP_MEMORY_LIMIT=1024M

# Check memory usage
docker-compose exec php php -r "echo ini_get('memory_limit');"

# Monitor memory usage
docker stats --no-stream

# Restart containers with new limits
docker-compose down && docker-compose up -d
```

### 6. SSL and Security Issues

#### Issue: SSL Certificate Not Trusted

**Symptoms:**
- Browser shows security warning
- SSL certificate invalid

**Solutions:**
```bash
# Check certificate status
curl -I https://michaelschiemer.de

# Verify certificate chain
openssl s_client -connect michaelschiemer.de:443 -servername michaelschiemer.de

# For Let's Encrypt renewal issues
docker-compose exec nginx certbot renew --dry-run

# Check certificate expiration
echo | openssl s_client -connect michaelschiemer.de:443 2>/dev/null | openssl x509 -noout -dates
```

#### Issue: Firewall Blocking Connections

**Symptoms:**
- Connection timeout
- Cannot reach server

**Solutions:**
```bash
# Check firewall status on server
sudo ufw status

# Allow HTTP/HTTPS traffic
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
sudo ufw allow 22/tcp  # SSH

# Check if ports are listening
netstat -tlnp | grep :80
netstat -tlnp | grep :443
```

## Advanced Troubleshooting

### Debug Mode

Enable debug mode for detailed error information:

```bash
# Enable debug in .env file (non-production only)
APP_DEBUG=true

# Redeploy with debug enabled
./deploy.sh staging --force

# Check detailed logs
make logs ENV=staging | grep ERROR
```

### Verbose Deployment

Run deployment with verbose output:

```bash
# Verbose deployment
./deploy.sh staging --verbose

# Dry run with verbose output
./deploy.sh production --dry-run --verbose

# Ansible verbose mode
cd deployment/infrastructure
ansible-playbook -i inventories/staging/hosts.yml site.yml -vvv
```

### Database Debugging

```bash
# Check database status
make health ENV=staging

# Access database directly
docker-compose exec db mysql -u root -p

# Check database structure
docker-compose exec php php console.php db:status

# Run migrations
docker-compose exec php php console.php db:migrate

# Rollback migrations
docker-compose exec php php console.php db:rollback
```

### Container Debugging

```bash
# Enter container for debugging
docker-compose exec php /bin/bash
docker-compose exec nginx /bin/sh

# Check container resource usage
docker stats --no-stream

# Inspect container configuration
docker-compose config

# Check container networking
docker network ls
docker network inspect michaelschiemer_default
```

## Recovery Procedures

### Emergency Procedures

```bash
# Emergency stop all services
make emergency-stop ENV=production

# Emergency restart
make emergency-restart ENV=production

# Rollback deployment (with caution)
make rollback ENV=production
```

### Backup and Restore

```bash
# Create backup before troubleshooting
make backup ENV=production

# Restore from backup if needed
make restore ENV=production

# List available backups
ls -la ../storage/backups/
```

### Service Recovery

```bash
# Restart specific service
docker-compose restart nginx
docker-compose restart php
docker-compose restart db

# Rebuild and restart
docker-compose down
docker-compose up -d --build

# Full reset (removes data - use with caution)
docker-compose down -v
docker-compose up -d
```

## Getting Help

### Check Documentation

1. Review [Quick Start Guide](QUICKSTART.md)
2. Check [Environment Configuration](ENVIRONMENTS.md)
3. Examine deployment logs

### Collect Information

Before asking for help, collect:
- Error messages from logs
- Environment configuration (sanitized)
- System information (`docker --version`, `ansible --version`)
- Deployment command used
- Environment being deployed to

### Common Commands for Support

```bash
# System information
make version
make info

# Configuration status
make validate-config ENV=production

# Health checks
make health ENV=staging

# Recent logs
make logs ENV=production | tail -100
```

### Emergency Contacts

For critical production issues:
- Check system logs immediately
- Create backup if possible
- Document the issue and steps taken
- Consider rollback if service is down

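The collection checklist above can be scripted so support requests always start from the same data; a hedged sketch (the output filename and the exact command list are assumptions, adjust them to your setup):

```shell
#!/usr/bin/env bash
# Sketch: gather a sanitized support bundle before asking for help.
# The filename and included commands are illustrative, not part of this repo.
set -euo pipefail

out="support-bundle.txt"
{
  echo "== versions =="
  docker --version 2>&1 || true
  ansible --version 2>&1 | head -1 || true
  echo "== containers =="
  docker ps --format '{{.Names}}: {{.Status}}' 2>&1 || true
} > "$out"
echo "Wrote $out"
```

Remember to strip secrets and credentials from the bundle before sharing it.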
## Prevention

### Best Practices

1. **Always test with dry-run first**
   ```bash
   ./deploy.sh production --dry-run
   ```

2. **Use staging environment**
   ```bash
   make deploy-staging
   # Test thoroughly before production
   ```

3. **Regular backups**
   ```bash
   make backup ENV=production
   ```

4. **Monitor health**
   ```bash
   make health ENV=production
   ```

5. **Keep configuration secure**
   ```bash
   chmod 600 applications/environments/.env.*
   ```

### Monitoring Setup

Consider implementing:
- Automated health checks
- Log monitoring and alerting
- Performance monitoring
- SSL certificate expiration alerts
- Database backup verification

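The first item, automated health checks, can start as small as a cron-able script; a minimal sketch (the URL and the healthy/failed classification are assumptions, not part of this deployment):

```shell
#!/usr/bin/env bash
# Sketch: cron-able HTTP health check. The default URL is an assumed example.
set -euo pipefail

URL="${HEALTH_URL:-https://michaelschiemer.de}"

# Classify an HTTP status code: 2xx/3xx counts as healthy, anything else fails.
classify_status() {
  local code="$1"
  if [ "$code" -ge 200 ] && [ "$code" -lt 400 ]; then
    echo "OK"
  else
    echo "FAIL"
  fi
}

# Uncomment for live use (curl prints only the HTTP status code):
# code=$(curl -s -o /dev/null -w '%{http_code}' --max-time 10 "$URL")
# [ "$(classify_status "$code")" = "OK" ] || echo "ALERT: $URL returned $code"
```

Wired into cron every few minutes and combined with an alerting command of your choice, this covers the basic "is the site up" case until a full monitoring stack is in place.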

This troubleshooting guide should help you resolve most common deployment issues. Remember to always test changes in a staging environment before applying them to production!

22
deployment/gitea-runner/.env.example
Normal file
@@ -0,0 +1,22 @@

# Gitea Actions Runner Configuration

# Gitea Instance URL (must be accessible from runner)
GITEA_INSTANCE_URL=https://git.michaelschiemer.de

# Runner Registration Token (get from Gitea: Admin > Actions > Runners)
# To generate: Gitea UI > Site Administration > Actions > Runners > Create New Runner
GITEA_RUNNER_REGISTRATION_TOKEN=

# Runner Name (appears in Gitea UI)
GITEA_RUNNER_NAME=dev-runner-01

# Runner Labels (comma-separated)
# Format: label:image
# Example: ubuntu-latest:docker://node:16-bullseye
GITEA_RUNNER_LABELS=ubuntu-latest:docker://node:16-bullseye,ubuntu-22.04:docker://node:16-bullseye,debian-latest:docker://debian:bullseye

# Optional: Custom Docker registry for job images
# DOCKER_REGISTRY_MIRROR=https://registry.michaelschiemer.de

# Optional: Runner capacity (max concurrent jobs)
# GITEA_RUNNER_CAPACITY=1

8
deployment/gitea-runner/.gitignore
vendored
Normal file
@@ -0,0 +1,8 @@

# Environment configuration
.env

# Runner data
data/

# Logs
*.log

693
deployment/gitea-runner/README.md
Normal file
@@ -0,0 +1,693 @@

# Gitea Actions Runner (Development Machine)

Self-hosted Gitea Actions runner for executing CI/CD workflows on the development machine.

## Overview

This setup provides a Gitea Actions runner that executes CI/CD workflows triggered by repository events in Gitea. The runner runs in Docker and uses Docker-in-Docker (DinD) for isolated job execution.

**Key Features**:
- **Self-Hosted**: Runs on development machine with full control
- **Docker-Based**: Isolated execution environment for jobs
- **Docker-in-Docker**: Jobs run in separate containers for security
- **Multiple Labels**: Support for different workflow environments
- **Auto-Restart**: Automatically restarts on failure
- **Secure**: Isolated network and resource limits

## Prerequisites

- Docker and Docker Compose installed
- Gitea instance running (Stack 2: Gitea)
- Admin access to Gitea for runner registration
- Network connectivity to Gitea instance

## Directory Structure

```
gitea-runner/
├── docker-compose.yml   # Service definitions
├── .env.example         # Environment template
├── .env                 # Environment configuration (create from .env.example)
├── config.yaml          # Runner configuration
├── register.sh          # Registration script
├── unregister.sh        # Unregistration script
├── data/                # Runner data (auto-created)
│   └── .runner          # Registration info (auto-generated)
└── README.md            # This file
```

## Quick Start

### 1. Create Environment File

```bash
cd deployment/gitea-runner
cp .env.example .env
```

### 2. Get Registration Token

1. Go to Gitea admin panel: https://git.michaelschiemer.de/admin
2. Navigate to: **Site Administration > Actions > Runners**
3. Click **"Create New Runner"**
4. Copy the registration token
5. Add token to `.env` file:

```bash
nano .env
# Set GITEA_RUNNER_REGISTRATION_TOKEN=<your-token>
```

### 3. Configure Environment Variables

Edit `.env` and configure:

```bash
# Gitea Instance URL
GITEA_INSTANCE_URL=https://git.michaelschiemer.de

# Registration Token (from step 2)
GITEA_RUNNER_REGISTRATION_TOKEN=<your-token>

# Runner Name (appears in Gitea UI)
GITEA_RUNNER_NAME=dev-runner-01

# Runner Labels (what environments this runner supports)
GITEA_RUNNER_LABELS=ubuntu-latest:docker://node:16-bullseye,ubuntu-22.04:docker://node:16-bullseye
```

### 4. Register Runner

Run the registration script:

```bash
./register.sh
```

This will:
- Start the runner services
- Register the runner with Gitea
- Display runner status

### 5. Verify Registration

Check runner status in Gitea:
- Go to: https://git.michaelschiemer.de/admin/actions/runners
- You should see your runner listed as "Idle" or "Active"

## Configuration

### Runner Labels

Labels define what workflow environments the runner supports. Format: `label:image`

**Common Labels**:
```bash
# Ubuntu with Node.js 16
ubuntu-latest:docker://node:16-bullseye

# Ubuntu 22.04
ubuntu-22.04:docker://node:16-bullseye

# Debian
debian-latest:docker://debian:bullseye

# Custom images from private registry
ubuntu-php:docker://registry.michaelschiemer.de/php:8.3-cli
```

**Example Workflow Using Labels**:
```yaml
# .gitea/workflows/test.yml
name: Test
on: [push]

jobs:
  test:
    runs-on: ubuntu-latest  # Uses runner with this label
    steps:
      - uses: actions/checkout@v3
      - run: npm install
      - run: npm test
```

### Runner Capacity

Control how many jobs can run concurrently:

**In `.env`**:
```bash
GITEA_RUNNER_CAPACITY=1  # Max concurrent jobs
```

**In `config.yaml`**:
```yaml
runner:
  capacity: 1  # Max concurrent jobs
  timeout: 3h  # Job timeout
```

### Resource Limits

Configure resource limits in `config.yaml`:

```yaml
container:
  resources:
    memory_limit: 2147483648  # 2GB
    cpu_quota: 100000         # 1 CPU
```

## Usage

### Start Runner

```bash
# Start services
docker compose up -d

# View logs
docker compose logs -f gitea-runner
```

### Stop Runner

```bash
docker compose down
```

### Restart Runner

```bash
docker compose restart gitea-runner
```

### View Logs

```bash
# Follow logs
docker compose logs -f gitea-runner

# View last 100 lines
docker compose logs --tail=100 gitea-runner

# View Docker-in-Docker logs
docker compose logs -f docker-dind
```

### Check Runner Status

```bash
# Check container status
docker compose ps

# View runner info
docker compose exec gitea-runner cat /data/.runner
```

### Unregister Runner

```bash
./unregister.sh
```

This will:
- Stop the runner services
- Remove registration file
- Optionally remove runner data

**Note**: You may need to manually delete the runner from the Gitea UI:
- Go to: https://git.michaelschiemer.de/admin/actions/runners
- Find the runner and click "Delete"

## Workflow Examples

### Basic Node.js Test Workflow

Create `.gitea/workflows/test.yml` in your repository:

```yaml
name: Test
on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v3

      - name: Install dependencies
        run: npm install

      - name: Run tests
        run: npm test
```

### PHP Test Workflow

```yaml
name: PHP Tests
on: [push]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v3

      - name: Install PHP dependencies
        run: |
          apt-get update
          apt-get install -y php8.3-cli php8.3-mbstring
          php -r "copy('https://getcomposer.org/installer', 'composer-setup.php');"
          php composer-setup.php --install-dir=/usr/local/bin --filename=composer
          composer install

      - name: Run tests
        run: ./vendor/bin/pest
```

### Build and Deploy Workflow

```yaml
name: Deploy
on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v3

      - name: Build Docker image
        run: |
          docker build -t registry.michaelschiemer.de/app:latest .

      - name: Push to registry
        run: |
          echo "${{ secrets.REGISTRY_PASSWORD }}" | docker login registry.michaelschiemer.de -u admin --password-stdin
          docker push registry.michaelschiemer.de/app:latest

      - name: Deploy via SSH
        run: |
          # Add SSH deployment commands here
```

## Troubleshooting

### Runner Not Appearing in Gitea

**Check registration**:
```bash
# Verify registration file exists
ls -la data/.runner

# Check runner logs
docker compose logs gitea-runner
```

**Re-register**:
```bash
./unregister.sh
./register.sh
```

### Jobs Not Starting

**Check runner status**:
```bash
# View logs
docker compose logs -f gitea-runner

# Check if runner is idle
# In Gitea: Admin > Actions > Runners
```

**Common Issues**:
- Runner is offline: Restart runner (`docker compose restart gitea-runner`)
- No matching labels: Verify workflow `runs-on` matches runner labels
- Capacity reached: Increase `GITEA_RUNNER_CAPACITY` or wait for jobs to finish

### Docker-in-Docker Issues

**Check DinD container**:
```bash
# View DinD logs
docker compose logs docker-dind

# Check DinD is running
docker compose ps docker-dind
```

**Restart DinD**:
```bash
docker compose restart docker-dind
```

### Job Timeout

Jobs timing out after 3 hours? Increase the timeout in `config.yaml`:

```yaml
runner:
  timeout: 6h  # Increase to 6 hours
```

### Network Issues

**Cannot reach Gitea**:
```bash
# Test connectivity from runner
docker compose exec gitea-runner wget -O- https://git.michaelschiemer.de

# Check DNS resolution
docker compose exec gitea-runner nslookup git.michaelschiemer.de
```

### Disk Space Issues

**Clean up old job data**:
```bash
# Remove old workspace data
docker compose exec gitea-runner rm -rf /tmp/gitea-runner/*

# Clean up Docker images
docker image prune -a -f
```

## Security Considerations

### 1. Runner Security

- Runner runs with access to the Docker socket (required for jobs)
- Jobs execute in isolated containers via Docker-in-Docker
- Network is isolated from other Docker networks
- Resource limits prevent resource exhaustion

### 2. Registration Token

- The registration token has admin privileges
- Store the token securely (in the `.env` file, not in git)
- The token is only used during registration
- After registration, the runner uses generated credentials

### 3. Job Isolation

- Each job runs in a separate container
- Containers are destroyed after job completion
- Docker-in-Docker provides an additional isolation layer
- Valid volume mounts are restricted in `config.yaml`

### 4. Secrets Management

**In Gitea**:
- Store secrets in repository settings: Settings > Secrets
- Access in workflows via `${{ secrets.SECRET_NAME }}`
- Secrets are masked in logs

**Example**:
```yaml
steps:
  - name: Deploy
    run: |
      echo "${{ secrets.DEPLOY_KEY }}" > deploy_key
      chmod 600 deploy_key
      ssh -i deploy_key user@server "deploy.sh"
```

### 5. Network Security

- The runner network is isolated
- Only the runner and DinD containers share the network
- No external access to runner management

## Maintenance

### Daily Tasks

- Monitor runner logs for errors
- Check disk space usage
- Verify runner appears as "Idle" in Gitea when not running jobs

### Weekly Tasks

- Review completed jobs in Gitea
- Clean up old Docker images: `docker image prune -a`
- Check runner resource usage

### Monthly Tasks

- Update runner image: `docker compose pull && docker compose up -d`
- Review and update runner labels
- Audit workflow performance

### Update Runner

```bash
# Pull latest image
docker compose pull

# Restart with new image
docker compose up -d

# Verify update
docker compose logs -f gitea-runner
```

## Performance Optimization

### Reduce Job Startup Time

**Cache dependencies** in workflows:

```yaml
- name: Cache dependencies
  uses: actions/cache@v3
  with:
    path: ~/.npm
    key: ${{ runner.os }}-npm-${{ hashFiles('**/package-lock.json') }}
```

### Optimize Docker Builds

**Use Docker layer caching**:

```yaml
- name: Build with cache
  run: |
    docker build \
      --cache-from registry.michaelschiemer.de/app:cache \
      --tag registry.michaelschiemer.de/app:latest \
      .
```

### Increase Runner Capacity

For more concurrent jobs:

```bash
# In .env
GITEA_RUNNER_CAPACITY=2  # Allow 2 concurrent jobs
```

**Note**: Ensure development machine has sufficient resources (CPU, RAM, disk).

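Before raising the capacity, it can help to sanity-check the machine first; a rough sizing sketch (the 2 GB-per-job budget is an assumed rule of thumb, tune it to your actual workloads):

```shell
#!/usr/bin/env bash
# Sketch: estimate a safe GITEA_RUNNER_CAPACITY from available RAM.
# The per-job memory budget (2048 MB) is an assumption, not a measured value.
set -euo pipefail

# Integer division: how many jobs fit into the given memory budget?
jobs_for_memory() {
  local total_mb="$1" per_job_mb="$2"
  echo $(( total_mb / per_job_mb ))
}

total_mb=$(free -m 2>/dev/null | awk '/^Mem:/ {print $2}' || echo 0)
echo "CPUs: $(nproc 2>/dev/null || echo '?'), RAM: ${total_mb:-0} MB"
echo "Suggested capacity at 2 GB/job: $(jobs_for_memory "${total_mb:-0}" 2048)"
```

CPU-bound jobs may need a lower number than the memory estimate suggests; start conservative and watch `docker stats` under load.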
## Integration with Deployment Stacks

### Stack 2: Gitea Integration

- Runner connects to Gitea for job fetching
- Uses the Gitea API for workflow definitions
- Reports job status back to Gitea

### Stack 3: Docker Registry Integration

- Push built images to the private registry
- Pull base images from the registry for jobs
- Use the registry for caching layers

### Stack 4: Application Deployment

- Build and test application code
- Deploy to the application stack via SSH
- Trigger stack updates via Ansible

### Stack 5: Database Migrations

- Run database migrations in workflows
- Test database changes before deployment
- Back up the database before migrations

### Stack 6: Monitoring Integration

- Monitor runner resource usage via cAdvisor
- Track job execution metrics in Prometheus
- Alert on runner failures via Grafana

## Advanced Configuration

### Custom Docker Registry for Jobs

Use the private registry for job images:

```bash
# In .env
DOCKER_REGISTRY_MIRROR=https://registry.michaelschiemer.de
```

**In workflows**:
```yaml
jobs:
  test:
    runs-on: ubuntu-latest
    container:
      image: registry.michaelschiemer.de/php:8.3-cli
      credentials:
        username: admin
        password: ${{ secrets.REGISTRY_PASSWORD }}
```

### Multiple Runners

Run multiple runners for different purposes:

```bash
# Production runner
cd deployment/gitea-runner-prod
cp ../gitea-runner/.env.example .env
# Set GITEA_RUNNER_NAME=prod-runner
# Set different labels
./register.sh

# Staging runner
cd deployment/gitea-runner-staging
cp ../gitea-runner/.env.example .env
# Set GITEA_RUNNER_NAME=staging-runner
./register.sh
```

### Custom Job Container Options

In `config.yaml`:

```yaml
container:
  # Custom Docker options
  options: "--dns 8.8.8.8 --add-host git.michaelschiemer.de:94.16.110.151"

  # Custom network mode
  network: host

  # Enable privileged mode (use cautiously)
  privileged: false
```

## Monitoring and Logging

### View Runner Metrics

```bash
# Container resource usage
docker stats gitea-runner

# Detailed metrics
docker compose exec gitea-runner cat /data/metrics
```

### Centralized Logging

Send logs to the monitoring stack:

```yaml
# In docker-compose.yml
services:
  gitea-runner:
    logging:
      driver: "json-file"
      options:
        max-size: "10m"
        max-file: "3"
```

### Health Checks

```bash
# Check runner health
docker compose exec gitea-runner ps aux | grep act_runner

# Check job queue
# In Gitea: Actions > Jobs
```

## Backup and Recovery

### Backup Runner Configuration

```bash
# Backup registration and config
tar czf gitea-runner-backup-$(date +%Y%m%d).tar.gz \
  .env config.yaml data/.runner
```

### Restore Runner

```bash
# Extract backup
tar xzf gitea-runner-backup-YYYYMMDD.tar.gz

# Restart runner
docker compose up -d
```

**Note**: If the registration token changed, re-register:
```bash
./unregister.sh
./register.sh
```

## Support

### Documentation

- Gitea Actions: https://docs.gitea.io/en-us/actions/overview/
- Act Runner: https://gitea.com/gitea/act_runner
- GitHub Actions (compatible): https://docs.github.com/en/actions

### Logs

```bash
# Runner logs
docker compose logs -f gitea-runner

# All logs
docker compose logs -f

# Export logs
docker compose logs > runner-logs-$(date +%Y%m%d).log
```

### Health Check

```bash
# Check all components
docker compose ps

# Test runner connection to Gitea
docker compose exec gitea-runner wget -O- https://git.michaelschiemer.de/api/v1/version
```

---

**Setup Status**: ✅ Ready for registration

**Next Steps**:
1. Copy `.env.example` to `.env`
2. Get registration token from Gitea
3. Run `./register.sh`
4. Verify runner appears in Gitea UI
5. Create a test workflow to verify functionality

76
deployment/gitea-runner/config.yaml
Normal file
@@ -0,0 +1,76 @@

# Gitea Actions Runner Configuration
# https://docs.gitea.io/en-us/actions/act-runner/

log:
  level: info

runner:
  # File to store runner registration information
  file: /data/.runner

  # Maximum number of concurrent jobs
  capacity: 1

  # Timeout for a single job
  timeout: 3h

  # Whether to enable debug mode (skip SSL verification for setup)
  insecure: true

  # Timeout for fetching a job from Gitea
  fetch_timeout: 5s

  # Interval for fetching jobs
  fetch_interval: 2s

cache:
  # Enable cache server
  enabled: true

  # Cache server directory
  dir: /data/cache

  # Host address for cache server
  host: ""

  # Port for cache server
  port: 0

container:
  # Docker network mode for job containers
  network: bridge

  # Privileged mode for job containers
  privileged: false

  # Container options
  options: ""

  # Working directory in container
  workdir_parent: /workspace

  # Force pull images before running jobs
  force_pull: false

  # Default image if not specified in workflow
  default_image: node:16-bullseye

  # Docker host
  docker_host: ""

  # Valid volume paths that can be mounted
  valid_volumes:
    - /workspace
    - /data

  # Resource limits
  resources:
    memory_limit: 0
    memory_swap_limit: 0
    cpu_quota: 0
    cpu_period: 0
    cpu_set: ""

host:
  # Working directory on host
  workdir_parent: /tmp/gitea-runner

46
deployment/gitea-runner/docker-compose.yml
Normal file
@@ -0,0 +1,46 @@

version: '3.8'

services:
  gitea-runner:
    image: gitea/act_runner:latest
    container_name: gitea-runner
    restart: unless-stopped
    volumes:
      - ./data:/data
      - /var/run/docker.sock:/var/run/docker.sock
      - ./config.yaml:/config.yaml:ro
    environment:
      - GITEA_INSTANCE_URL=${GITEA_INSTANCE_URL}
      - GITEA_RUNNER_REGISTRATION_TOKEN=${GITEA_RUNNER_REGISTRATION_TOKEN}
      - GITEA_RUNNER_NAME=${GITEA_RUNNER_NAME:-dev-runner}
      - GITEA_RUNNER_LABELS=${GITEA_RUNNER_LABELS:-ubuntu-latest:docker://node:16-bullseye,ubuntu-22.04:docker://node:16-bullseye}
    networks:
      - gitea-runner
    depends_on:
      - docker-dind

  # Docker-in-Docker for isolated job execution
  docker-dind:
    image: docker:dind
    container_name: gitea-runner-dind
    restart: unless-stopped
    privileged: true
    environment:
      - DOCKER_TLS_CERTDIR=/certs
    volumes:
      - docker-certs:/certs
      - docker-data:/var/lib/docker
    networks:
      - gitea-runner
    command: ["dockerd", "--host=unix:///var/run/docker.sock", "--host=tcp://0.0.0.0:2376", "--tlsverify"]

networks:
  gitea-runner:
    name: gitea-runner
    driver: bridge

volumes:
  docker-certs:
    name: gitea-runner-certs
  docker-data:
    name: gitea-runner-docker-data

deployment/gitea-runner/register.sh (117 lines, Executable file)
@@ -0,0 +1,117 @@
#!/usr/bin/env bash
set -euo pipefail

# Gitea Actions Runner Registration Script
# This script registers the runner with the Gitea instance

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
cd "$SCRIPT_DIR"

# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
NC='\033[0m' # No Color

echo -e "${GREEN}Gitea Actions Runner Registration${NC}"
echo "======================================"

# Check if .env exists
if [ ! -f .env ]; then
    echo -e "${RED}Error: .env file not found${NC}"
    echo "Please copy .env.example to .env and configure it"
    echo ""
    echo "  cp .env.example .env"
    echo "  nano .env"
    echo ""
    exit 1
fi

# Load environment variables
set -a
source .env
set +a

# Validate required variables
if [ -z "${GITEA_INSTANCE_URL:-}" ]; then
    echo -e "${RED}Error: GITEA_INSTANCE_URL is not set in .env${NC}"
    exit 1
fi

if [ -z "${GITEA_RUNNER_REGISTRATION_TOKEN:-}" ]; then
    echo -e "${RED}Error: GITEA_RUNNER_REGISTRATION_TOKEN is not set in .env${NC}"
    echo ""
    echo "To get a registration token:"
    echo "1. Go to Gitea: ${GITEA_INSTANCE_URL}/admin/actions/runners"
    echo "2. Click 'Create New Runner'"
    echo "3. Copy the registration token"
    echo "4. Add it to the .env file"
    echo ""
    exit 1
fi

echo -e "${YELLOW}Configuration:${NC}"
echo "  Gitea URL:   ${GITEA_INSTANCE_URL}"
echo "  Runner Name: ${GITEA_RUNNER_NAME:-dev-runner}"
echo "  Labels:      ${GITEA_RUNNER_LABELS:-ubuntu-latest:docker://node:16-bullseye}"
echo ""

# Check if runner is already registered
if [ -f ./data/.runner ]; then
    echo -e "${YELLOW}Warning: Runner appears to be already registered${NC}"
    echo "Registration file exists at: ./data/.runner"
    echo ""
    read -p "Do you want to re-register? This will unregister the existing runner. (y/N): " -n 1 -r
    echo ""
    if [[ ! $REPLY =~ ^[Yy]$ ]]; then
        echo "Registration cancelled"
        exit 0
    fi

    echo "Removing existing registration..."
    rm -f ./data/.runner
fi

# Create data directory if it doesn't exist
mkdir -p ./data

# Start services
echo ""
echo -e "${YELLOW}Starting services...${NC}"
docker compose up -d

# Wait for services to be ready
echo "Waiting for services to start..."
sleep 5

# Register runner. The command is used directly as the if-condition: under
# `set -e`, a separate `$?` check after the command would never be reached
# on failure.
echo ""
echo -e "${YELLOW}Registering runner...${NC}"
if docker compose exec -T gitea-runner act_runner register \
    --instance "${GITEA_INSTANCE_URL}" \
    --token "${GITEA_RUNNER_REGISTRATION_TOKEN}" \
    --name "${GITEA_RUNNER_NAME:-dev-runner}" \
    --labels "${GITEA_RUNNER_LABELS:-ubuntu-latest:docker://node:16-bullseye}"; then
    echo ""
    echo -e "${GREEN}✓ Runner registered successfully!${NC}"
    echo ""
    echo "Runner is now active and will start accepting jobs."
    echo ""
    echo "Useful commands:"
    echo "  docker compose logs -f gitea-runner    # View runner logs"
    echo "  docker compose restart gitea-runner    # Restart runner"
    echo "  docker compose down                    # Stop runner"
    echo ""
    echo "Check runner status in Gitea:"
    echo "  ${GITEA_INSTANCE_URL}/admin/actions/runners"
    echo ""
else
    echo ""
    echo -e "${RED}✗ Registration failed${NC}"
    echo "Check the logs for details:"
    echo "  docker compose logs gitea-runner"
    echo ""
    exit 1
fi
deployment/gitea-runner/unregister.sh (59 lines, Executable file)
@@ -0,0 +1,59 @@
#!/usr/bin/env bash
set -euo pipefail

# Gitea Actions Runner Unregistration Script
# This script unregisters the runner from the Gitea instance

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
cd "$SCRIPT_DIR"

# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
NC='\033[0m' # No Color

echo -e "${YELLOW}Gitea Actions Runner Unregistration${NC}"
echo "========================================"

# Check if runner is registered
if [ ! -f ./data/.runner ]; then
    echo -e "${YELLOW}Warning: No registration file found${NC}"
    echo "Runner may not be registered"
    exit 0
fi

echo ""
read -p "Are you sure you want to unregister this runner? (y/N): " -n 1 -r
echo ""
if [[ ! $REPLY =~ ^[Yy]$ ]]; then
    echo "Unregistration cancelled"
    exit 0
fi

# Stop services
echo ""
echo -e "${YELLOW}Stopping services...${NC}"
docker compose down

# Remove registration file
echo "Removing registration file..."
rm -f ./data/.runner

# Remove runner data
echo ""
read -p "Do you want to remove all runner data? (y/N): " -n 1 -r
echo ""
if [[ $REPLY =~ ^[Yy]$ ]]; then
    echo "Removing runner data..."
    rm -rf ./data/*
    echo -e "${GREEN}✓ Runner data removed${NC}"
fi

echo ""
echo -e "${GREEN}✓ Runner unregistered successfully!${NC}"
echo ""
echo "Note: You may need to manually remove the runner from the Gitea UI:"
echo "  Go to: Site Administration > Actions > Runners"
echo "  Find the runner and click 'Delete'"
echo ""
deployment/infrastructure/.gitignore (2 lines, vendored)
@@ -1,2 +0,0 @@
# Ignore local Ansible vault pass in infrastructure directory
.vault_pass
@@ -1,554 +0,0 @@
# Production Deployment Analysis & Fix Strategy

**Date**: 2025-10-27
**Status**: CRITICAL - Production website returning HTTP 500 errors
**Root Cause**: Database connection configuration error (DB_PORT mismatch)

---

## 1. Complete Deployment Flow Analysis

### Deployment Architecture

The project uses a **release-based deployment pattern** with shared configuration:

```
/home/deploy/michaelschiemer/
├── releases/
│   ├── 1761566515/                  # Current release (timestamped)
│   ├── 1761565432/                  # Previous releases
│   └── ...
├── shared/
│   └── .env.production              # Shared configuration file
└── current -> releases/1761566515/  # Symlink to active release
```

**Key Characteristics**:
- **Releases Directory**: Each deployment creates a new timestamped release
- **Shared Directory**: Configuration files persist across deployments
- **Current Symlink**: Points to the active release
- **Symlink Chain**: `current/.env.production` → `shared/.env.production` → used by the application
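The layout and symlink chain above can be sketched with plain shell. This is a minimal illustration, not the deployment code itself; the temp directory stands in for `/home/deploy/michaelschiemer`, and the release IDs are reused from the tree:

```shell
#!/usr/bin/env sh
# Sketch of the release/shared/current layout described above.
set -eu

base=$(mktemp -d)
mkdir -p "$base/releases/1761566515" "$base/shared"
echo "DB_PORT=5432" > "$base/shared/.env.production"

# Each release points its .env.production at the shared copy.
ln -s ../../shared/.env.production "$base/releases/1761566515/.env.production"

# 'current' is switched with ln -sfn (force, do not follow existing link).
ln -sfn releases/1761566515 "$base/current"

readlink "$base/current"             # the active release
cat "$base/current/.env.production"  # resolves through to the shared file
```

Because `current` and the per-release `.env.production` are both symlinks, editing `shared/.env.production` takes effect for every release at once.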
### .env File Sources (3 Different Files Identified)

#### 1. Root Directory: `/home/michael/dev/michaelschiemer/.env.production`
- **Size**: 2.9K
- **Checksum**: 9f33068713432c1dc4008724dc6923b0
- **DB_PORT**: 5432 (CORRECT for PostgreSQL)
- **DB_USERNAME**: mdb_user (with underscore)
- **DB_PASSWORD**: Qo2KNgGqeYksEhKr57pgugakxlothn8J
- **Purpose**: Framework default configuration
- **Status**: CORRECT database configuration

#### 2. Deployment Directory: `/home/michael/dev/michaelschiemer/deployment/applications/environments/.env.production`
- **Size**: 4.3K
- **Checksum**: b516bf86beed813df03a30f655687b72
- **DB_PORT**: 5432 (CORRECT for PostgreSQL)
- **DB_USERNAME**: mdb_user (with underscore)
- **DB_PASSWORD**: Qo2KNgGqeYksEhKr57pgugakxlothn8J
- **Purpose**: Application-specific production configuration
- **Status**: CORRECT and MORE COMPLETE (includes Redis, Queue, Mail, Monitoring configs)

#### 3. Production Server: `/home/deploy/michaelschiemer/shared/.env.production`
- **Size**: 3.0K (modified Oct 26 20:56)
- **Line 15**: `DB_PORT=3306` (WRONG - MySQL port instead of PostgreSQL)
- **Line 67**: `DB_PORT=` (duplicate empty entry)
- **DB_USERNAME**: mdb-user (with hyphen - likely wrong)
- **DB_PASSWORD**: StartSimple2024! (different from local configs)
- **Status**: CORRUPTED - Wrong database configuration causing HTTP 500 errors
### Deployment Playbook Flow

**File**: `/home/michael/dev/michaelschiemer/deployment/infrastructure/playbooks/deploy-rsync-based.yml`

**Critical Configuration**:
```yaml
local_project_path: "{{ playbook_dir }}/../../.."  # 3 dirs up = /home/michael/dev/michaelschiemer
shared_files:
  - .env.production        # Marked as SHARED file
rsync_excludes:
  - .env
  - .env.local
  - .env.development
```

**Deployment Steps**:
1. **Rsync files** from `{{ local_project_path }}` (framework root) to the release directory
   - Excludes: `.env`, `.env.local`, `.env.development`
   - Includes: `.env.production` from the root directory
2. **Create release directory**: `/home/deploy/michaelschiemer/releases/{{ timestamp }}`
3. **Copy files** to the release directory
4. **Create symlinks**:
   - `release/.env.production` → `../../shared/.env.production`
   - `release/.env` → `../../shared/.env.production`
5. **Update current** symlink → latest release
6. **Restart containers** via docker-compose

**CRITICAL ISSUE IDENTIFIED**:
The playbook does NOT have a task to initially copy `.env.production` to `shared/.env.production`. It only creates symlinks, assuming the file already exists. This means:
- Initial setup requires a MANUAL copy of `.env.production` to `shared/`
- Updates to `.env.production` require a MANUAL sync to the production server
- The rsync'd `.env.production` in the release directory is IGNORED (the symlink overrides it)
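The "symlink overrides it" behavior in step 4 can be reproduced with plain shell. A minimal sketch with illustrative paths, not the playbook's actual tasks:

```shell
#!/usr/bin/env sh
# Illustration: the .env.production that rsync copies into the release
# directory is replaced by a symlink, so the shared copy always wins.
set -eu

base=$(mktemp -d)
mkdir -p "$base/shared" "$base/releases/1761566515"
echo "DB_PORT=5432" > "$base/shared/.env.production"

# rsync delivered a (possibly different) copy into the release...
echo "DB_PORT=9999" > "$base/releases/1761566515/.env.production"

# ...but the playbook's symlink step force-replaces it (ln -sfn).
ln -sfn ../../shared/.env.production "$base/releases/1761566515/.env.production"

cat "$base/releases/1761566515/.env.production"  # shows the shared value
```

This is why fixing the local `.env.production` alone never reaches production: only the file in `shared/` matters.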
---
## 2. Production Server .env Status

### Current State (BROKEN)

```bash
# /home/deploy/michaelschiemer/shared/.env.production

Line 15: DB_PORT=3306            # WRONG - MySQL port (should be 5432 for PostgreSQL)
Line 67: DB_PORT=                # Duplicate empty entry

DB_USERNAME=mdb-user             # Wrong format (should be mdb_user with underscore)
DB_PASSWORD=StartSimple2024!     # Wrong password (doesn't match local configs)
```

### Container Status

```
CONTAINER      STATUS                       ISSUE
php            Up 27 minutes (healthy)      -
db             Up 40 minutes (healthy)      PostgreSQL running on port 5432
redis          Up 40 minutes (healthy)      -
web            Up 40 minutes (UNHEALTHY)    Nginx cannot connect to PHP due to DB error
queue-worker   Restarting (1) 4s ago        PHP crashing due to DB connection error
```

### Error Pattern

- **HTTP 500** on all requests (/, /impressum, etc.)
- **Root Cause**: PHP application cannot connect to the database because:
  1. `DB_PORT=3306` (MySQL) instead of `5432` (PostgreSQL)
  2. Wrong username format (`mdb-user` vs `mdb_user`)
  3. Wrong password
- **Impact**: All PHP processes fail to initialize → Nginx returns 500

---
## 3. Deployment Command Documentation

### WORKING Commands (Current Playbook)

#### Deploy via Ansible Playbook
```bash
cd /home/michael/dev/michaelschiemer/deployment/infrastructure

# Full production deployment
ansible-playbook \
  -i inventories/production/hosts.yml \
  playbooks/deploy-rsync-based.yml \
  --vault-password-file .vault_pass

# With specific variables
ansible-playbook \
  -i inventories/production/hosts.yml \
  playbooks/deploy-rsync-based.yml \
  --vault-password-file .vault_pass \
  -e "deployment_branch=main"
```

#### Check Production Status
```bash
# Check containers
ansible web_servers \
  -i inventories/production/hosts.yml \
  -m shell -a "docker ps -a" \
  --vault-password-file .vault_pass

# Check .env configuration
ansible web_servers \
  -i inventories/production/hosts.yml \
  -m shell -a "cat /home/deploy/michaelschiemer/shared/.env.production" \
  --vault-password-file .vault_pass

# Check application logs
ansible web_servers \
  -i inventories/production/hosts.yml \
  -m shell -a "docker logs web --tail 50" \
  --vault-password-file .vault_pass
```
### COMMANDS TO CREATE (User Requirements)

#### 1. Simple Manual Deploy Script
```bash
#!/bin/bash
# File: /home/michael/dev/michaelschiemer/deployment/infrastructure/scripts/deploy.sh

set -e

cd "$(dirname "$0")/.."

echo "🚀 Deploying to production..."

ansible-playbook \
  -i inventories/production/hosts.yml \
  playbooks/deploy-rsync-based.yml \
  --vault-password-file .vault_pass

echo "✅ Deployment complete!"
echo "🔍 Check status: docker ps"
```

#### 2. .env Update Script
```bash
#!/bin/bash
# File: /home/michael/dev/michaelschiemer/deployment/infrastructure/scripts/update-env.sh

set -e

cd "$(dirname "$0")/../.."

SOURCE_ENV="deployment/applications/environments/.env.production"
REMOTE_PATH="/home/deploy/michaelschiemer/shared/.env.production"

if [[ ! -f "$SOURCE_ENV" ]]; then
  echo "❌ Source .env.production not found at: $SOURCE_ENV"
  exit 1
fi

echo "📤 Uploading .env.production to production server..."

ansible web_servers \
  -i deployment/infrastructure/inventories/production/hosts.yml \
  -m copy \
  -a "src=$SOURCE_ENV dest=$REMOTE_PATH mode=0644" \
  --vault-password-file deployment/infrastructure/.vault_pass

echo "🔄 Restarting containers..."

ansible web_servers \
  -i deployment/infrastructure/inventories/production/hosts.yml \
  -m shell \
  -a "cd /home/deploy/michaelschiemer/current && docker-compose restart php web queue-worker" \
  --vault-password-file deployment/infrastructure/.vault_pass

echo "✅ .env.production updated and containers restarted!"
```

#### 3. Quick Production Sync
```bash
#!/bin/bash
# File: /home/michael/dev/michaelschiemer/deployment/infrastructure/scripts/quick-sync.sh

set -e

cd "$(dirname "$0")/../.."

# Sync code changes (no .env update)
rsync -avz \
  --exclude '.env' \
  --exclude '.env.local' \
  --exclude 'node_modules/' \
  --exclude '.git/' \
  ./ deploy@94.16.110.151:/home/deploy/michaelschiemer/current/

# Restart containers
ansible web_servers \
  -i deployment/infrastructure/inventories/production/hosts.yml \
  -m shell \
  -a "cd /home/deploy/michaelschiemer/current && docker-compose restart php web" \
  --vault-password-file deployment/infrastructure/.vault_pass

echo "✅ Quick sync complete!"
```
### SCRIPTS TO REMOVE (Unused/Deprecated)

1. **`/home/michael/dev/michaelschiemer/deploy.sh`** (if it exists in the root)
   - Reason: Conflicts with the playbook-based deployment

2. **`/home/michael/dev/michaelschiemer/.env.local`** (if it exists)
   - Reason: Not used in production, causes confusion

3. **Duplicate .env files** in the root:
   - Keep: `.env.production` (source of truth for framework defaults)
   - Remove: `.env.backup.*`, `.env.old`, etc.

---
## 4. Fix Strategy (Step-by-Step)

### IMMEDIATE FIX (Restore Production)

#### Step 1: Update Production .env.production File

```bash
cd /home/michael/dev/michaelschiemer

# Copy correct .env.production to production server
ansible web_servers \
  -i deployment/infrastructure/inventories/production/hosts.yml \
  -m copy \
  -a "src=deployment/applications/environments/.env.production dest=/home/deploy/michaelschiemer/shared/.env.production mode=0644" \
  --vault-password-file deployment/infrastructure/.vault_pass
```

**Why this file?**
- Most complete configuration (4.3K vs 2.9K)
- Includes Redis, Queue, Mail, Monitoring configs
- Correct DB_PORT=5432
- Correct DB credentials

#### Step 2: Verify .env.production on Server

```bash
ansible web_servers \
  -i deployment/infrastructure/inventories/production/hosts.yml \
  -m shell \
  -a "grep -E '(DB_PORT|DB_USERNAME|DB_PASSWORD)' /home/deploy/michaelschiemer/shared/.env.production" \
  --vault-password-file deployment/infrastructure/.vault_pass
```

**Expected Output**:
```
DB_PORT=5432
DB_USERNAME=mdb_user
DB_PASSWORD=Qo2KNgGqeYksEhKr57pgugakxlothn8J
```

#### Step 3: Restart Containers

```bash
ansible web_servers \
  -i deployment/infrastructure/inventories/production/hosts.yml \
  -m shell \
  -a "cd /home/deploy/michaelschiemer/current && docker-compose restart php web queue-worker" \
  --vault-password-file deployment/infrastructure/.vault_pass
```

#### Step 4: Verify Website Functionality

```bash
# Check HTTP status
curl -I https://michaelschiemer.de

# Expected: HTTP/2 200 OK (instead of 500)

# Check container health
ansible web_servers \
  -i deployment/infrastructure/inventories/production/hosts.yml \
  -m shell \
  -a "docker ps | grep -E '(web|php|queue-worker)'" \
  --vault-password-file deployment/infrastructure/.vault_pass
```

**Expected**: All containers should be "Up" and "healthy"
### LONG-TERM FIX (Prevent Future Issues)

#### 1. Update Playbook to Sync .env.production

Add a task to `deploy-rsync-based.yml`:

```yaml
# After the "Synchronize project files" task, add:

- name: Sync .env.production to shared directory
  copy:
    src: "{{ local_project_path }}/deployment/applications/environments/.env.production"
    dest: "{{ project_path }}/shared/.env.production"
    mode: '0644'
  when: sync_env_to_shared | default(true)
  tags:
    - deploy
    - config
```

#### 2. Create Helper Scripts

Create the 3 scripts documented in section 3:
- `scripts/deploy.sh` - Simple wrapper for the playbook
- `scripts/update-env.sh` - Update .env.production only
- `scripts/quick-sync.sh` - Quick code sync without a full deployment

#### 3. Establish Source of Truth

**Decision**: Use `deployment/applications/environments/.env.production` as the source of truth
- Most complete configuration
- Application-specific settings
- Includes all production services

**Action**: Document in README.md:
```markdown
## Production Configuration

**Source of Truth**: `deployment/applications/environments/.env.production`

To update production .env:
1. Edit `deployment/applications/environments/.env.production`
2. Run `./deployment/infrastructure/scripts/update-env.sh`
3. Containers will auto-restart with the new config
```

#### 4. Add .env Validation

Create a pre-deployment validation script:

```bash
#!/bin/bash
# scripts/validate-env.sh

ENV_FILE="deployment/applications/environments/.env.production"

echo "🔍 Validating .env.production..."

# Check required variables
REQUIRED_VARS=(
  "DB_DRIVER"
  "DB_HOST"
  "DB_PORT"
  "DB_DATABASE"
  "DB_USERNAME"
  "DB_PASSWORD"
)

for var in "${REQUIRED_VARS[@]}"; do
  if ! grep -q "^${var}=" "$ENV_FILE"; then
    echo "❌ Missing required variable: $var"
    exit 1
  fi
done

# Check PostgreSQL port
if ! grep -q "^DB_PORT=5432" "$ENV_FILE"; then
  echo "⚠️  Warning: DB_PORT should be 5432 for PostgreSQL"
fi

echo "✅ .env.production validation passed"
```
---
## 5. Cleanup Recommendations

### Files to Remove

#### In Framework Root (`/home/michael/dev/michaelschiemer/`)
```bash
# List files to remove
find . -maxdepth 1 -name ".env.backup*" -o -name ".env.old*" -o -name ".env.local"

# Remove after confirmation
rm -f .env.backup* .env.old* .env.local
```

#### In Deployment Directory
```bash
# Check for duplicate/old deployment scripts
find deployment/ -name "deploy-old.yml" -o -name "*.backup"
```

#### On Production Server
```bash
# Clean up old releases (keep last 5)
ansible web_servers \
  -i deployment/infrastructure/inventories/production/hosts.yml \
  -m shell \
  -a "cd /home/deploy/michaelschiemer/releases && ls -t | tail -n +6 | xargs rm -rf" \
  --vault-password-file deployment/infrastructure/.vault_pass

# Remove duplicate .env files in current release
ansible web_servers \
  -i deployment/infrastructure/inventories/production/hosts.yml \
  -m shell \
  -a "cd /home/deploy/michaelschiemer/current && rm -f .env.backup* .env.old*" \
  --vault-password-file deployment/infrastructure/.vault_pass
```

### Configuration to Keep

**Essential Files**:
- `/.env.production` - Framework defaults (keep for reference)
- `/deployment/applications/environments/.env.production` - Source of truth
- `/deployment/infrastructure/playbooks/deploy-rsync-based.yml` - Main playbook
- `/deployment/infrastructure/inventories/production/hosts.yml` - Inventory

**Symlinks (Do Not Remove)**:
- `/home/deploy/michaelschiemer/current/.env.production` → `shared/.env.production`
- `/home/deploy/michaelschiemer/current/.env` → `shared/.env.production`

---
## 6. Post-Fix Verification Checklist

```bash
# 1. Website accessible
curl -I https://michaelschiemer.de
# Expected: HTTP/2 200 OK

# 2. All containers healthy
ansible web_servers \
  -i deployment/infrastructure/inventories/production/hosts.yml \
  -m shell -a "docker ps" \
  --vault-password-file deployment/infrastructure/.vault_pass
# Expected: All "Up" and "(healthy)"

# 3. Database connection working
ansible web_servers \
  -i deployment/infrastructure/inventories/production/hosts.yml \
  -m shell -a "docker exec php php -r \"new PDO('pgsql:host=db;port=5432;dbname=michaelschiemer', 'mdb_user', 'Qo2KNgGqeYksEhKr57pgugakxlothn8J');\"" \
  --vault-password-file deployment/infrastructure/.vault_pass
# Expected: No errors

# 4. Application logs clean
ansible web_servers \
  -i deployment/infrastructure/inventories/production/hosts.yml \
  -m shell -a "docker logs web --tail 20" \
  --vault-password-file deployment/infrastructure/.vault_pass
# Expected: HTTP 200 responses, no 500 errors

# 5. Queue worker stable
ansible web_servers \
  -i deployment/infrastructure/inventories/production/hosts.yml \
  -m shell -a "docker ps | grep queue-worker" \
  --vault-password-file deployment/infrastructure/.vault_pass
# Expected: "Up" status (not "Restarting")
```

---
## 7. Future Deployment Best Practices

1. **Always validate .env before deployment**
   - Run `scripts/validate-env.sh` pre-deployment
   - Check DB_PORT=5432 for PostgreSQL
   - Verify credentials match the database server

2. **Use the playbook for all deployments**
   - Consistent process
   - Automated rollback capability
   - Proper symlink management

3. **Monitor container health post-deployment**
   - Check `docker ps` output
   - Verify all containers are "(healthy)"
   - Check application logs for errors

4. **Keep .env.production in sync**
   - Single source of truth: `deployment/applications/environments/.env.production`
   - Use the `update-env.sh` script for updates
   - Never manually edit on the production server

5. **Regular backups**
   - Back up `shared/.env.production` before changes
   - Keep the last 5 releases for quick rollback
   - Document any manual production changes

---
## Summary

**Current Status**: Production broken due to a DB_PORT configuration error
**Root Cause**: Manual edits to `shared/.env.production` with the wrong PostgreSQL port
**Fix Time**: ~5 minutes (copy correct .env + restart containers)
**Prevention**: Automated .env sync in the playbook + validation scripts

**Next Steps**:
1. Execute Steps 1-4 of the Fix Strategy (IMMEDIATE)
2. Verify the website returns HTTP 200
3. Implement the long-term fixes (playbook updates, scripts)
4. Document the deployment process in README.md
@@ -1,286 +0,0 @@

# Production Deployment Fix Summary

**Date**: 2025-10-27
**Status**: PARTIALLY FIXED - DB configuration corrected, but additional issues remain

---

## What Was Fixed

### 1. Database Configuration Corrected ✅

**Problem**: Wrong DB_PORT in the production `.env.production`
- Line 15: `DB_PORT=3306` (MySQL port)
- Line 67: `DB_PORT=` (duplicate empty entry)
- Wrong username: `mdb-user` (should be `mdb_user`)
- Wrong password

**Solution Applied**:
```bash
# Copied correct .env.production from source of truth
ansible web_servers -m copy \
  -a "src=deployment/applications/environments/.env.production \
      dest=/home/deploy/michaelschiemer/shared/.env.production" \
  --vault-password-file deployment/infrastructure/.vault_pass
```

**Verification**:
```bash
DB_PORT=5432                                  # ✅ Correct
DB_USERNAME=mdb_user                          # ✅ Correct
DB_PASSWORD=Qo2KNgGqeYksEhKr57pgugakxlothn8J  # ✅ Correct
```

### 2. Containers Restarted ✅

```bash
docker compose restart php web queue-worker
```

**Current Status**:
- **php**: Up 6 minutes (healthy) ✅
- **db**: Up 53 minutes (healthy) ✅
- **redis**: Up 53 minutes (healthy) ✅
- **web**: Up 6 minutes (UNHEALTHY) ⚠️
- **queue-worker**: Restarting (1) ❌

---
## Remaining Issues

### Issue 1: Web Container Unhealthy ⚠️

**Symptom**: Website still returns HTTP 500

**Possible Causes**:
1. **PHP-FPM not responding** - Web container can't connect to PHP
2. **Application error** - PHP code failing during bootstrap
3. **Missing files** - Application files not properly deployed
4. **Permissions** - Web server can't access application files

**Next Steps to Diagnose**:
```bash
# Check if PHP-FPM is accessible from the web container
docker exec web curl http://php:9000

# Check Nginx configuration
docker exec web nginx -t

# Check the web container's health check
docker inspect web --format='{{json .State.Health}}' | jq

# Check if application files exist
docker exec web ls -la /var/www/html/public/index.php
```

### Issue 2: Queue Worker Crashing ❌

**Symptom**: Continuous restart loop

**Possible Causes**:
1. **Same DB connection issue** (should be fixed now)
2. **Missing queue configuration**
3. **Redis connection issue**
4. **Application code error in the queue worker**

**Next Steps to Diagnose**:
```bash
# Check queue-worker logs
docker logs queue-worker --tail 100

# Try running the queue worker manually
docker exec php php artisan queue:work --tries=1 --once
```

---
## Scripts Created ✅

### 1. Simple Deployment Script
**Location**: `/home/michael/dev/michaelschiemer/deployment/infrastructure/scripts/deploy.sh`
```bash
./deployment/infrastructure/scripts/deploy.sh
```

### 2. .env Update Script
**Location**: `/home/michael/dev/michaelschiemer/deployment/infrastructure/scripts/update-env.sh`
```bash
./deployment/infrastructure/scripts/update-env.sh
```

### 3. Quick Sync Script
**Location**: `/home/michael/dev/michaelschiemer/deployment/infrastructure/scripts/quick-sync.sh`
```bash
./deployment/infrastructure/scripts/quick-sync.sh
```

**Note**: All scripts updated to use `docker compose` (v2) instead of `docker-compose` (v1)

---

## Documentation Created ✅

### Comprehensive Deployment Analysis
**Location**: `/home/michael/dev/michaelschiemer/deployment/infrastructure/DEPLOYMENT_ANALYSIS.md`

**Contents**:
1. Complete deployment flow analysis
2. .env file sources and conflicts
3. Deployment command documentation
4. Step-by-step fix strategy
5. Cleanup recommendations
6. Post-fix verification checklist

---
## Recommended Next Actions

### Immediate (To Fix HTTP 500)

1. **Check Application Bootstrap**:
   ```bash
   # Test if the PHP application can start
   ansible web_servers -m shell \
     -a "docker exec php php /var/www/html/public/index.php" \
     --vault-password-file deployment/infrastructure/.vault_pass
   ```

2. **Check Nginx-PHP Connection**:
   ```bash
   # Test the PHP-FPM socket
   ansible web_servers -m shell \
     -a "docker exec web curl -v http://php:9000" \
     --vault-password-file deployment/infrastructure/.vault_pass
   ```

3. **Check Application Logs**:
   ```bash
   # Look for PHP errors
   ansible web_servers -m shell \
     -a "docker exec php ls -la /var/www/html/storage/logs/" \
     --vault-password-file deployment/infrastructure/.vault_pass
   ```

4. **Verify File Permissions**:
   ```bash
   # Check if the web server can read files
   ansible web_servers -m shell \
     -a "docker exec web ls -la /var/www/html/public/" \
     --vault-password-file deployment/infrastructure/.vault_pass
   ```

### Short-Term (Within 24h)

1. **Fix Web Container Health** - Resolve HTTP 500 errors
2. **Fix Queue Worker** - Stop the crash loop
3. **Full Deployment Test** - Run the complete deployment playbook
4. **Verify All Services** - Ensure all containers are healthy

### Long-Term (This Week)

1. **Update Playbook** - Add an `.env.production` sync task
2. **Add Validation** - Pre-deployment `.env` validation script
3. **Document Process** - Update the README with a deployment guide
4. **Setup Monitoring** - Add health check alerts
5. **Cleanup Old Files** - Remove duplicate `.env` files
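The "Fix Queue Worker" item above needs a way to tell a crash loop from a one-off restart. A minimal sketch: on the host, `docker inspect -f '{{.RestartCount}}' <name>` yields the restart count (the container name and the threshold of 5 are assumptions, not values from this project), and a small helper turns that number into a verdict:

```bash
# Hypothetical helper: flag a container as crash-looping once its Docker
# restart count reaches a threshold. The count would normally come from
#   docker inspect -f '{{.RestartCount}}' queue-worker
# on the production host; the name "queue-worker" is an assumption.
is_crash_looping() {
  local restart_count="$1" threshold="${2:-5}"
  if [ "$restart_count" -ge "$threshold" ]; then
    echo "crash-loop"
  else
    echo "ok"
  fi
}

is_crash_looping 12   # → crash-loop
is_crash_looping 2    # → ok
```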
---
## Key Learnings

### 1. Deployment Flow Issues

**Problem**: The playbook doesn't sync `.env.production` to `shared/`
**Impact**: Manual updates are required for configuration changes
**Solution**: Add a sync task to the playbook

### 2. Multiple .env Sources

**Problem**: Three different `.env.production` files with conflicting content
**Resolution**: Use `deployment/applications/environments/.env.production` as the source of truth

### 3. Docker Compose Version

**Problem**: Production uses Docker Compose v2 (`docker compose`)
**Impact**: Scripts using v1 syntax (`docker-compose`) fail
**Solution**: All scripts updated to v2 syntax

### 4. Symlink Chain Complexity

**Structure**:
```
current/.env → shared/.env.production
current/.env.production → shared/.env.production
```

**Risk**: If `shared/.env.production` is wrong, ALL releases break
**Mitigation**: Validate before deploying, back up before changes
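A pre-deploy guard for the risk above can be small. The following sketch fails before releases are switched when the shared env file is missing required keys; the key names are illustrative, not the project's authoritative set:

```bash
# Sketch of a pre-deploy guard: report every required key that is absent
# from an env file and return nonzero if any is missing. Key names used
# in the usage example below are placeholders.
validate_env_file() {
  local file="$1"; shift
  local status=0 key
  for key in "$@"; do
    if ! grep -q "^${key}=" "$file"; then
      echo "missing: ${key}"
      status=1
    fi
  done
  return "$status"
}

# Intended usage on the host (path taken from this document):
# validate_env_file /home/deploy/michaelschiemer/shared/.env.production \
#   APP_KEY DB_PASSWORD REDIS_PASSWORD
```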
---
## Quick Reference

### Check Production Status
```bash
cd /home/michael/dev/michaelschiemer/deployment/infrastructure

# Container status
ansible web_servers -i inventories/production/hosts.yml \
  -m shell -a "docker ps" --vault-password-file .vault_pass

# .env configuration
ansible web_servers -i inventories/production/hosts.yml \
  -m shell -a "cat /home/deploy/michaelschiemer/shared/.env.production" \
  --vault-password-file .vault_pass

# Application logs
ansible web_servers -i inventories/production/hosts.yml \
  -m shell -a "docker logs web --tail 50" --vault-password-file .vault_pass
```

### Deploy to Production
```bash
# Full deployment
./deployment/infrastructure/scripts/deploy.sh

# Update .env only
./deployment/infrastructure/scripts/update-env.sh

# Quick code sync
./deployment/infrastructure/scripts/quick-sync.sh
```

### Emergency Rollback
```bash
# List releases
ansible web_servers -i inventories/production/hosts.yml \
  -m shell -a "ls -la /home/deploy/michaelschiemer/releases/" \
  --vault-password-file .vault_pass

# Switch to previous release
ansible web_servers -i inventories/production/hosts.yml \
  -m shell -a "ln -sfn /home/deploy/michaelschiemer/releases/PREVIOUS_TIMESTAMP \
    /home/deploy/michaelschiemer/current" \
  --vault-password-file .vault_pass

# Restart containers
ansible web_servers -i inventories/production/hosts.yml \
  -m shell -a "cd /home/deploy/michaelschiemer/current && docker compose restart" \
  --vault-password-file .vault_pass
```
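The rollback above hinges on `ln -sfn` repointing the `current` symlink in one command. A local illustration with throwaway directory names, no server needed (note that `ln -sfn` unlinks and recreates; a strictly atomic switch would create a temp symlink and `mv -T` it into place):

```bash
# Local illustration of the release-switch mechanism used for rollback.
# Directory names are throwaway examples.
mkdir -p releases/20251029 releases/20251030
ln -sfn releases/20251030 current   # deploy points at the new release
ln -sfn releases/20251029 current   # "rollback" repoints it
readlink current                    # → releases/20251029
```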
---
## Support Contacts

**Documentation**:
- Deployment Analysis: `deployment/infrastructure/DEPLOYMENT_ANALYSIS.md`
- This Summary: `deployment/infrastructure/DEPLOYMENT_FIX_SUMMARY.md`

**Scripts**:
- All scripts in: `deployment/infrastructure/scripts/`
- Make executable: `chmod +x deployment/infrastructure/scripts/*.sh`

**Configuration**:
- Source of Truth: `deployment/applications/environments/.env.production`
- Production File: `/home/deploy/michaelschiemer/shared/.env.production`

@@ -1,319 +0,0 @@
# Custom PHP Framework - Infrastructure Automation

Modern, secure Ansible infrastructure automation for the Custom PHP Framework with PHP 8.4 optimization.

## 🏗️ Architecture Overview

### Security-First Design
- **SSH Hardening**: Secure SSH configuration with key-based authentication
- **Firewall Protection**: UFW firewall with fail2ban intrusion detection
- **SSL/TLS**: Let's Encrypt certificates with modern cipher suites
- **Security Headers**: Comprehensive HTTP security headers
- **System Hardening**: Kernel parameters, audit logging, and security monitoring

### Docker-Optimized Runtime
- **PHP 8.4**: Optimized Docker containers with custom PHP configuration
- **Security Profiles**: AppArmor and seccomp security profiles
- **Resource Limits**: Memory and CPU constraints for production workloads
- **Health Checks**: Automated container health monitoring

### Production-Ready Infrastructure
- **Environment Separation**: Development, staging, and production configurations
- **Monitoring**: System health checks and performance monitoring
- **Backup System**: Automated backups with encryption and retention policies
- **Log Management**: Centralized logging with rotation and monitoring

## 🚀 Quick Start

### Prerequisites

```bash
# Install Ansible
pip install ansible

# Install required collections
ansible-galaxy collection install community.general
ansible-galaxy collection install community.crypto
ansible-galaxy collection install community.docker
```

### Initial Setup

1. **Configure Ansible Vault**:
   ```bash
   cd deployment/infrastructure
   echo "your_vault_password" > .vault_pass
   chmod 600 .vault_pass

   # Encrypt sensitive variables
   ansible-vault encrypt group_vars/all/vault.yml
   ```

2. **Update Inventory**:
   - Edit `inventories/production/hosts.yml` with your server details
   - Update the domain and SSL email configuration

3. **Deploy Infrastructure**:
   ```bash
   # Production deployment
   ansible-playbook -i inventories/production site.yml

   # Staging deployment
   ansible-playbook -i inventories/staging site.yml
   ```

## 📁 Directory Structure

```
deployment/infrastructure/
├── ansible.cfg              # Ansible configuration
├── site.yml                 # Main deployment playbook
├── inventories/             # Environment-specific inventory
│   ├── production/
│   ├── staging/
│   └── development/
├── group_vars/              # Global variables
│   └── all/
├── roles/                   # Ansible roles
│   ├── base-security/       # Security hardening
│   ├── docker-runtime/      # Docker with PHP 8.4
│   ├── nginx-proxy/         # Nginx reverse proxy
│   └── monitoring/          # Health monitoring
└── playbooks/               # Additional playbooks
```
## 🔒 Security Features

### SSH Hardening
- Key-based authentication only
- Strong cipher suites and key exchange algorithms
- Connection rate limiting
- Security banners and access logging

### Firewall Configuration
- Default deny policy with specific allow rules
- Rate limiting for SSH connections
- Protection for Docker containers
- Environment-specific rule sets

### SSL/TLS Security
- Let's Encrypt certificates with auto-renewal
- Modern TLS protocols (1.2, 1.3)
- HSTS with preloading
- OCSP stapling enabled

### Application Security
- Security headers (CSP, HSTS, X-Frame-Options)
- Rate limiting for API endpoints
- Input validation and sanitization
- OWASP security compliance

## 🐳 Docker Configuration

### PHP 8.4 Optimization
- Custom PHP 8.4 container with security hardening
- OPcache configuration for production performance
- Memory and execution time limits
- Extension management for framework requirements

### Container Security
- Non-root user execution
- Read-only root filesystem where possible
- Security profiles (AppArmor, seccomp)
- Resource constraints and health checks

### Network Security
- Custom bridge networks with isolation
- No inter-container communication by default
- Encrypted internal communication
- External access controls

## 📊 Monitoring & Health Checks

### System Monitoring
- CPU, memory, and disk usage monitoring
- Load average and process monitoring
- Network and I/O performance tracking
- Automated alerting for threshold breaches

### Application Health Checks
- HTTP endpoint monitoring
- Database connectivity checks
- Framework-specific health validation
- Container health verification

### Log Management
- Centralized log collection and rotation
- Error pattern detection and alerting
- Security event logging and monitoring
- Performance metrics collection

## 🔧 Environment Configuration

### Production Environment
- High security settings with strict firewall
- Performance optimizations enabled
- Comprehensive monitoring and alerting
- Daily automated backups

### Staging Environment
- Relaxed security for testing
- Debug mode enabled
- Basic monitoring
- Weekly backups

### Development Environment
- Minimal security restrictions
- Full debugging capabilities
- No production optimizations
- No automated backups
## 📋 Deployment Playbooks

### Main Infrastructure (`site.yml`)
Deploys the complete infrastructure stack:
- Base security hardening
- Docker runtime environment
- Nginx reverse proxy with SSL
- System monitoring and health checks

### Application Deployment (`playbooks/deploy-application.yml`)
Handles application-specific deployment:
- Code deployment from the Git repository
- Dependency installation (Composer, NPM)
- Database migrations
- Asset compilation and optimization
- Service restarts and health verification

## 🛠️ Management Commands

### Infrastructure Management
```bash
# Deploy to production
ansible-playbook -i inventories/production site.yml

# Deploy a specific role
ansible-playbook -i inventories/production site.yml --tags security

# Run health checks
ansible-playbook -i inventories/production site.yml --tags verification

# Update SSL certificates
ansible-playbook -i inventories/production site.yml --tags ssl
```

### Application Management
```bash
# Deploy application code
ansible-playbook -i inventories/production playbooks/deploy-application.yml

# Deploy a specific branch
ansible-playbook -i inventories/production playbooks/deploy-application.yml -e deploy_branch=feature/new-feature
```

### Security Operations
```bash
# Security audit
ansible-playbook -i inventories/production site.yml --tags audit

# Update security configurations
ansible-playbook -i inventories/production site.yml --tags security

# Restart security services
ansible-playbook -i inventories/production site.yml --tags security,restart
```

## 🔐 Ansible Vault Usage

### Encrypting Secrets
```bash
# Encrypt the vault file
ansible-vault encrypt group_vars/all/vault.yml

# Edit the encrypted file
ansible-vault edit group_vars/all/vault.yml

# View the encrypted file
ansible-vault view group_vars/all/vault.yml
```

### Running Playbooks with Vault
```bash
# Using the vault password file (configured in ansible.cfg)
ansible-playbook site.yml

# Prompt for the vault password
ansible-playbook site.yml --ask-vault-pass

# Pass the vault password file explicitly
ansible-playbook site.yml --vault-password-file .vault_pass
```

## 📝 Customization

### Adding Custom Roles
1. Create the role directory structure
2. Define role metadata in `meta/main.yml`
3. Add the role to the main playbook
4. Test in the development environment
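Step 1 above can be sketched in a few shell commands; `ansible-galaxy role init roles/<name>` produces the same conventional layout with extra boilerplate. The role name `example-role` is a placeholder:

```bash
# Scaffold the conventional Ansible role layout (sketch; the role name
# "example-role" is a placeholder, not part of this project).
role="roles/example-role"
for d in tasks handlers defaults meta templates; do
  mkdir -p "$role/$d"
done
for d in tasks handlers defaults meta; do
  printf -- '---\n' > "$role/$d/main.yml"   # empty YAML document marker
done
find "$role" -name 'main.yml'
```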
### Environment-Specific Variables
- Update inventory files for environment-specific settings
- Modify group variables for global changes
- Use vault files for sensitive information

### SSL Certificate Management
- Let's Encrypt: automatic certificate generation and renewal
- Self-signed: for development and testing environments
- Custom certificates: place them in the appropriate directories

## 🚨 Troubleshooting

### Common Issues

**SSH Connection Failures**:
- Verify the SSH key configuration
- Check firewall rules and fail2ban status
- Ensure the user has proper sudo privileges

**SSL Certificate Problems**:
- Verify DNS resolution for the domain
- Check Let's Encrypt rate limits
- Ensure port 80 is accessible for validation

**Docker Container Issues**:
- Check Docker daemon status and logs
- Verify image build and pull permissions
- Review container resource limits

**Performance Problems**:
- Monitor system resources and logs
- Check application and database performance
- Review caching and optimization settings

### Getting Help

For issues specific to the Custom PHP Framework infrastructure:
1. Check Ansible logs in `/var/log/ansible.log`
2. Review system logs for specific services
3. Use the monitoring dashboard for system health
4. Contact the development team at kontakt@michaelschiemer.de

## 📄 License

This infrastructure automation is part of the Custom PHP Framework project.
Licensed under the MIT License - see the LICENSE file for details.

## 🤝 Contributing

1. Fork the repository
2. Create a feature branch
3. Test changes in the development environment
4. Submit a pull request with a detailed description

---

**Domain**: michaelschiemer.de
**Environment**: Production-ready with PHP 8.4 optimization
**Security**: Enterprise-grade hardening and monitoring
**Maintainer**: kontakt@michaelschiemer.de

@@ -1,71 +0,0 @@
[defaults]
# Ansible Configuration for Custom PHP Framework Infrastructure
inventory = inventories/production/hosts.yml
roles_path = roles
host_key_checking = False
gathering = smart
fact_caching = jsonfile
fact_caching_connection = /tmp/ansible_facts_cache
fact_caching_timeout = 3600

# Performance optimizations
pipelining = True
forks = 5
strategy = linear
# strategy_plugins = ~/.ansible/plugins/strategy:~/dev/mitogen/ansible_mitogen/plugins/strategy

# Logging and output
log_path = logs/ansible.log
stdout_callback = yaml
stderr_callback = yaml
bin_ansible_callbacks = True
verbosity = 1

# Security settings - Vault password via environment or prompt (disabled for testing)
ask_vault_pass = False
# vault_encrypt_identity = vault@michaelschiemer.de
# vault_identity_list = vault@michaelschiemer.de

# Connection settings
timeout = 60
remote_user = deploy
private_key_file = ~/.ssh/deploy_key
ansible_ssh_common_args = -o StrictHostKeyChecking=yes -o UserKnownHostsFile=~/.ssh/known_hosts -o ControlMaster=auto -o ControlPersist=60s

# Privilege escalation
become = True
become_method = sudo
become_user = root
become_ask_pass = False
become_exe = sudo

[inventory]
enable_plugins = host_list, script, auto, yaml, ini, toml

[ssh_connection]
ssh_args = -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=yes
control_path = ~/.ssh/ansible-%%h-%%p-%%r
retries = 3
pipelining = True
scp_if_ssh = smart
transfer_method = smart

[persistent_connection]
connect_timeout = 30
command_timeout = 30

[galaxy]
server_list = galaxy, community_galaxy
ignore_certs = False

[galaxy_server.galaxy]
url = https://galaxy.ansible.com/
# Token should be set via environment: ANSIBLE_GALAXY_TOKEN

[galaxy_server.community_galaxy]
url = https://galaxy.ansible.com/
# Token should be set via environment: ANSIBLE_GALAXY_TOKEN

[diff]
context = 3
always = False
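With Ansible installed, `ansible-config dump --only-changed` is the canonical way to see which of these settings differ from the defaults. A dependency-free sketch for a quick review is to list the keys assigned in one section of the INI file:

```bash
# list_section_keys FILE SECTION: print the keys assigned in one INI
# section, skipping comments and other sections. A sketch for quickly
# reviewing which options a config file such as ansible.cfg touches.
list_section_keys() {
  awk -F' *= *' -v want="[$2]" '
    /^\[/ { sec = $0; next }
    sec == want && NF >= 2 && $1 !~ /^[#;]/ { print $1 }
  ' "$1"
}

# e.g. list_section_keys ansible.cfg defaults
```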
@@ -1,17 +0,0 @@
#!/bin/bash
# Quick deployment script with force flag
# Usage: ./deploy.sh

set -euo pipefail  # abort on errors, unset variables, and pipeline failures

cd "$(dirname "$0")"

echo "🚀 Starting deployment to production..."
echo ""

ansible-playbook \
    -i inventories/production/hosts.yml \
    playbooks/deploy-rsync-based.yml \
    --vault-password-file .vault_pass \
    --extra-vars 'force_deploy=true'

echo ""
echo "✅ Deployment completed!"
@@ -1,164 +0,0 @@
---
# Environment-specific variable mappings
# These variables change behavior based on the environment

# Environment Detection
environment_config:
  production:
    debug_enabled: false
    log_level: "error"
    cache_enabled: true
    minify_assets: true
    ssl_required: true
    monitoring_level: "full"
    backup_frequency: "daily"

  staging:
    debug_enabled: true
    log_level: "info"
    cache_enabled: true
    minify_assets: false
    ssl_required: true
    monitoring_level: "basic"
    backup_frequency: "weekly"

  development:
    debug_enabled: true
    log_level: "debug"
    cache_enabled: false
    minify_assets: false
    ssl_required: false
    monitoring_level: "minimal"
    backup_frequency: "never"

# Environment-specific PHP configuration
php_config:
  production:
    display_errors: "Off"
    display_startup_errors: "Off"
    error_reporting: "E_ALL & ~E_DEPRECATED & ~E_STRICT"
    log_errors: "On"
    memory_limit: "512M"
    max_execution_time: 30
    opcache_validate_timestamps: 0
    opcache_revalidate_freq: 0

  staging:
    display_errors: "On"
    display_startup_errors: "On"
    error_reporting: "E_ALL"
    log_errors: "On"
    memory_limit: "256M"
    max_execution_time: 60
    opcache_validate_timestamps: 1
    opcache_revalidate_freq: 2

  development:
    display_errors: "On"
    display_startup_errors: "On"
    error_reporting: "E_ALL"
    log_errors: "On"
    memory_limit: "1G"
    max_execution_time: 0
    opcache_validate_timestamps: 1
    opcache_revalidate_freq: 0

# Environment-specific database configuration
database_config:
  production:
    query_cache: true
    slow_query_log: true
    long_query_time: 2
    max_connections: 200
    innodb_buffer_pool_size: "1G"

  staging:
    query_cache: true
    slow_query_log: true
    long_query_time: 5
    max_connections: 100
    innodb_buffer_pool_size: "512M"

  development:
    query_cache: false
    slow_query_log: false
    long_query_time: 10
    max_connections: 50
    innodb_buffer_pool_size: "128M"

# Environment-specific security settings
security_config:
  production:
    firewall_strict: true
    rate_limiting: true
    brute_force_protection: true
    ssl_only: true
    hsts_enabled: true
    security_headers: "strict"
    fail2ban_enabled: true

  staging:
    firewall_strict: false
    rate_limiting: true
    brute_force_protection: true
    ssl_only: true
    hsts_enabled: false
    security_headers: "standard"
    fail2ban_enabled: true

  development:
    firewall_strict: false
    rate_limiting: false
    brute_force_protection: false
    ssl_only: false
    hsts_enabled: false
    security_headers: "minimal"
    fail2ban_enabled: false

# Environment-specific monitoring configuration
monitoring_config:
  production:
    health_check_interval: 30
    metric_collection_interval: 60
    log_level: "warn"
    alert_on_errors: true
    performance_monitoring: true

  staging:
    health_check_interval: 60
    metric_collection_interval: 300
    log_level: "info"
    alert_on_errors: false
    performance_monitoring: true

  development:
    health_check_interval: 300
    metric_collection_interval: 600
    log_level: "debug"
    alert_on_errors: false
    performance_monitoring: false

# Environment-specific caching configuration
cache_config:
  production:
    driver: "redis"
    default_ttl: 3600
    prefix: "prod_"

  staging:
    driver: "redis"
    default_ttl: 1800
    prefix: "staging_"

  development:
    driver: "file"
    default_ttl: 300
    prefix: "dev_"

# Current environment configuration (set by inventory)
current_config: "{{ environment_config[environment] }}"
current_php_config: "{{ php_config[environment] }}"
current_database_config: "{{ database_config[environment] }}"
current_security_config: "{{ security_config[environment] }}"
current_monitoring_config: "{{ monitoring_config[environment] }}"
current_cache_config: "{{ cache_config[environment] }}"

@@ -1,157 +0,0 @@
---
# Global Variables for Container-based PHP Framework Infrastructure
# These variables are shared across all environments

# Project Information
project_name: "michaelschiemer"
container_image: "{{ container_registry | default('docker.io') }}/{{ image_repository | default('michaelschiemer/php-framework') }}"
maintainer_email: "kontakt@michaelschiemer.de"

# Framework Configuration
framework:
  name: "custom-php-framework"
  version: "1.0.0"
  php_version: "8.4"
  environment: "{{ environment }}"
  debug_mode: "{{ debug_mode | default(false) }}"
  container_based: true
  build_on_server: false

# Common Package Lists
common_packages:
  - curl
  - wget
  - unzip
  - git
  - htop
  - vim
  - nano
  - rsync
  - screen
  - tmux

security_packages:
  - fail2ban
  - ufw
  - rkhunter
  - chkrootkit
  - lynis
  - unattended-upgrades
  - apt-listchanges

# Timezone and Locale
timezone: "Europe/Berlin"
locale: "en_US.UTF-8"

# User Management
system_users:
  - name: deploy
    groups:
      - sudo
      - docker
    shell: /bin/bash
    home: /home/deploy
    create_home: true

# Directory Structure
app_directories:
  - /var/www/html
  - /var/www/backups
  - /var/log/applications
  - /home/deploy/.docker
  - /home/deploy/scripts

# File Permissions
default_file_permissions:
  web_root: "0755"
  config_files: "0644"
  scripts: "0755"
  logs: "0755"
  private_keys: "0600"
  public_keys: "0644"

# Backup Configuration
backup_settings:
  enabled: "{{ BACKUP_ENABLED | default(true) | bool }}"
  retention_days: "{{ BACKUP_RETENTION_DAYS | default(30) }}"
  schedule: "0 2 * * *"  # Daily at 2 AM
  compression: true
  encryption: true
  remote_storage: "{{ S3_BACKUP_ENABLED | default(false) | bool }}"

# Log Rotation
log_rotation:
  rotate_count: 52  # Keep 52 weeks
  rotate_when: weekly
  compress: true
  compress_delay: 1
  missing_ok: true
  not_if_empty: true

# Network Configuration
network:
  ipv6_enabled: false
  firewall_default_policy: deny
  allowed_ssh_networks:
    - "0.0.0.0/0"  # Restrict this in production

# Docker Defaults
docker_defaults:
  restart_policy: "always"
  log_driver: "json-file"
  log_options:
    max-size: "10m"
    max-file: "3"
  networks:
    - framework-network
  security_opts:
    - no-new-privileges:true
  pull_policy: "always"
  build_policy: "never"

# Performance Tuning
performance:
  swappiness: 10
  max_open_files: 65536
  max_processes: 4096

# Monitoring Defaults
monitoring_defaults:
  check_interval: 300  # 5 minutes
  alert_threshold_cpu: 80
  alert_threshold_memory: 85
  alert_threshold_disk: 90
  log_retention_days: 30

# SSL Defaults
ssl_defaults:
  key_size: 2048
  protocols:
    - "TLSv1.2"
    - "TLSv1.3"
  cipher_suite: "ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384"

# Container Runtime Defaults
container_defaults:
  php_version: "8.4"
  pull_timeout: 300
  deploy_timeout: 600
  health_check_timeout: 30
  health_check_interval: 10
  health_check_retries: 15

# Database Defaults
database_defaults:
  engine: mysql
  version: "8.0"
  charset: utf8mb4
  collation: utf8mb4_unicode_ci
  max_connections: 100
  innodb_buffer_pool_size: "128M"

# Application Defaults
app_defaults:
  session_lifetime: 7200  # 2 hours
  cache_driver: redis
  queue_driver: redis
  mail_driver: smtp

@@ -1,96 +0,0 @@
---
# Encrypted Variables (Ansible Vault)
# These variables contain sensitive information and should be encrypted

# Database Credentials
vault_mysql_root_password: "super_secure_root_password_change_me"
vault_mysql_user_password: "secure_user_password_change_me"
vault_mysql_replication_password: "secure_replication_password_change_me"

# Application Secrets
vault_app_key: "base64:CHANGE_THIS_TO_A_REAL_32_CHARACTER_SECRET_KEY"
vault_jwt_secret: "CHANGE_THIS_TO_A_REAL_JWT_SECRET_KEY"
vault_encryption_key: "CHANGE_THIS_TO_A_REAL_ENCRYPTION_KEY"

# Redis Password
vault_redis_password: "secure_redis_password_change_me"

# SMTP Configuration
vault_smtp_host: "smtp.example.com"
vault_smtp_port: 587
vault_smtp_username: "noreply@michaelschiemer.de"
vault_smtp_password: "smtp_password_change_me"
vault_smtp_encryption: "tls"

# Third-party API Keys
vault_api_keys:
  stripe_secret: "sk_test_CHANGE_THIS_TO_REAL_STRIPE_SECRET"
  paypal_client_id: "CHANGE_THIS_TO_REAL_PAYPAL_CLIENT_ID"
  paypal_client_secret: "CHANGE_THIS_TO_REAL_PAYPAL_SECRET"
  google_analytics: "GA_TRACKING_ID"
  recaptcha_site_key: "RECAPTCHA_SITE_KEY"
  recaptcha_secret_key: "RECAPTCHA_SECRET_KEY"

# OAuth Configuration
vault_oauth:
  google:
    client_id: "GOOGLE_CLIENT_ID"
    client_secret: "GOOGLE_CLIENT_SECRET"
  github:
    client_id: "GITHUB_CLIENT_ID"
    client_secret: "GITHUB_CLIENT_SECRET"

# Backup Encryption
vault_backup_encryption_key: "CHANGE_THIS_TO_A_REAL_BACKUP_ENCRYPTION_KEY"

# Monitoring Secrets
vault_monitoring:
  slack_webhook: "https://hooks.slack.com/services/YOUR/SLACK/WEBHOOK"
  pagerduty_key: "PAGERDUTY_INTEGRATION_KEY"

# Docker Registry Credentials
vault_docker_registry:
  username: "registry_username"
  password: "registry_password"
  email: "kontakt@michaelschiemer.de"

# SSH Keys (base64 encoded)
vault_ssh_keys:
  deploy_private_key: |
    -----BEGIN OPENSSH PRIVATE KEY-----
    # CHANGE THIS TO YOUR ACTUAL DEPLOY KEY
    -----END OPENSSH PRIVATE KEY-----
  deploy_public_key: "ssh-rsa AAAAB3NzaC1yc2E... deploy@michaelschiemer.de"

# SSL Certificate Passwords
vault_ssl_passwords:
  private_key_passphrase: "ssl_private_key_passphrase"
  p12_password: "ssl_p12_password"

# Security Tokens
vault_security_tokens:
  csrf_secret: "CHANGE_THIS_TO_A_REAL_CSRF_SECRET"
  api_token_secret: "CHANGE_THIS_TO_A_REAL_API_TOKEN_SECRET"
  session_secret: "CHANGE_THIS_TO_A_REAL_SESSION_SECRET"

# External Service Credentials
vault_external_services:
  cloudflare_api_token: "CLOUDFLARE_API_TOKEN"
  aws_access_key: "AWS_ACCESS_KEY_ID"
  aws_secret_key: "AWS_SECRET_ACCESS_KEY"

# Feature Flags and Secrets
vault_features:
  enable_debug_mode: false
  enable_profiler: false
  enable_maintenance_mode: false

# Environment Specific Secrets
vault_environment_secrets:
  production:
    sentry_dsn: "https://YOUR_SENTRY_DSN@sentry.io/PROJECT_ID"
    newrelic_license: "NEWRELIC_LICENSE_KEY"
  staging:
    sentry_dsn: "https://YOUR_STAGING_SENTRY_DSN@sentry.io/PROJECT_ID"
  development:
    debug_token: "DEBUG_TOKEN_FOR_DEVELOPMENT"

@@ -1,68 +0,0 @@
---
|
||||
# Development Inventory for Custom PHP Framework
|
||||
# Local development environment
|
||||
|
||||
all:
|
||||
vars:
|
||||
# Environment configuration
|
||||
environment: development
|
||||
domain_name: localhost
|
||||
app_name: custom-php-framework
|
||||
|
||||
# SSL Configuration (self-signed for dev)
|
||||
ssl_email: kontakt@michaelschiemer.de
|
||||
ssl_provider: self-signed
|
||||
|
||||
# PHP Configuration
|
||||
php_version: "8.4"
|
||||
php_fpm_version: "8.4"
|
||||
|
||||
# Security settings (minimal for dev)
|
||||
security_level: low
|
||||
firewall_strict_mode: false
|
||||
fail2ban_enabled: false
|
||||
|
||||
# Docker configuration
|
||||
docker_edition: ce
|
||||
docker_version: "latest"
|
||||
docker_compose_version: "2.20.0"
|
||||
|
||||
# Monitoring (disabled for dev)
|
||||
monitoring_enabled: false
|
||||
health_checks_enabled: false
|
||||
|
||||
# Backup configuration (disabled)
|
||||
backup_enabled: false
|
||||
backup_retention_days: 0
|
||||
|
||||
children:
|
||||
web_servers:
|
||||
hosts:
|
||||
localhost:
|
||||
ansible_connection: local
|
||||
ansible_host: 127.0.0.1
|
||||
ansible_user: "{{ ansible_env.USER }}"
|
||||
server_role: development
|
||||
|
||||
# Service configuration (minimal)
|
||||
nginx_worker_processes: 1
|
||||
nginx_worker_connections: 256
|
||||
nginx_port: 443
|
||||
|
||||
# PHP-FPM configuration (minimal)
|
||||
php_fpm_pm_max_children: 10
|
||||
php_fpm_pm_start_servers: 2
|
||||
php_fpm_pm_min_spare_servers: 1
|
||||
php_fpm_pm_max_spare_servers: 5
|
||||
|
||||
# Docker resource limits (minimal)
|
||||
docker_memory_limit: 2g
|
||||
docker_cpu_limit: 1.0
|
||||
|
||||
vars:
|
||||
# Web server specific vars
|
||||
nginx_enabled: true
|
||||
ssl_certificate_path: /etc/ssl/certs/localhost
|
||||
log_level: debug
|
||||
debug_mode: true
|
||||
xdebug_enabled: true
|
||||
@@ -1,64 +0,0 @@
---
# Production Inventory for michaelschiemer.de
# Container-based PHP Framework Infrastructure

all:
  vars:
    # Environment configuration
    environment: production
    project_name: michaelschiemer
    domain_name: michaelschiemer.de

    # Container configuration
    container_registry: docker.io
    image_repository: michaelschiemer/php-framework

    # SSL Configuration
    ssl_email: kontakt@michaelschiemer.de
    ssl_provider: letsencrypt

    # Security settings
    security_level: high
    firewall_strict_mode: true
    fail2ban_enabled: true

    # Docker configuration
    docker_edition: ce
    docker_version: "24.0"

    # Monitoring
    monitoring_enabled: true
    health_checks_enabled: true

    # Backup configuration - parameterized from CI
    backup_enabled: "{{ BACKUP_ENABLED | default(true) | bool }}"
    backup_retention_days: "{{ BACKUP_RETENTION_DAYS | default(30) }}"

    # CDN configuration
    cdn_update: "{{ CDN_UPDATE | default(false) | bool }}"

  children:
    web_servers:
      hosts:
        michaelschiemer-prod-web-01:
          ansible_host: 94.16.110.151
          ansible_user: deploy
          ansible_ssh_private_key_file: ~/.ssh/production
          server_role: primary

          # Server specifications
          cpu_cores: 4
          memory_gb: 8
          disk_gb: 80

          # Production resource limits
          max_containers: 10
          docker_memory_limit: 6g
          docker_cpu_limit: 3.5

      vars:
        # Production environment variables
        log_level: warning
        deploy_timeout: 300
        health_check_retries: 15
        rollback_enabled: true
@@ -1,73 +0,0 @@
---
# Staging Inventory for Custom PHP Framework
# Test environment for michaelschiemer.de

all:
  vars:
    # Environment configuration
    environment: staging
    domain_name: staging.michaelschiemer.de
    app_name: custom-php-framework

    # SSL Configuration
    ssl_email: kontakt@michaelschiemer.de
    ssl_provider: letsencrypt

    # PHP Configuration
    php_version: "8.4"
    php_fpm_version: "8.4"

    # Security settings (more relaxed for testing)
    security_level: medium
    firewall_strict_mode: false
    fail2ban_enabled: true

    # Docker configuration
    docker_edition: ce
    docker_version: "latest"
    docker_compose_version: "2.20.0"

    # Monitoring (basic for staging)
    monitoring_enabled: true
    health_checks_enabled: true

    # Backup configuration (minimal)
    backup_enabled: false
    backup_retention_days: 7

  children:
    web_servers:
      hosts:
        michaelschiemer-staging-web-01:
          # Can use the same server with different ports/containers
          ansible_host: 94.16.110.151
          ansible_user: deploy
          ansible_ssh_private_key_file: ~/.ssh/id_rsa_deploy
          server_role: staging

          # Server specifications (shared with prod)
          cpu_cores: 2
          memory_gb: 4
          disk_gb: 40

          # Service configuration (reduced for staging)
          nginx_worker_processes: 2
          nginx_worker_connections: 512
          nginx_port: 8080

          # PHP-FPM configuration (reduced)
          php_fpm_pm_max_children: 20
          php_fpm_pm_start_servers: 3
          php_fpm_pm_min_spare_servers: 2
          php_fpm_pm_max_spare_servers: 10

          # Docker resource limits (reduced)
          docker_memory_limit: 3g
          docker_cpu_limit: 1.5

      vars:
        # Web server specific vars
        nginx_enabled: true
        ssl_certificate_path: /etc/letsencrypt/live/{{ domain_name }}
        log_level: info
        debug_mode: true
@@ -1,14 +0,0 @@
#!/bin/bash
# Quick script to show PHP logs from the production server
# Usage: ./logs.sh [lines]
# Default: 50 lines

LINES="${1:-50}"

echo "📋 Showing last $LINES lines of PHP logs from production..."
echo ""

ssh -i ~/.ssh/production deploy@michaelschiemer.de "docker logs php --tail $LINES"

echo ""
echo "✅ Done!"
@@ -1,14 +0,0 @@
#!/bin/bash
# Show Nginx error logs from the production server
# Usage: ./nginx-logs.sh [lines]
# Default: 50 lines

LINES="${1:-50}"

echo "📋 Showing last $LINES lines of Nginx error logs from production..."
echo ""

ssh -i ~/.ssh/production deploy@michaelschiemer.de "docker exec web tail -n $LINES /var/log/nginx/error.log"

echo ""
echo "✅ Done!"
@@ -1,239 +0,0 @@
# Git-Based Deployment with Gitea

## Overview

The Git-based deployment playbook (`deploy-git-based.yml`) enables zero-downtime deployments using Gitea as the Git repository server.

## Prerequisites

### 1. Gitea Server Setup

The Gitea server must be reachable from the production server. There are two options:

#### Option A: Publicly reachable Gitea server (recommended for production)

```bash
# Gitea must be reachable over the internet
git_repo: "git@git.michaelschiemer.de:michael/michaelschiemer.git"
```

**Required**:
- Public IP or domain for Gitea
- Firewall rule for port 2222 (SSH)
- SSL/TLS for the web interface (port 9443/3000)

#### Option B: Gitea on the production server

```bash
# Gitea runs on the same server as the application
git_repo: "git@localhost:michael/michaelschiemer.git"
```

**Required**:
- Deploy the Gitea container on the production server
- Docker Compose setup on the production server
- Local SSH configuration

### 2. SSH Key Setup

The deploy user on the production server needs an SSH key:

```bash
# On the production server
ssh-keygen -t ed25519 -C "deployment@michaelschiemer" -f ~/.ssh/gitea_deploy_key -N ""

# Add the public key to Gitea (via web UI or API)
cat ~/.ssh/gitea_deploy_key.pub
```

### 3. SSH Keys in the Secrets Directory

The SSH keys must live in the `deployment/infrastructure/secrets/` directory:

```bash
deployment/infrastructure/secrets/
├── .gitignore             # Protects keys from accidental commits
├── gitea_deploy_key       # Private key
└── gitea_deploy_key.pub   # Public key
```

**IMPORTANT**: The `secrets/` directory is protected via `.gitignore` and must NEVER be committed!

## Deployment Flow

### 1. Copy the SSH key to the production server

The playbook automatically copies the SSH keys from `secrets/` to the production server:

```yaml
- name: Copy Gitea deploy SSH private key
  copy:
    src: "{{ playbook_dir }}/../secrets/gitea_deploy_key"
    dest: "/home/{{ app_user }}/.ssh/gitea_deploy_key"
    mode: '0600'
```

### 2. SSH Configuration

The playbook automatically creates the SSH configuration:

```ssh
Host localhost
    HostName localhost
    Port 2222
    User git
    IdentityFile ~/.ssh/gitea_deploy_key
    StrictHostKeyChecking no
    UserKnownHostsFile /dev/null

Host git.michaelschiemer.de
    HostName git.michaelschiemer.de
    Port 2222
    User git
    IdentityFile ~/.ssh/gitea_deploy_key
    StrictHostKeyChecking no
    UserKnownHostsFile /dev/null
```

### 3. Git Clone

The playbook clones the repository into a release directory:

```bash
/var/www/michaelschiemer/
├── releases/
│   ├── 1761524417/                  # Timestamp-based releases
│   └── v1.0.0/                      # Tag-based releases
├── shared/                          # Shared directories (symlinked)
│   ├── storage/
│   └── .env.production
└── current -> releases/1761524417   # Symlink to the active release
```

### 4. Zero-Downtime Deployment

- The new release is cloned
- Dependencies are installed
- Symlinks are created
- The `current` symlink is switched atomically
- A health check runs
- On failure: automatic rollback
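The atomic `current` switch in the steps above can be sketched in plain shell. This is a hedged illustration with throwaway paths, not the playbook's actual task: a scratch symlink is renamed over `current`, and because `rename(2)` is atomic, requests always see either the old or the new release, never a missing path.

```bash
set -e
base=$(mktemp -d)   # stand-in for /var/www/michaelschiemer
mkdir -p "$base/releases/v1.0.0" "$base/releases/v1.0.1"
ln -s "$base/releases/v1.0.0" "$base/current"

# Switch releases: point a scratch symlink at the new release, then
# rename it over "current" in one atomic step (GNU mv -T treats the
# destination as a file, so the symlink itself is replaced).
ln -sfn "$base/releases/v1.0.1" "$base/current.tmp"
mv -T "$base/current.tmp" "$base/current"

readlink "$base/current"
```

Rolling back is the same operation with the previous release as the target.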
## Running a Deployment

### Standard deployment (main branch)

```bash
cd deployment/infrastructure
ansible-playbook -i inventories/production/hosts.yml playbooks/deploy-git-based.yml
```

### Tag-based deployment

```bash
ansible-playbook -i inventories/production/hosts.yml playbooks/deploy-git-based.yml \
  --extra-vars "release_tag=v1.0.0"
```

### Custom branch deployment

```bash
ansible-playbook -i inventories/production/hosts.yml playbooks/deploy-git-based.yml \
  --extra-vars "git_branch=develop"
```

## Adjusting the Configuration

### Changing the Git repository URL

In `deploy-git-based.yml`:

```yaml
vars:
  git_repo: "git@git.michaelschiemer.de:michael/michaelschiemer.git"
  # Or for local testing:
  # git_repo: "git@localhost:michael/michaelschiemer.git"
```

### Adjusting shared directories

```yaml
vars:
  shared_dirs:
    - storage/logs
    - storage/cache
    - storage/sessions
    - storage/uploads
    - public/uploads

  shared_files:
    - .env.production
```

## Troubleshooting

### Error: "Connection refused" to Gitea

**Problem**: The production server cannot reach Gitea.

**Solution**:
1. Check whether Gitea is publicly reachable: `nc -zv git.michaelschiemer.de 2222`
2. Check the firewall rules on the Gitea server
3. For local testing: use the rsync-based deployment instead

### Error: "Permission denied (publickey)"

**Problem**: The SSH key is not configured correctly.

**Solution**:
1. Check that the public key was added in Gitea
2. Check the SSH key permissions: `chmod 600 ~/.ssh/gitea_deploy_key`
3. Test the SSH connection manually: `ssh -p 2222 -i ~/.ssh/gitea_deploy_key git@git.michaelschiemer.de`

### Health check fails

**Problem**: The deployment health check failed.

**Solution**:
1. An automatic rollback has already been performed
2. Check the logs: `tail -f /var/www/michaelschiemer/deploy.log`
3. Check the application logs: `/var/www/michaelschiemer/shared/storage/logs/`

## Comparison: Git-based vs. rsync-based

### Git-based deployment (this playbook)

**Advantages**:
- Zero downtime via symlink switch
- Atomic releases with rollback capability
- Git history on the production server
- Easy rollbacks to previous releases

**Disadvantages**:
- The Gitea server must be reachable
- Additional infrastructure (Gitea)
- SSH key management required

### rsync-based deployment

**Advantages**:
- No additional infrastructure
- Works with a local development environment
- Faster for small changes

**Disadvantages**:
- No zero downtime without additional logic
- No Git history on the server
- Rollback is more complicated

## Recommendation

**For production**: Git-based deployment with a publicly reachable Gitea server
**For development/testing**: rsync-based deployment (already implemented and tested)

## Related Files

- `deploy-git-based.yml` - Git-based deployment playbook
- `deploy-rsync-based.yml` - rsync-based deployment playbook (alternative)
- `rollback-git-based.yml` - Rollback playbook for Git deployments
- `secrets/.gitignore` - Protection for SSH keys
@@ -1,652 +0,0 @@
# Rsync-Based Deployment

**Production-ready zero-downtime deployment** with rsync, release management, and automatic rollback.

## Overview

The rsync-based deployment playbook (`deploy-rsync-based.yml`) provides a robust solution for production deployments without any external Git server dependency.

**Advantages**:
- ✅ Zero downtime via symlink switch
- ✅ Automatic rollback on health check failure
- ✅ Git tag-based release management
- ✅ No Gitea/GitHub dependency
- ✅ Fast for small changes
- ✅ Easy rollback to previous releases

## Deployment Architecture

### Release Structure

```
/home/deploy/michaelschiemer/
├── releases/
│   ├── 1761499893/              # Timestamp-based releases
│   ├── v1.0.0/                  # Git tag-based releases
│   └── v1.2.3/
├── shared/                      # Shared between releases
│   ├── storage/
│   │   └── sessions/
│   ├── public/
│   │   └── uploads/
│   └── .env.production          # Shared config
├── current -> releases/v1.2.3   # Symlink to the active release
└── deploy.log                   # Deployment history
```

### Zero-Downtime Process

```
1. Build assets (local)
   ↓
2. Rsync to new release directory
   ↓
3. Create symlinks to shared directories
   ↓
4. Start Docker containers
   ↓
5. Health check (3 retries)
   ├─ Success → Switch 'current' symlink (atomic)
   └─ Failure → Rollback to previous release
   ↓
6. Clean up old releases (keep the last 5)
```

## Prerequisites

### 1. SSH Key Setup

SSH keys for the production server must be configured:

```bash
# SSH config in ~/.ssh/config
Host michaelschiemer-prod
    HostName 94.16.110.151
    User deploy
    IdentityFile ~/.ssh/production
    StrictHostKeyChecking no
```

### 2. Production Server Requirements

- **User**: `deploy` user with sudo rights
- **Docker**: Docker and Docker Compose installed
- **Directory**: `/home/deploy/michaelschiemer` with correct permissions

### 3. Local Development Setup

- **Composer**: for `composer install`
- **NPM**: for `npm run build`
- **Git**: for tag-based release management (optional)
- **Ansible**: Ansible ≥ 2.13 installed

## Deployment Workflows

### Standard Deployment (timestamp-based)

Deploys the current state without a Git tag:

```bash
cd deployment/infrastructure
ansible-playbook -i inventories/production/hosts.yml playbooks/deploy-rsync-based.yml
```

**Release name**: Unix timestamp (e.g. `1761499893`)

### Tagged Release Deployment (recommended)

Deploys a specific Git tag:

```bash
# Option 1: specify the tag explicitly
ansible-playbook -i inventories/production/hosts.yml playbooks/deploy-rsync-based.yml \
  --extra-vars "release_tag=v1.2.3"

# Option 2: use the current Git tag (auto-detected)
git tag v1.2.3
git push origin v1.2.3
ansible-playbook -i inventories/production/hosts.yml playbooks/deploy-rsync-based.yml
```

**Release name**: Git tag (e.g. `v1.2.3`)

### Force Deployment (override lock)

If a deployment lock exists:

```bash
ansible-playbook -i inventories/production/hosts.yml playbooks/deploy-rsync-based.yml \
  --extra-vars "force_deploy=true"
```

## Release Management

### Git Tag Workflow

**Semantic versioning** is recommended:

```bash
# 1. Create a Git tag
git tag -a v1.2.3 -m "Release v1.2.3: Feature XYZ"
git push origin v1.2.3

# 2. Deploy the tagged release
ansible-playbook -i inventories/production/hosts.yml playbooks/deploy-rsync-based.yml \
  --extra-vars "release_tag=v1.2.3"

# 3. Verify the deployment
ssh deploy@94.16.110.151 'ls -la /home/deploy/michaelschiemer/releases/'
```

### Auto-Detection of Git Tags

If `release_tag` is not given, the playbook automatically tries to use the current Git tag:

```bash
# On a tagged commit
git describe --tags --exact-match   # Prints: v1.2.3

# The deployment automatically uses v1.2.3 as the release name
ansible-playbook -i inventories/production/hosts.yml playbooks/deploy-rsync-based.yml
```

**Fallback**: if no Git tag is present → timestamp as release name
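This tag-or-timestamp fallback can be sketched in plain shell (a hedged illustration, not the playbook's actual task; the throwaway repository here stands in for the project checkout):

```bash
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.email=deploy@example.test -c user.name=deploy \
    commit -q --allow-empty -m "init"

# Use the exact tag on HEAD if there is one, otherwise a Unix timestamp.
release_name=$(git describe --tags --exact-match 2>/dev/null || date +%s)
echo "before tagging: $release_name"

git tag v1.2.3
release_name=$(git describe --tags --exact-match 2>/dev/null || date +%s)
echo "after tagging: $release_name"
```

`git describe --tags --exact-match` fails unless HEAD is exactly on a tag, which is what makes the `||` fallback fire for untagged commits.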
### Listing Releases

```bash
ansible -i inventories/production/hosts.yml web_servers -b -a \
  "ls -lt /home/deploy/michaelschiemer/releases | head -10"
```

**Output**:
```
total 20
drwxr-xr-x 10 deploy deploy 4096 Oct 26 18:50 v1.2.3
drwxr-xr-x 10 deploy deploy 4096 Oct 25 14:32 v1.2.2
drwxr-xr-x 10 deploy deploy 4096 Oct 24 10:15 1761499893
drwxr-xr-x 10 deploy deploy 4096 Oct 23 09:00 v1.2.1
lrwxrwxrwx  1 deploy deploy   56 Oct 26 18:50 current -> /home/deploy/michaelschiemer/releases/v1.2.3
```

## Rollback Mechanisms

### Automatic Rollback

On health check failure, the playbook rolls back automatically:

1. **Stop the failed release's containers**
2. **Switch the `current` symlink** back to `previous_release`
3. **Start the previous release's containers**
4. **Remove the failed release directory**
5. **Log the rollback event**

**Trigger**: health check status ≠ 200 (after 3 retries with 5 s delay)

### Manual Rollback

Manual rollback to a previous release:

```bash
# 1. List available releases
ansible -i inventories/production/hosts.yml web_servers -b -a \
  "ls -lt /home/deploy/michaelschiemer/releases"

# 2. Identify the target release (e.g. v1.2.2)
TARGET_RELEASE="v1.2.2"

# 3. Manual rollback via Ansible
ansible -i inventories/production/hosts.yml web_servers -b -m shell -a "
  cd /home/deploy/michaelschiemer && \
  docker compose -f current/docker-compose.yml -f current/docker-compose.production.yml down && \
  ln -sfn releases/${TARGET_RELEASE} current && \
  docker compose -f current/docker-compose.yml -f current/docker-compose.production.yml up -d
"

# 4. Verify the rollback
curl -k https://94.16.110.151/health/summary
```

**Or**: create a rollback playbook:

```yaml
# playbooks/rollback-rsync.yml
- name: Manual Rollback to Previous Release
  hosts: web_servers
  become: true
  vars:
    app_name: michaelschiemer
    app_user: deploy
    app_base_path: "/home/{{ app_user }}/{{ app_name }}"
    target_release: "{{ rollback_target }}"  # --extra-vars "rollback_target=v1.2.2"

  tasks:
    - name: Stop current release
      command: docker compose down
      args:
        chdir: "{{ app_base_path }}/current"
      become_user: "{{ app_user }}"

    - name: Switch to target release
      file:
        src: "{{ app_base_path }}/releases/{{ target_release }}"
        dest: "{{ app_base_path }}/current"
        state: link
        force: yes

    - name: Start target release
      command: docker compose -f docker-compose.yml -f docker-compose.production.yml up -d
      args:
        chdir: "{{ app_base_path }}/current"
      become_user: "{{ app_user }}"

    - name: Health check
      uri:
        url: "https://{{ ansible_host }}/health/summary"
        method: GET
        status_code: 200
        validate_certs: no
      retries: 3
      delay: 5

# Usage:
# ansible-playbook -i inventories/production/hosts.yml playbooks/rollback-rsync.yml --extra-vars "rollback_target=v1.2.2"
```

## Health Checks

### Configured Health Endpoints

**Primary health check**: `https://{{ ansible_host }}/health/summary`

**Retry strategy**:
- Retries: 3
- Delay: 5 seconds
- Success: HTTP 200 status code

### Health Check Flow

```yaml
- name: Health check - Summary endpoint (HTTPS)
  uri:
    url: "https://{{ ansible_host }}/health/summary"
    method: GET
    return_content: yes
    status_code: 200
    validate_certs: no
    follow_redirects: none
  register: health_check
  retries: 3
  delay: 5
  until: health_check.status == 200
  ignore_errors: yes

- name: Rollback on health check failure
  block:
    - name: Stop failed release containers
    - name: Switch symlink back to previous release
    - name: Start previous release containers
    - name: Remove failed release
    - name: Log rollback
    - name: Fail deployment
  when: health_check.status != 200
```

### Custom Health Endpoints

Add further health checks:

```yaml
# After the primary health check in deploy-rsync-based.yml
- name: Health check - Database connectivity
  uri:
    url: "https://{{ ansible_host }}/health/database"
    method: GET
    status_code: 200
    validate_certs: no
  retries: 2
  delay: 3
  ignore_errors: yes
  register: db_health_check

- name: Health check - Cache service
  uri:
    url: "https://{{ ansible_host }}/health/cache"
    method: GET
    status_code: 200
    validate_certs: no
  retries: 2
  delay: 3
  ignore_errors: yes
  register: cache_health_check

- name: Aggregate health check results
  set_fact:
    overall_health: "{{ health_check.status == 200 and db_health_check.status == 200 and cache_health_check.status == 200 }}"

- name: Rollback on any health check failure
  block:
    # ... rollback steps ...
  when: not overall_health
```

## Monitoring & Logging

### Deployment Log

All deployments are logged:

```bash
# Show the deployment log
ssh deploy@94.16.110.151 'tail -50 /home/deploy/michaelschiemer/deploy.log'
```

**Log format**:
```
[2024-10-26T18:50:30Z] Deployment started - Release: v1.2.3 - User: michael
[2024-10-26T18:50:35Z] Release: v1.2.3 | Git Hash: a1b2c3d | Commit: a1b2c3d4e5f6g7h8i9j0
[2024-10-26T18:50:50Z] Symlink switched: /home/deploy/michaelschiemer/current -> releases/v1.2.3
[2024-10-26T18:50:55Z] Health check: 200
[2024-10-26T18:50:56Z] Cleanup: Kept 5 releases, removed 1
[2024-10-26T18:50:57Z] Deployment completed successfully - Release: v1.2.3
```
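Appending an entry in this format is a one-liner per event; a minimal sketch (the log path and release name are illustrative, not the playbook's actual values):

```bash
log=$(mktemp)   # stand-in for /home/deploy/michaelschiemer/deploy.log
release="v1.2.3"

# Bracketed ISO-8601 UTC timestamp, matching the log format above.
printf '[%s] Deployment started - Release: %s - User: %s\n' \
  "$(date -u +%Y-%m-%dT%H:%M:%SZ)" "$release" "$(whoami)" >> "$log"

cat "$log"
```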
### Docker Logs

```bash
# Application logs
ssh deploy@94.16.110.151 'cd /home/deploy/michaelschiemer/current && docker compose logs -f'

# Specific service
ssh deploy@94.16.110.151 'cd /home/deploy/michaelschiemer/current && docker compose logs -f php'
```

### System Monitoring

```bash
# Disk usage
ansible -i inventories/production/hosts.yml web_servers -b -a "df -h /home/deploy/michaelschiemer"

# Release directory sizes
ansible -i inventories/production/hosts.yml web_servers -b -a "du -sh /home/deploy/michaelschiemer/releases/*"

# Container status
ansible -i inventories/production/hosts.yml web_servers -b -a "docker ps"
```

## Configuration

### Shared Directories

Configured in `deploy-rsync-based.yml`:

```yaml
shared_dirs:
  - storage/sessions
  - public/uploads

shared_files:
  - .env.production
```

**Note**: `storage/logs`, `storage/cache`, and `storage/uploads` are managed via Docker volumes.

### Rsync Exclusions

Files/directories that are NOT deployed:

```yaml
rsync_excludes:
  - .git/
  - .github/
  - node_modules/
  - .env
  - .env.local
  - .env.development
  - storage/
  - public/uploads/
  - tests/
  - .idea/
  - .vscode/
  - "*.log"
  - .DS_Store
  - deployment/
  - database.sqlite
  - "*.cache"
  - .php-cs-fixer.cache
  - var/cache/
  - var/logs/
```

### Keep Releases

Number of releases to retain:

```yaml
keep_releases: 5  # Default: 5 releases
```

Change as needed:
```bash
ansible-playbook ... --extra-vars "keep_releases=10"
```
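The cleanup this setting controls boils down to deleting everything past the newest `keep_releases` entries; a minimal shell sketch (the directory layout and version names are illustrative, not the playbook's actual task):

```bash
set -e
releases=$(mktemp -d)   # stand-in for /home/deploy/michaelschiemer/releases
keep=5

# Create seven fake releases with distinct mtimes so "newest" is well defined.
for i in 1 2 3 4 5 6 7; do
  mkdir "$releases/v0.0.$i"
  touch -d "2024-10-0$i" "$releases/v0.0.$i"
done

# List newest-first, skip the first $keep, delete the rest.
ls -1t "$releases" | tail -n +$((keep + 1)) | while IFS= read -r old; do
  rm -rf "$releases/${old:?}"
done

ls -1 "$releases" | wc -l   # 5 remain
```

The `${old:?}` guard aborts rather than running `rm -rf "$releases/"` if a list entry is ever empty.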
|
||||
## Troubleshooting
|
||||
|
||||
### Problem: Deployment Lock existiert
|
||||
|
||||
**Error**:
|
||||
```
|
||||
FAILED! => msg: Deployment already in progress. Lock file exists: /home/deploy/michaelschiemer/.deploy.lock
|
||||
```
|
||||
|
||||
**Ursache**: Vorheriges Deployment wurde unterbrochen
|
||||
|
||||
**Lösung**:
|
||||
```bash
|
||||
# Option 1: Force deployment
|
||||
ansible-playbook ... --extra-vars "force_deploy=true"
|
||||
|
||||
# Option 2: Lock manuell entfernen
|
||||
ansible -i inventories/production/hosts.yml web_servers -b -m file \
|
||||
-a "path=/home/deploy/michaelschiemer/.deploy.lock state=absent"
|
||||
```
|
||||
|
||||
### Problem: Health Check schlägt fehl
|
||||
|
||||
**Error**:
|
||||
```
|
||||
FAILED! => Deployment failed - health check returned 503. Rolled back to previous release.
|
||||
```
|
||||
|
||||
**Diagnose**:
|
||||
```bash
|
||||
# 1. Check application logs
|
||||
ssh deploy@94.16.110.151 'cd /home/deploy/michaelschiemer/current && docker compose logs --tail=100'
|
||||
|
||||
# 2. Check container status
|
||||
ssh deploy@94.16.110.151 'docker ps -a'
|
||||
|
||||
# 3. Manual health check
|
||||
curl -k -v https://94.16.110.151/health/summary
|
||||
|
||||
# 4. Check deployment log
|
||||
ssh deploy@94.16.110.151 'tail -100 /home/deploy/michaelschiemer/deploy.log'
|
||||
```
|
||||
|
||||
**Häufige Ursachen**:
|
||||
- .env.production fehlt oder fehlerhaft
|
||||
- Database migration fehlgeschlagen
|
||||
- Docker container starten nicht
|
||||
- SSL Zertifikat Probleme
|
||||
|
||||
### Problem: Rsync zu langsam
|
||||
|
||||
**Symptom**: Deployment dauert mehrere Minuten
|
||||
|
||||
**Optimierung**:
|
||||
```yaml
|
||||
# In deploy-rsync-based.yml - rsync command erweitern
|
||||
--compress # Kompression aktiviert
|
||||
--delete-after # Löschen nach Transfer
|
||||
--delay-updates # Atomic updates
|
||||
```
|
||||
|
||||
**Alternative**: Rsync via lokales Netzwerk statt Internet:
|
||||
```yaml
|
||||
# Wenn Production Server im gleichen Netzwerk
|
||||
ansible_host: 192.168.1.100 # Lokale IP statt öffentliche
|
||||
```
|
||||
|
||||
### Problem: Git Tag nicht erkannt
|
||||
|
||||
**Symptom**: Deployment verwendet Timestamp statt Git Tag
|
||||
|
||||
**Diagnose**:
|
||||
```bash
|
||||
# Check ob auf getaggtem Commit
|
||||
git describe --tags --exact-match
|
||||
# Sollte: v1.2.3 (ohne Fehler)
|
||||
|
||||
# Check ob Tag existiert
|
||||
git tag -l
|
||||
```
|
||||
|
||||
**Lösung**:
|
||||
```bash
|
||||
# 1. Tag erstellen falls fehlend
|
||||
git tag v1.2.3
|
||||
git push origin v1.2.3
|
||||
|
||||
# 2. Oder Tag explizit angeben
|
||||
ansible-playbook ... --extra-vars "release_tag=v1.2.3"
|
||||
```
|
||||
|
||||
## Best Practices
|
||||
|
||||
### 1. Always Tag Releases
|
||||
|
||||
```bash
|
||||
# Vor Production Deployment immer Git Tag erstellen
|
||||
git tag -a v1.2.3 -m "Release v1.2.3: Feature description"
|
||||
git push origin v1.2.3
|
||||
```
|
||||
|
||||
**Vorteile**:
|
||||
- Klare Release-Historie
|
||||
- Einfaches Rollback zu spezifischen Versionen
|
||||
- Semantic Versioning tracking
|
||||
|
||||
### 2. Test Deployment in Staging First
|
||||
|
||||
```bash
|
||||
# Staging deployment (separate inventory)
|
||||
ansible-playbook -i inventories/staging/hosts.yml playbooks/deploy-rsync-based.yml \
|
||||
--extra-vars "release_tag=v1.2.3"
|
||||
|
||||
# Nach erfolgreichen Tests → Production
|
||||
ansible-playbook -i inventories/production/hosts.yml playbooks/deploy-rsync-based.yml \
|
||||
--extra-vars "release_tag=v1.2.3"
|
||||
```
|
||||
|
||||
### 3. Monitor Deployment Log
|
||||
|
||||
```bash
|
||||
# Real-time deployment monitoring
|
||||
ssh deploy@94.16.110.151 'tail -f /home/deploy/michaelschiemer/deploy.log'
|
||||
```
|
||||
|
||||
### 4. Backup vor Major Releases
|
||||
|
||||
```bash
|
||||
# Database backup vor Major Release
|
||||
ssh deploy@94.16.110.151 'cd /home/deploy/michaelschiemer/current && \
|
||||
docker compose exec php php console.php db:backup'
|
||||
```
|
||||
|
||||
### 5. Verify Health Before Release Tag
|
||||
|
||||
```bash
|
||||
# Health check auf Staging
|
||||
curl -k https://staging.michaelschiemer.de/health/summary
|
||||
|
||||
# Bei Erfolg → Production Tag
|
||||
git tag v1.2.3
|
||||
git push origin v1.2.3
|
||||
```
|
||||
|
||||
## Comparison: Rsync vs Git-based
|
||||
|
||||
### Rsync-based (Current)
|
||||
|
||||
**Vorteile**:
|
||||
- ✅ Keine Git-Server Abhängigkeit
|
||||
- ✅ Funktioniert mit lokalem Development
|
||||
- ✅ Schnell für kleine Änderungen
|
||||
- ✅ Einfaches Setup
|
||||
- ✅ Git Tag Support ohne External Server
|
||||
|
||||
**Nachteile**:
|
||||
- ❌ Keine Git-Historie auf Production Server
|
||||
- ❌ Erfordert lokale Build-Steps (Composer, NPM)
|
||||
- ❌ Rsync über Internet kann langsam sein
|
||||
|
||||
### Git-based
|
||||
|
||||
**Vorteile**:
|
||||
- ✅ Git-Historie auf Production Server
|
||||
- ✅ Atomare Releases mit Git Commits
|
||||
- ✅ Build direkt auf Production Server
|
||||
- ✅ Kein lokales Build erforderlich
|
||||
|
||||
**Nachteile**:
|
||||
- ❌ Gitea Server muss öffentlich erreichbar sein
|
||||
- ❌ Zusätzliche Infrastruktur (Gitea)
|
||||
- ❌ SSH Key Management komplexer
|
||||
|
||||
## Performance Optimizations

### 1. Pre-built Assets

Assets are built locally → faster deployment:

```yaml
pre_tasks:
  - name: Install Composer dependencies locally
  - name: Build NPM assets locally
```

### 2. Docker Layer Caching

Docker images are cached on the production server → faster container startup.

### 3. Shared Directories

Shared directories avoid unnecessary copying between releases:
- `storage/sessions`
- `public/uploads`
- `.env.production`
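
The layout behind this can be sketched as follows — a minimal, self-contained example with temporary placeholder paths (the real playbook uses `/home/deploy/michaelschiemer`): each release gets symlinks into `shared/`, and `current` is switched atomically to the release directory.

```shell
# Minimal sketch of the releases/shared/current layout (placeholder paths).
base=$(mktemp -d)
mkdir -p "$base/shared/storage/sessions" "$base/shared/public/uploads"
mkdir -p "$base/releases/v1.2.3/storage" "$base/releases/v1.2.3/public"
touch "$base/shared/.env.production"

# Symlink shared state into the release, so it survives across releases
ln -s "$base/shared/storage/sessions" "$base/releases/v1.2.3/storage/sessions"
ln -s "$base/shared/public/uploads"   "$base/releases/v1.2.3/public/uploads"
ln -s "$base/shared/.env.production"  "$base/releases/v1.2.3/.env.production"

# Atomically point 'current' at the new release (-n: don't follow existing link)
ln -sfn "$base/releases/v1.2.3" "$base/current"
```

Because `current` is a single symlink, the switch (and a rollback) is one atomic filesystem operation, and uploads written through any release land in `shared/`.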

### 4. Cleanup Old Releases

Only the last 5 releases are kept → saves disk space:

```yaml
keep_releases: 5
```
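
The pruning logic can be sketched in plain shell — a hedged illustration with hypothetical `release-N` directory names, not the playbook's exact implementation (which sorts by `ctime` via Ansible's `find`):

```shell
# Minimal sketch of keep_releases=5 pruning (hypothetical release names).
releases=$(mktemp -d)
for i in 1 2 3 4 5 6 7; do mkdir "$releases/release-$i"; done

keep=5
# Newest-first by name; everything past the first $keep entries is removed.
ls -1 "$releases" | sort -r | tail -n +$((keep + 1)) | while read -r old; do
  rm -rf "$releases/$old"
done

ls -1 "$releases" | sort    # release-3 .. release-7 remain
```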

## Related Files

- `deploy-rsync-based.yml` - Rsync-based deployment playbook
- `deploy-git-based.yml` - Git-based deployment playbook (alternative)
- `rollback-git-based.yml` - Git-based rollback playbook
- `inventories/production/hosts.yml` - Production server configuration

## Summary

The rsync-based deployment provides:
- ✅ **Production-ready** zero-downtime deployment
- ✅ **Git tag support** for a clear release history
- ✅ **Automatic rollback** on failures
- ✅ **Simple setup** without external dependencies
- ✅ **Fast and reliable** for development and production

**Recommendation**: Ideal for local development → production workflows without additional Git server infrastructure.
@@ -1,231 +0,0 @@
---
# Production Container Deployment Playbook
# Deploys pre-built container images for Custom PHP Framework

- name: Deploy Custom PHP Framework Application
  hosts: web_servers
  become: true
  gather_facts: true

  vars:
    # Environment variable with proper fallback
    deployment_env: "{{ deploy_environment | default('production') }}"
    app_path: "/var/www/html"
    backup_path: "/var/www/backups"
    image_tag: "{{ IMAGE_TAG | default('latest') }}"
    domain_name: "{{ DOMAIN_NAME | default('michaelschiemer.de') }}"
    backup_enabled: "{{ BACKUP_ENABLED | default(true) | bool }}"
    backup_retention_days: "{{ BACKUP_RETENTION_DAYS | default(30) }}"
    cdn_update: "{{ CDN_UPDATE | default(false) | bool }}"
    # Template/compose source paths, relative to the playbook directory
    compose_base_src: "{{ playbook_dir }}/../../../docker-compose.yml"
    compose_overlay_src: "{{ playbook_dir }}/../../applications/docker-compose.{{ deployment_env }}.yml"
    env_template_src: "{{ playbook_dir }}/../../applications/environments/.env.{{ deployment_env }}.template"
    # Compose project name: defaults to the basename of app_path (e.g. 'html')
    compose_project: "{{ compose_project_name | default(app_path | basename) }}"
  pre_tasks:
    - name: Verify deployment requirements
      assert:
        that:
          - app_path is defined
          - domain_name is defined
          - image_tag is defined
          - image_tag != 'latest' or deployment_env != 'production'
        fail_msg: "Production deployment requires specific image tag (not 'latest')"
      tags: always

    - name: Create required directories
      ansible.builtin.file:
        path: "{{ item }}"
        state: directory
        owner: deploy
        group: deploy
        mode: '0755'
      loop:
        - "{{ app_path }}"
        - "{{ backup_path }}"
        - /var/log/applications
      tags: always

    - name: Store current image tag for rollback
      ansible.builtin.shell: |
        if [ -f {{ app_path }}/.env.{{ deployment_env }} ]; then
          grep '^IMAGE_TAG=' {{ app_path }}/.env.{{ deployment_env }} | cut -d'=' -f2 > {{ app_path }}/.last_release || echo 'none'
        fi
      ignore_errors: true
      tags: backup
  tasks:
    - name: Check for existing deployment
      ansible.builtin.stat:
        path: "{{ app_path }}/docker-compose.yml"
      register: existing_deployment
      tags: deploy

    - name: Render environment file from template
      ansible.builtin.template:
        src: "{{ env_template_src }}"
        dest: "{{ app_path }}/.env.{{ deployment_env }}"
        owner: deploy
        group: deploy
        mode: '0600'
        backup: true
      vars:
        IMAGE_TAG: "{{ image_tag }}"
        DOMAIN_NAME: "{{ domain_name }}"
      # no_log: true # Disabled for debugging
      tags: deploy

    - name: Copy Docker Compose files (base + overlay)
      ansible.builtin.copy:
        src: "{{ item.src }}"
        dest: "{{ app_path }}/{{ item.dest }}"
        owner: deploy
        group: deploy
        mode: '0644'
      loop:
        - { src: "{{ compose_base_src }}", dest: "docker-compose.yml" }
        - { src: "{{ compose_overlay_src }}", dest: "docker-compose.{{ deployment_env }}.yml" }
      tags: deploy

    - name: Stop existing services gracefully if present
      community.docker.docker_compose_v2:
        project_src: "{{ app_path }}"
        files:
          - docker-compose.yml
          - "docker-compose.{{ deployment_env }}.yml"
        env_files:
          - ".env.{{ deployment_env }}"
        state: stopped
        timeout: 60
      when: existing_deployment.stat.exists
      ignore_errors: true
      tags: deploy

    - name: Create storage volumes with proper permissions
      ansible.builtin.file:
        path: "{{ app_path }}/{{ item }}"
        state: directory
        owner: www-data
        group: www-data
        mode: '0775'
      loop:
        - storage
        - storage/logs
        - storage/cache
        - var
        - var/logs
        - src/Framework/Cache/storage
        - src/Framework/Cache/storage/cache
      tags: deploy

    - name: Deploy application with Docker Compose v2
      community.docker.docker_compose_v2:
        project_src: "{{ app_path }}"
        files:
          - docker-compose.yml
          - "docker-compose.{{ deployment_env }}.yml"
        env_files:
          - ".env.{{ deployment_env }}"
        pull: "always"
        build: "never"
        state: present
        recreate: "auto"
        remove_orphans: true
        timeout: 300
      tags: deploy
    - name: Wait for PHP container to be healthy (label-based)
      community.docker.docker_container_info:
        filters:
          label:
            - "com.docker.compose.service=php"
            - "com.docker.compose.project={{ compose_project }}"
      register: php_info
      retries: 20
      delay: 10
      until: php_info.containers is defined and
             (php_info.containers | length) > 0 and
             (
               (php_info.containers[0].State.Health is defined and php_info.containers[0].State.Health.Status == "healthy")
               or
               php_info.containers[0].State.Status == "running"
             )
      tags: deploy

    - name: Run database migrations
      community.docker.docker_container_exec:
        container: "{{ php_info.containers[0].Id }}"
        command: php console.php db:migrate --force
        chdir: /var/www/html
      tags: deploy

    - name: Clear application caches
      community.docker.docker_container_exec:
        container: "{{ php_info.containers[0].Id }}"
        command: "php console.php {{ item }}"
        chdir: /var/www/html
      loop:
        - cache:clear
        - view:clear
      ignore_errors: true
      tags: deploy

    - name: Wait for application to be ready
      ansible.builtin.uri:
        url: "https://{{ domain_name }}/health"
        method: GET
        status_code: 200
        timeout: 30
        headers:
          User-Agent: "Mozilla/5.0 (Ansible Health Check)"
        validate_certs: true
      register: http_health
      retries: 15
      delay: 10
      until: http_health.status == 200
      tags: deploy

    - name: Store successful deployment tag
      ansible.builtin.copy:
        content: "{{ image_tag }}"
        dest: "{{ app_path }}/.last_successful_release"
        owner: deploy
        group: deploy
        mode: '0644'
      tags: deploy
  post_tasks:
    - name: Clean up old backups
      ansible.builtin.find:
        paths: "{{ backup_path }}"
        age: "{{ backup_retention_days }}d"
        file_type: directory
      register: old_backups
      when: backup_enabled
      tags: cleanup

    - name: Remove old backup directories
      ansible.builtin.file:
        path: "{{ item.path }}"
        state: absent
      loop: "{{ old_backups.files }}"
      when: backup_enabled and old_backups.files is defined
      tags: cleanup

    - name: CDN update notification
      ansible.builtin.debug:
        msg: "CDN update would be executed here (run separate CDN playbook)"
      when: cdn_update | default(false) | bool
      tags: cdn

    - name: Deployment success notification
      ansible.builtin.debug:
        msg:
          - "Application deployment completed successfully"
          - "Image Tag: {{ image_tag }}"
          - "Environment: {{ deployment_env }}"
          - "Domain: {{ domain_name }}"
          - "CDN Updated: {{ cdn_update }}"
      tags: always
@@ -1,442 +0,0 @@
---
# Git-based Deployment Playbook with Releases/Symlink Pattern (Gitea)
# Implements production-ready deployment with zero-downtime and rollback support
# Uses Gitea as Git repository server with SSH-based authentication
#
# Prerequisites:
# - SSH deploy key must be placed in deployment/infrastructure/secrets/gitea_deploy_key
# - Deploy key must be added to Gitea repository or user account
#
# Usage:
# ansible-playbook -i inventories/production/hosts.yml playbooks/deploy-git-based.yml
# ansible-playbook -i inventories/production/hosts.yml playbooks/deploy-git-based.yml --extra-vars "git_branch=main"
# ansible-playbook -i inventories/production/hosts.yml playbooks/deploy-git-based.yml --extra-vars "release_tag=v1.0.0"

- name: Deploy Custom PHP Framework (Git-based with Releases)
  hosts: web_servers
  become: true

  vars:
    # Application configuration
    app_name: michaelschiemer
    app_user: deploy
    app_group: deploy

    # Deployment paths
    app_base_path: "/var/www/{{ app_name }}"
    releases_path: "{{ app_base_path }}/releases"
    shared_path: "{{ app_base_path }}/shared"
    current_path: "{{ app_base_path }}/current"

    # Git configuration (Gitea)
    # Use localhost for local testing, git.michaelschiemer.de for production
    git_repo: "git@localhost:michael/michaelschiemer.git"
    git_branch: "{{ release_tag | default('main') }}"
    git_ssh_key: "/home/{{ app_user }}/.ssh/gitea_deploy_key"

    # Release configuration
    release_timestamp: "{{ ansible_date_time.epoch }}"
    release_name: "{{ release_tag | default(release_timestamp) }}"
    release_path: "{{ releases_path }}/{{ release_name }}"

    # Deployment settings
    keep_releases: 5
    composer_install_flags: "--no-dev --optimize-autoloader --no-interaction"

    # Shared directories and files
    shared_dirs:
      - storage/logs
      - storage/cache
      - storage/sessions
      - storage/uploads
      - public/uploads

    shared_files:
      - .env.production
  tasks:
    # ==========================================
    # 1. SSH Key Setup for Gitea Access
    # ==========================================

    - name: Create .ssh directory for deploy user
      file:
        path: "/home/{{ app_user }}/.ssh"
        state: directory
        owner: "{{ app_user }}"
        group: "{{ app_group }}"
        mode: '0700'

    - name: Copy Gitea deploy SSH private key
      copy:
        src: "{{ playbook_dir }}/../secrets/gitea_deploy_key"
        dest: "{{ git_ssh_key }}"
        owner: "{{ app_user }}"
        group: "{{ app_group }}"
        mode: '0600'

    - name: Copy Gitea deploy SSH public key
      copy:
        src: "{{ playbook_dir }}/../secrets/gitea_deploy_key.pub"
        dest: "{{ git_ssh_key }}.pub"
        owner: "{{ app_user }}"
        group: "{{ app_group }}"
        mode: '0644'

    - name: Configure SSH for Gitea (disable StrictHostKeyChecking)
      blockinfile:
        path: "/home/{{ app_user }}/.ssh/config"
        create: yes
        owner: "{{ app_user }}"
        group: "{{ app_group }}"
        mode: '0600'
        marker: "# {mark} ANSIBLE MANAGED BLOCK - Gitea SSH Config"
        block: |
          Host localhost
            HostName localhost
            Port 2222
            User git
            IdentityFile {{ git_ssh_key }}
            StrictHostKeyChecking no
            UserKnownHostsFile /dev/null

          Host git.michaelschiemer.de
            HostName git.michaelschiemer.de
            Port 2222
            User git
            IdentityFile {{ git_ssh_key }}
            StrictHostKeyChecking no
            UserKnownHostsFile /dev/null
    # ==========================================
    # 2. Directory Structure Setup
    # ==========================================

    - name: Create base application directory
      file:
        path: "{{ app_base_path }}"
        state: directory
        owner: "{{ app_user }}"
        group: "{{ app_group }}"
        mode: '0755'

    - name: Check if deployment lock exists
      stat:
        path: "{{ app_base_path }}/.deploy.lock"
      register: deploy_lock

    - name: Fail if deployment is already in progress
      fail:
        msg: "Deployment already in progress. Lock file exists: {{ app_base_path }}/.deploy.lock"
      when: deploy_lock.stat.exists

    - name: Create deployment lock
      file:
        path: "{{ app_base_path }}/.deploy.lock"
        state: touch
        owner: "{{ app_user }}"
        group: "{{ app_group }}"
        mode: '0644'

    - name: Log deployment start
      lineinfile:
        path: "{{ app_base_path }}/deploy.log"
        line: "[{{ ansible_date_time.iso8601 }}] Deployment started - Release: {{ release_name }} - User: {{ ansible_user_id }}"
        create: yes
        owner: "{{ app_user }}"
        group: "{{ app_group }}"

    - name: Create releases directory
      file:
        path: "{{ releases_path }}"
        state: directory
        owner: "{{ app_user }}"
        group: "{{ app_group }}"
        mode: '0755'

    - name: Create shared directory
      file:
        path: "{{ shared_path }}"
        state: directory
        owner: "{{ app_user }}"
        group: "{{ app_group }}"
        mode: '0755'

    - name: Create shared subdirectories
      file:
        path: "{{ shared_path }}/{{ item }}"
        state: directory
        owner: "{{ app_user }}"
        group: "{{ app_group }}"
        mode: '0755'
      loop: "{{ shared_dirs }}"
    # ==========================================
    # 2. Git Repository Clone
    # ==========================================

    - name: Clone repository to new release directory
      git:
        repo: "{{ git_repo }}"
        dest: "{{ release_path }}"
        version: "{{ git_branch }}"
        force: yes
        depth: 1
      become_user: "{{ app_user }}"
      register: git_clone

    - name: Get current commit hash
      command: git rev-parse HEAD
      args:
        chdir: "{{ release_path }}"
      register: commit_hash
      changed_when: false

    - name: Log commit hash
      lineinfile:
        path: "{{ app_base_path }}/deploy.log"
        line: "[{{ ansible_date_time.iso8601 }}] Commit: {{ commit_hash.stdout }}"

    # ==========================================
    # 3. Shared Files/Directories Symlinks
    # ==========================================

    - name: Remove shared directories from release (they will be symlinked)
      file:
        path: "{{ release_path }}/{{ item }}"
        state: absent
      loop: "{{ shared_dirs }}"

    - name: Create symlinks for shared directories
      file:
        src: "{{ shared_path }}/{{ item }}"
        dest: "{{ release_path }}/{{ item }}"
        state: link
        owner: "{{ app_user }}"
        group: "{{ app_group }}"
      loop: "{{ shared_dirs }}"

    - name: Create symlinks for shared files
      file:
        src: "{{ shared_path }}/{{ item }}"
        dest: "{{ release_path }}/{{ item }}"
        state: link
        owner: "{{ app_user }}"
        group: "{{ app_group }}"
      loop: "{{ shared_files }}"
      when: shared_files | length > 0
    # ==========================================
    # 4. Dependencies Installation
    # ==========================================

    - name: Install Composer dependencies
      composer:
        command: install
        arguments: "{{ composer_install_flags }}"
        working_dir: "{{ release_path }}"
      become_user: "{{ app_user }}"
      environment:
        COMPOSER_HOME: "/home/{{ app_user }}/.composer"

    - name: Check if package.json exists
      stat:
        path: "{{ release_path }}/package.json"
      register: package_json

    - name: Install NPM dependencies and build assets
      block:
        - name: Install NPM dependencies
          npm:
            path: "{{ release_path }}"
            state: present
            production: yes
          become_user: "{{ app_user }}"

        - name: Build production assets
          command: npm run build
          args:
            chdir: "{{ release_path }}"
          become_user: "{{ app_user }}"
      when: package_json.stat.exists

    # ==========================================
    # 5. File Permissions
    # ==========================================

    - name: Set correct ownership for release
      file:
        path: "{{ release_path }}"
        owner: "{{ app_user }}"
        group: "{{ app_group }}"
        recurse: yes

    - name: Make console script executable
      file:
        path: "{{ release_path }}/console.php"
        mode: '0755'
      ignore_errors: yes
    # ==========================================
    # 6. Database Migrations (Optional)
    # ==========================================

    - name: Run database migrations
      command: php console.php db:migrate --no-interaction
      args:
        chdir: "{{ release_path }}"
      become_user: "{{ app_user }}"
      when: run_migrations | default(false) | bool
      register: migrations_result

    - name: Log migration result
      lineinfile:
        path: "{{ app_base_path }}/deploy.log"
        line: "[{{ ansible_date_time.iso8601 }}] Migrations: {{ migrations_result.stdout | default('skipped') }}"
      when: run_migrations | default(false) | bool

    # ==========================================
    # 7. Symlink Switch (Zero-Downtime)
    # ==========================================

    - name: Get current release (before switch)
      stat:
        path: "{{ current_path }}"
      register: current_release_before

    - name: Store previous release path for rollback
      set_fact:
        previous_release: "{{ current_release_before.stat.lnk_source | default('none') }}"

    - name: Switch current symlink to new release (atomic operation)
      file:
        src: "{{ release_path }}"
        dest: "{{ current_path }}"
        state: link
        owner: "{{ app_user }}"
        group: "{{ app_group }}"
        force: yes

    - name: Log symlink switch
      lineinfile:
        path: "{{ app_base_path }}/deploy.log"
        line: "[{{ ansible_date_time.iso8601 }}] Symlink switched: {{ current_path }} -> {{ release_path }}"
    # ==========================================
    # 8. Health Checks
    # ==========================================

    - name: Wait for application to be ready
      wait_for:
        timeout: 10
      delegate_to: localhost

    - name: Health check - Summary endpoint
      uri:
        url: "http://{{ ansible_host }}/health/summary"
        method: GET
        return_content: yes
        status_code: 200
      register: health_check
      retries: 3
      delay: 5
      until: health_check.status == 200
      ignore_errors: yes

    - name: Log health check result
      lineinfile:
        path: "{{ app_base_path }}/deploy.log"
        line: "[{{ ansible_date_time.iso8601 }}] Health check: {{ health_check.status | default('FAILED') }}"

    - name: Rollback on health check failure
      block:
        - name: Switch symlink back to previous release
          file:
            src: "{{ previous_release }}"
            dest: "{{ current_path }}"
            state: link
            force: yes
          when: previous_release != 'none'

        - name: Remove failed release
          file:
            path: "{{ release_path }}"
            state: absent

        - name: Log rollback
          lineinfile:
            path: "{{ app_base_path }}/deploy.log"
            line: "[{{ ansible_date_time.iso8601 }}] ROLLBACK: Health check failed, reverted to {{ previous_release }}"

        - name: Fail deployment
          fail:
            msg: "Deployment failed - health check returned {{ health_check.status }}. Rolled back to previous release."
      when: health_check.status != 200
    # ==========================================
    # 9. Cleanup Old Releases
    # ==========================================

    - name: Get list of all releases
      find:
        paths: "{{ releases_path }}"
        file_type: directory
      register: all_releases

    - name: Sort releases by creation time
      set_fact:
        sorted_releases: "{{ all_releases.files | sort(attribute='ctime', reverse=true) }}"

    - name: Remove old releases (keep last {{ keep_releases }})
      file:
        path: "{{ item.path }}"
        state: absent
      loop: "{{ sorted_releases[keep_releases:] }}"
      when: sorted_releases | length > keep_releases

    - name: Log cleanup
      lineinfile:
        path: "{{ app_base_path }}/deploy.log"
        line: "[{{ ansible_date_time.iso8601 }}] Cleanup: Kept {{ [sorted_releases | length, keep_releases] | min }} releases, removed {{ [sorted_releases | length - keep_releases, 0] | max }}"
  post_tasks:
    - name: Cleanup and logging
      block:
        - name: Remove deployment lock
          file:
            path: "{{ app_base_path }}/.deploy.lock"
            state: absent

        - name: Log deployment completion
          lineinfile:
            path: "{{ app_base_path }}/deploy.log"
            line: "[{{ ansible_date_time.iso8601 }}] Deployment completed successfully - Release: {{ release_name }}"

        - name: Display deployment summary
          debug:
            msg:
              - "=========================================="
              - "Deployment Summary"
              - "=========================================="
              - "Release: {{ release_name }}"
              - "Commit: {{ commit_hash.stdout }}"
              - "Path: {{ release_path }}"
              - "Current: {{ current_path }}"
              - "Health Check: {{ health_check.status | default('N/A') }}"
              - "Previous Release: {{ previous_release }}"
              - "=========================================="

      rescue:
        - name: Remove deployment lock on failure
          file:
            path: "{{ app_base_path }}/.deploy.lock"
            state: absent

        - name: Log deployment failure
          lineinfile:
            path: "{{ app_base_path }}/deploy.log"
            line: "[{{ ansible_date_time.iso8601 }}] DEPLOYMENT FAILED - Release: {{ release_name }}"

        - name: Fail with error message
          fail:
            msg: "Deployment failed. Check {{ app_base_path }}/deploy.log for details."
@@ -1,556 +0,0 @@
---
# Rsync-based Deployment Playbook with Releases/Symlink Pattern
# Implements production-ready deployment with zero-downtime and rollback support
# No GitHub dependency - deploys directly from the local machine
#
# Usage:
# ansible-playbook -i inventories/production/hosts.yml playbooks/deploy-rsync-based.yml
# ansible-playbook -i inventories/production/hosts.yml playbooks/deploy-rsync-based.yml --extra-vars "release_tag=v1.0.0"

- name: Deploy Custom PHP Framework (Rsync-based with Releases)
  hosts: web_servers
  become: true

  vars:
    # Application configuration
    app_name: michaelschiemer
    app_user: deploy
    app_group: deploy

    # Deployment paths
    app_base_path: "/home/{{ app_user }}/{{ app_name }}"
    releases_path: "{{ app_base_path }}/releases"
    shared_path: "{{ app_base_path }}/shared"
    current_path: "{{ app_base_path }}/current"

    # Local source directory (project root on your machine)
    local_project_path: "{{ playbook_dir }}/../../.."

    # Release configuration
    release_timestamp: "{{ ansible_date_time.epoch }}"
    # Note: effective_release_tag is set in pre_tasks based on Git tags
    release_name: "{{ effective_release_tag | default(release_tag | default(release_timestamp)) }}"
    release_path: "{{ releases_path }}/{{ release_name }}"

    # Deployment settings
    keep_releases: 5
    composer_install_flags: "--no-dev --optimize-autoloader --no-interaction"

    # Shared directories that need symlinks
    # NOTE: storage/logs, storage/cache, storage/uploads are handled by Docker volumes
    shared_dirs:
      - storage/sessions
      - public/uploads

    shared_files:
      - .env.production

    # Rsync exclusions
    rsync_excludes:
      - .git/
      - .github/
      - node_modules/
      - .env
      - .env.local
      - .env.development
      - storage/
      - public/uploads/
      - tests/
      - .idea/
      - .vscode/
      - "*.log"
      - .DS_Store
      - deployment/
      - database.sqlite
      - "*.cache"
      - .php-cs-fixer.cache
      - var/cache/
      - var/logs/
      - "*.php85/"
      - src/**/*.php85/
  pre_tasks:
    # Git Tag Detection and Validation
    - name: Get current Git tag (if release_tag not specified)
      local_action:
        module: command
        cmd: git describe --tags --exact-match
        chdir: "{{ local_project_path }}"
      register: git_current_tag
      become: false
      ignore_errors: yes
      when: release_tag is not defined

    - name: Get current Git commit hash
      local_action:
        module: command
        cmd: git rev-parse --short HEAD
        chdir: "{{ local_project_path }}"
      register: git_commit_hash
      become: false

    - name: Set release_name from Git tag or timestamp
      set_fact:
        effective_release_tag: "{{ release_tag | default(git_current_tag.stdout if (git_current_tag is defined and git_current_tag.rc == 0) else release_timestamp) }}"
        git_hash: "{{ git_commit_hash.stdout }}"

    - name: Display deployment information
      debug:
        msg:
          - "=========================================="
          - "Deployment Information"
          - "=========================================="
          - "Release: {{ effective_release_tag }}"
          - "Git Hash: {{ git_hash }}"
          - "Source: {{ local_project_path }}"
          - "Target: {{ ansible_host }}"
          - "=========================================="

    - name: Install Composer dependencies locally before deployment
      local_action:
        module: command
        cmd: composer install {{ composer_install_flags }}
        chdir: "{{ local_project_path }}"
      become: false

    - name: Build NPM assets locally before deployment
      local_action:
        module: command
        cmd: npm run build
        chdir: "{{ local_project_path }}"
      become: false
    - name: Check if deployment lock exists
      stat:
        path: "{{ app_base_path }}/.deploy.lock"
      register: deploy_lock

    - name: Remove stale deployment lock if force flag is set
      file:
        path: "{{ app_base_path }}/.deploy.lock"
        state: absent
      when: deploy_lock.stat.exists and (force_deploy | default(false))

    - name: Fail if deployment is already in progress (without force)
      fail:
        msg: "Deployment already in progress. Lock file exists: {{ app_base_path }}/.deploy.lock. Use --extra-vars 'force_deploy=true' to override."
      when: deploy_lock.stat.exists and not (force_deploy | default(false))

    - name: Create deployment lock
      file:
        path: "{{ app_base_path }}/.deploy.lock"
        state: touch
        owner: "{{ app_user }}"
        group: "{{ app_group }}"
        mode: '0644'

    - name: Log deployment start
      lineinfile:
        path: "{{ app_base_path }}/deploy.log"
        line: "[{{ ansible_date_time.iso8601 }}] Deployment started - Release: {{ release_name }} - User: {{ ansible_user_id }}"
        create: yes
        owner: "{{ app_user }}"
        group: "{{ app_group }}"
  tasks:
    # ==========================================
    # 1. Directory Structure Setup
    # ==========================================

    - name: Create base application directory
      file:
        path: "{{ app_base_path }}"
        state: directory
        owner: "{{ app_user }}"
        group: "{{ app_group }}"
        mode: '0755'

    - name: Create releases directory
      file:
        path: "{{ releases_path }}"
        state: directory
        owner: "{{ app_user }}"
        group: "{{ app_group }}"
        mode: '0755'

    - name: Create shared directory
      file:
        path: "{{ shared_path }}"
        state: directory
        owner: "{{ app_user }}"
        group: "{{ app_group }}"
        mode: '0755'

    - name: Create shared subdirectories
      file:
        path: "{{ shared_path }}/{{ item }}"
        state: directory
        owner: "{{ app_user }}"
        group: "{{ app_group }}"
        mode: '0755'
      loop: "{{ shared_dirs }}"
    # ==========================================
    # 2. Rsync Application Code to New Release
    # ==========================================

    - name: Remove old release directory if exists (prevent permission issues)
      file:
        path: "{{ release_path }}"
        state: absent

    - name: Create new release directory
      file:
        path: "{{ release_path }}"
        state: directory
        owner: "{{ app_user }}"
        group: "{{ app_group }}"
        mode: '0755'

    - name: Temporarily rename .dockerignore to prevent rsync -F from reading it
      command: mv {{ local_project_path }}/.dockerignore {{ local_project_path }}/.dockerignore.bak
      delegate_to: localhost
      become: false
      ignore_errors: yes

    - name: Sync application code to new release via rsync (raw command to avoid -F flag)
      command: >
        rsync --delay-updates --compress --delete-after --archive --rsh='ssh -i {{ ansible_ssh_private_key_file }} -o StrictHostKeyChecking=no' --no-g --no-o
        {% for exclude in rsync_excludes %}--exclude='{{ exclude }}' {% endfor %}
        {{ local_project_path }}/ {{ app_user }}@{{ ansible_host }}:{{ release_path }}/
      delegate_to: localhost
      become: false

    - name: Restore .dockerignore after rsync
      command: mv {{ local_project_path }}/.dockerignore.bak {{ local_project_path }}/.dockerignore
      delegate_to: localhost
      become: false
      ignore_errors: yes

    - name: Set correct ownership for release
      file:
        path: "{{ release_path }}"
        owner: "{{ app_user }}"
        group: "{{ app_group }}"
        recurse: yes

    - name: Get local git commit hash (if available)
      command: git rev-parse HEAD
      args:
        chdir: "{{ local_project_path }}"
      register: commit_hash
      delegate_to: localhost
      become: false
      changed_when: false
      failed_when: false

    - name: Log release and commit information
      lineinfile:
        path: "{{ app_base_path }}/deploy.log"
        line: "[{{ ansible_date_time.iso8601 }}] Release: {{ effective_release_tag }} | Git Hash: {{ git_hash | default('N/A') }} | Commit: {{ commit_hash.stdout | default('N/A') }}"
      when: commit_hash.rc == 0
# ==========================================
|
||||
# 3. Shared Files/Directories Symlinks
|
||||
# ==========================================
|
||||
|
||||
- name: Remove shared directories from release (they will be symlinked)
|
||||
file:
|
||||
path: "{{ release_path }}/{{ item }}"
|
||||
state: absent
|
||||
loop: "{{ shared_dirs }}"
|
||||
|
||||
- name: Create parent directories for symlinks
|
||||
file:
|
||||
path: "{{ release_path }}/{{ item | dirname }}"
|
||||
state: directory
|
||||
owner: "{{ app_user }}"
|
||||
group: "{{ app_group }}"
|
||||
mode: '0755'
|
||||
loop: "{{ shared_dirs }}"
|
||||
# Skip if dirname is current directory ('.')
|
||||
when: (item | dirname) != '.'
|
||||
|
||||
- name: Create symlinks for shared directories
|
||||
file:
|
||||
src: "{{ shared_path }}/{{ item }}"
|
||||
dest: "{{ release_path }}/{{ item }}"
|
||||
state: link
|
||||
owner: "{{ app_user }}"
|
||||
group: "{{ app_group }}"
|
||||
force: yes
|
||||
loop: "{{ shared_dirs }}"
|
||||
|
||||
- name: Remove .env.production from release (will be symlinked)
|
||||
file:
|
||||
path: "{{ release_path }}/.env.production"
|
||||
state: absent
|
||||
|
||||
- name: Create symlink for .env.production
|
||||
file:
|
||||
src: "{{ shared_path }}/.env.production"
|
||||
dest: "{{ release_path }}/.env.production"
|
||||
state: link
|
||||
owner: "{{ app_user }}"
|
||||
group: "{{ app_group }}"
|
||||
force: yes
|
||||
|
||||
- name: Create .env symlink with relative path to shared .env.production for Docker container access
|
||||
file:
|
||||
src: "../../shared/.env.production"
|
||||
dest: "{{ release_path }}/.env"
|
||||
state: link
|
||||
owner: "{{ app_user }}"
|
||||
group: "{{ app_group }}"
|
||||
force: yes
|
||||
|
||||
# ==========================================
|
||||
# 4. Dependencies Installation
|
||||
# ==========================================
|
||||
|
||||
# Composer dependencies and NPM assets are already built locally and rsync'd
|
||||
# No need to run composer install or npm build on the server
|
||||
|
||||
# ==========================================
|
||||
# 5. File Permissions
|
||||
# ==========================================
|
||||
|
||||
- name: Make console script executable
|
||||
file:
|
||||
path: "{{ release_path }}/console.php"
|
||||
mode: '0755'
|
||||
ignore_errors: yes
|
||||
|
||||
# ==========================================
|
||||
# 6. Database Migrations (Optional)
|
||||
# ==========================================
|
||||
|
||||
- name: Run database migrations
|
||||
command: php console.php db:migrate --no-interaction
|
||||
args:
|
||||
chdir: "{{ release_path }}"
|
||||
become_user: "{{ app_user }}"
|
||||
when: run_migrations | default(false) | bool
|
||||
register: migrations_result
|
||||
|
||||
- name: Log migration result
|
||||
lineinfile:
|
||||
path: "{{ app_base_path }}/deploy.log"
|
||||
line: "[{{ ansible_date_time.iso8601 }}] Migrations: {{ migrations_result.stdout | default('skipped') }}"
|
||||
when: run_migrations | default(false) | bool
|
||||
|
||||
# ==========================================
|
||||
# 7. Prepare for Deployment
|
||||
# ==========================================
|
||||
|
||||
- name: Get current release (before switch)
|
||||
stat:
|
||||
path: "{{ current_path }}"
|
||||
register: current_release_before
|
||||
|
||||
- name: Stop existing Docker containers (if any)
|
||||
command: docker compose -f docker-compose.yml -f docker-compose.production.yml down
|
||||
args:
|
||||
chdir: "{{ current_path }}"
|
||||
become_user: "{{ app_user }}"
|
||||
when: current_release_before.stat.exists
|
||||
ignore_errors: yes
|
||||
|
||||
- name: Remove any remaining containers (force cleanup all)
|
||||
shell: |
|
||||
docker stop certbot db redis php web queue-worker 2>/dev/null || true
|
||||
docker rm certbot db redis php web queue-worker 2>/dev/null || true
|
||||
become_user: "{{ app_user }}"
|
||||
ignore_errors: yes
|
||||
|
||||
# ==========================================
|
||||
# 8. Symlink Switch (Zero-Downtime)
|
||||
# ==========================================
|
||||
|
||||
- name: Store previous release path for rollback
|
||||
set_fact:
|
||||
previous_release: "{{ current_release_before.stat.lnk_source | default('none') }}"
|
||||
|
||||
- name: Switch current symlink to new release (atomic operation)
|
||||
file:
|
||||
src: "{{ release_path }}"
|
||||
dest: "{{ current_path }}"
|
||||
state: link
|
||||
owner: "{{ app_user }}"
|
||||
group: "{{ app_group }}"
|
||||
force: yes
|
||||
|
||||
- name: Log symlink switch
|
||||
lineinfile:
|
||||
path: "{{ app_base_path }}/deploy.log"
|
||||
line: "[{{ ansible_date_time.iso8601 }}] Symlink switched: {{ current_path }} -> {{ release_path }}"
|
||||
|
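The symlink flip operates on a Capistrano-style directory layout. A sketch of that layout, with illustrative release timestamps (`app_base_path` is `/var/www/michaelschiemer` per the playbook vars):

```yaml
# /var/www/michaelschiemer/                 ({{ app_base_path }})
# ├── current -> releases/20241025123456    (flipped by the symlink-switch task)
# ├── releases/
# │   ├── 20241025123456/                   (new release)
# │   └── 20241020090000/                   (previous release, rollback target)
# ├── shared/
# │   └── .env.production                   (symlinked into each release)
# └── deploy.log
```

Because `current` is only repointed after the release directory is fully synced, in-flight requests keep resolving against a complete tree.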
    # ==========================================
    # 8.5. SSL Certificate Setup
    # ==========================================

    - name: Create SSL directory in release
      file:
        path: "{{ release_path }}/ssl"
        state: directory
        owner: "{{ app_user }}"
        group: "{{ app_group }}"
        mode: '0755'

    - name: Copy SSL certificates from certbot to release (if they exist)
      shell: |
        if docker ps | grep -q certbot; then
          docker cp certbot:/etc/letsencrypt/archive/michaelschiemer.de/fullchain1.pem {{ release_path }}/ssl/fullchain.pem 2>/dev/null || true
          docker cp certbot:/etc/letsencrypt/archive/michaelschiemer.de/privkey1.pem {{ release_path }}/ssl/privkey.pem 2>/dev/null || true
          chown {{ app_user }}:{{ app_group }} {{ release_path }}/ssl/*.pem 2>/dev/null || true
        fi
      args:
        chdir: "{{ current_path }}"
      ignore_errors: yes

    # ==========================================
    # 9. Start Docker Containers
    # ==========================================

    - name: Start Docker containers with new release
      command: docker compose -f docker-compose.yml -f docker-compose.production.yml up -d --build
      args:
        chdir: "{{ current_path }}"
      become_user: "{{ app_user }}"

    - name: Wait for containers to be ready
      pause:
        seconds: 15

    # ==========================================
    # 10. Health Checks
    # ==========================================

    - name: Wait for application to be ready
      pause:
        seconds: 10

    - name: Health check - Nginx ping endpoint (HTTPS)
      uri:
        url: "https://{{ ansible_host }}/ping"
        method: GET
        return_content: yes
        status_code: 200
        validate_certs: no
        follow_redirects: none
      register: health_check
      retries: 3
      delay: 5
      until: health_check.status == 200
      ignore_errors: yes

    - name: Log health check result
      lineinfile:
        path: "{{ app_base_path }}/deploy.log"
        line: "[{{ ansible_date_time.iso8601 }}] Health check: {{ health_check.status | default('FAILED') }}"

    - name: Rollback on health check failure
      when: health_check.status != 200
      block:
        - name: Stop failed release containers
          command: docker compose down
          args:
            chdir: "{{ current_path }}"
          become_user: "{{ app_user }}"

        - name: Switch symlink back to previous release
          file:
            src: "{{ previous_release }}"
            dest: "{{ current_path }}"
            state: link
            force: yes
          when: previous_release != 'none'

        - name: Start previous release containers
          command: docker compose -f docker-compose.yml -f docker-compose.production.yml up -d
          args:
            chdir: "{{ current_path }}"
          become_user: "{{ app_user }}"
          when: previous_release != 'none'

        - name: Remove failed release
          file:
            path: "{{ release_path }}"
            state: absent

        - name: Log rollback
          lineinfile:
            path: "{{ app_base_path }}/deploy.log"
            line: "[{{ ansible_date_time.iso8601 }}] ROLLBACK: Health check failed, reverted to {{ previous_release }}"

        - name: Fail deployment
          fail:
            msg: "Deployment failed - health check returned {{ health_check.status }}. Rolled back to previous release."

    # ==========================================
    # 11. Cleanup Old Releases
    # ==========================================

    - name: Get list of all releases
      find:
        paths: "{{ releases_path }}"
        file_type: directory
      register: all_releases

    - name: Sort releases by creation time
      set_fact:
        sorted_releases: "{{ all_releases.files | sort(attribute='ctime', reverse=true) }}"

    - name: Remove old releases (keep last {{ keep_releases }})
      file:
        path: "{{ item.path }}"
        state: absent
      loop: "{{ sorted_releases[keep_releases:] }}"
      when: sorted_releases | length > keep_releases

    - name: Log cleanup
      lineinfile:
        path: "{{ app_base_path }}/deploy.log"
        line: "[{{ ansible_date_time.iso8601 }}] Cleanup: Kept {{ [sorted_releases | length, keep_releases] | min }} releases, removed {{ [sorted_releases | length - keep_releases, 0] | max }}"

  post_tasks:
    - name: Cleanup and logging
      block:
        - name: Remove deployment lock
          file:
            path: "{{ app_base_path }}/.deploy.lock"
            state: absent

        - name: Log deployment completion
          lineinfile:
            path: "{{ app_base_path }}/deploy.log"
            line: "[{{ ansible_date_time.iso8601 }}] Deployment completed successfully - Release: {{ release_name }}"

        - name: Display deployment summary
          debug:
            msg:
              - "=========================================="
              - "Deployment Summary"
              - "=========================================="
              - "Release: {{ release_name }}"
              - "Commit: {{ commit_hash.stdout | default('N/A') }}"
              - "Path: {{ release_path }}"
              - "Current: {{ current_path }}"
              - "Health Check: {{ health_check.status | default('N/A') }}"
              - "Previous Release: {{ previous_release }}"
              - "=========================================="

      rescue:
        - name: Remove deployment lock on failure
          file:
            path: "{{ app_base_path }}/.deploy.lock"
            state: absent

        - name: Log deployment failure
          lineinfile:
            path: "{{ app_base_path }}/deploy.log"
            line: "[{{ ansible_date_time.iso8601 }}] DEPLOYMENT FAILED - Release: {{ release_name }}"

        - name: Fail with error message
          fail:
            msg: "Deployment failed. Check {{ app_base_path }}/deploy.log for details."
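The tasks in this playbook reference a set of play-level variables defined outside this hunk. A minimal sketch of the shape those variables need (names taken from the tasks; the concrete values here are illustrative, not the project's real configuration):

```yaml
# Sketch of the vars this playbook expects - values are illustrative.
app_name: michaelschiemer
app_user: deploy
app_group: deploy
app_base_path: "/var/www/{{ app_name }}"
releases_path: "{{ app_base_path }}/releases"
shared_path: "{{ app_base_path }}/shared"
current_path: "{{ app_base_path }}/current"
release_path: "{{ releases_path }}/{{ release_name }}"
shared_dirs:          # directories symlinked from shared/ into each release
  - storage/logs
  - storage/uploads
rsync_excludes:       # paths never shipped to the server
  - .git
  - node_modules
keep_releases: 5      # how many releases the cleanup step retains
```

Anything missing from this set (e.g. `release_name`, `local_project_path`, `rsync_excludes`) makes the corresponding task fail with an undefined-variable error, so they belong in the play vars or the inventory.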
@@ -1,257 +0,0 @@
---
# Initial server setup playbook for fresh Netcup VPS
# Configures security, creates deploy user, and prepares server
# Run once on fresh server installation

- name: Initial Server Setup for Custom PHP Framework
  hosts: web_servers
  become: true
  gather_facts: true

  vars:
    deploy_user: "{{ deploy_user_name | default('deploy') }}"
    deploy_user_shell: /bin/bash

  pre_tasks:
    - name: Verify this is a fresh server setup
      assert:
        that:
          - fresh_server_setup is defined and fresh_server_setup == true
          - create_deploy_user is defined and create_deploy_user == true
        fail_msg: "This playbook is only for fresh server setup. Set fresh_server_setup=true to continue."
      tags: always

    - name: Update apt cache
      apt:
        update_cache: true
        cache_valid_time: 3600
      tags: system

  tasks:
    # System Updates and Basic Packages
    - name: Upgrade all packages
      apt:
        upgrade: full
        autoremove: true
        autoclean: true
      tags: system

    - name: Install essential packages
      apt:
        name:
          - curl
          - wget
          - git
          - unzip
          - zip
          - vim
          - htop
          - tree
          - rsync
          - ca-certificates
          - gnupg
          - lsb-release
          - software-properties-common
          - apt-transport-https
          - ufw
          - fail2ban
        state: present
      tags: system

    # User Management
    - name: Create deploy user
      user:
        name: "{{ deploy_user }}"
        comment: "Deployment user for Custom PHP Framework"
        shell: "{{ deploy_user_shell }}"
        home: "/home/{{ deploy_user }}"
        create_home: true
        groups: "{{ deploy_user_groups | default(['sudo']) }}"
        append: true
      tags: users

    - name: Set up authorized_keys for deploy user
      authorized_key:
        user: "{{ deploy_user }}"
        state: present
        key: "{{ lookup('file', ansible_ssh_private_key_file + '.pub') }}"
        comment: "Deploy key for {{ deploy_user }}@{{ inventory_hostname }}"
      tags: users

    - name: Allow deploy user sudo without password
      lineinfile:
        dest: /etc/sudoers.d/{{ deploy_user }}
        line: "{{ deploy_user }} ALL=(ALL) NOPASSWD:ALL"
        state: present
        mode: '0440'
        create: true
        validate: 'visudo -cf %s'
      tags: users

    # SSH Security Hardening
    - name: Configure SSH security
      lineinfile:
        dest: /etc/ssh/sshd_config
        regexp: "{{ item.regexp }}"
        line: "{{ item.line }}"
        backup: true
      loop:
        - { regexp: '^#?PasswordAuthentication', line: 'PasswordAuthentication no' }
        - { regexp: '^#?PubkeyAuthentication', line: 'PubkeyAuthentication yes' }
        - { regexp: '^#?PermitRootLogin', line: 'PermitRootLogin prohibit-password' }
        - { regexp: '^#?PermitEmptyPasswords', line: 'PermitEmptyPasswords no' }
        - { regexp: '^#?MaxAuthTries', line: 'MaxAuthTries 3' }
        - { regexp: '^#?ClientAliveInterval', line: 'ClientAliveInterval 300' }
        - { regexp: '^#?ClientAliveCountMax', line: 'ClientAliveCountMax 2' }
      notify: restart sshd
      tags: security

    - name: Restrict SSH to specific users
      lineinfile:
        dest: /etc/ssh/sshd_config
        line: "AllowUsers root {{ deploy_user }}"
        state: present
      notify: restart sshd
      tags: security

    # Firewall Configuration
    - name: Configure UFW default policies
      ufw:
        policy: "{{ item.policy }}"
        direction: "{{ item.direction }}"
      loop:
        - { policy: 'deny', direction: 'incoming' }
        - { policy: 'allow', direction: 'outgoing' }
      tags: firewall

    - name: Allow SSH through firewall
      ufw:
        rule: allow
        name: OpenSSH
      tags: firewall

    - name: Allow HTTP and HTTPS through firewall
      ufw:
        rule: allow
        port: "{{ item }}"
        proto: tcp
      loop:
        - 80
        - 443
      tags: firewall

    - name: Enable UFW
      ufw:
        state: enabled
      tags: firewall

    # Fail2ban Configuration
    - name: Configure fail2ban for SSH
      copy:
        dest: /etc/fail2ban/jail.local
        content: |
          [DEFAULT]
          bantime = 3600
          findtime = 600
          maxretry = 3

          [sshd]
          enabled = true
          port = ssh
          filter = sshd
          logpath = /var/log/auth.log
          maxretry = 3
          bantime = 3600
        backup: true
      notify: restart fail2ban
      tags: security

    # System Optimization
    - name: Configure swappiness
      sysctl:
        name: vm.swappiness
        value: '10'
        state: present
      tags: performance

    - name: Configure filesystem parameters
      sysctl:
        name: "{{ item.name }}"
        value: "{{ item.value }}"
        state: present
      loop:
        - { name: 'fs.file-max', value: '2097152' }
        - { name: 'net.core.somaxconn', value: '65535' }
        - { name: 'net.ipv4.tcp_max_syn_backlog', value: '65535' }
      tags: performance

    # Time Synchronization
    - name: Install and configure NTP
      apt:
        name: ntp
        state: present
      tags: system

    - name: Ensure NTP is running and enabled
      systemd:
        name: ntp
        state: started
        enabled: true
      tags: system

    # Directory Structure
    - name: Create application directories
      file:
        path: "{{ item }}"
        state: directory
        owner: "{{ deploy_user }}"
        group: "{{ deploy_user }}"
        mode: '0755'
      loop:
        - /var/www
        - /var/www/html
        - /var/www/backups
        - /var/www/logs
        - /var/log/custom-php-framework
      tags: directories

    # Log Rotation
    - name: Configure log rotation for application
      copy:
        dest: /etc/logrotate.d/custom-php-framework
        content: |
          /var/log/custom-php-framework/*.log {
              daily
              missingok
              rotate 30
              compress
              notifempty
              create 644 www-data www-data
              postrotate
                  /bin/systemctl reload-or-restart docker || true
              endscript
          }
      tags: logs

  handlers:
    - name: restart sshd
      systemd:
        name: sshd
        state: restarted

    - name: restart fail2ban
      systemd:
        name: fail2ban
        state: restarted

  post_tasks:
    - name: Display setup completion info
      debug:
        msg:
          - "Initial server setup completed successfully!"
          - "Deploy user '{{ deploy_user }}' created with sudo privileges"
          - "SSH key authentication configured"
          - "Firewall enabled (SSH, HTTP, HTTPS allowed)"
          - "Fail2ban configured for SSH protection"
          - "Next: Update inventory to use deploy user and run infrastructure setup"
      tags: always
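A first run of this setup play targets the fresh host as root with an SSH key, with the two guard variables asserted in `pre_tasks` passed explicitly. A sketch of the matching inventory entry (the group name `web_servers` and the guard variables come from the playbook; the inventory path, host alias, and key path are hypothetical):

```yaml
# inventories/production/hosts.yml (sketch - paths and alias are illustrative)
web_servers:
  hosts:
    netcup:
      ansible_host: 94.16.110.151
      ansible_user: root                              # first run only; later runs use the deploy user
      ansible_ssh_private_key_file: ~/.ssh/deploy_key # .pub of this key is installed for the deploy user
# Invocation (playbook filename assumed):
#   ansible-playbook -i inventories/production/hosts.yml playbooks/initial-server-setup.yml \
#     -e "fresh_server_setup=true create_deploy_user=true"
```

Note the `authorized_key` task reads `ansible_ssh_private_key_file + '.pub'` on the control machine, so the public key must sit next to the private key.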
@@ -1,142 +0,0 @@
---
# Git-based Rollback Playbook
# Rolls back to the previous release by switching the symlink
#
# Usage:
#   ansible-playbook -i inventories/production/hosts.yml playbooks/rollback-git-based.yml
#   ansible-playbook -i inventories/production/hosts.yml playbooks/rollback-git-based.yml --extra-vars "rollback_to=20241025123456"

- name: Rollback Custom PHP Framework (Git-based)
  hosts: web_servers
  become: true

  vars:
    app_name: michaelschiemer
    app_user: deploy
    app_group: deploy
    app_base_path: "/var/www/{{ app_name }}"
    releases_path: "{{ app_base_path }}/releases"
    current_path: "{{ app_base_path }}/current"

  pre_tasks:
    - name: Check if deployment lock exists
      stat:
        path: "{{ app_base_path }}/.deploy.lock"
      register: deploy_lock

    - name: Fail if deployment is in progress
      fail:
        msg: "Cannot rollback - deployment in progress"
      when: deploy_lock.stat.exists

    - name: Create rollback lock
      file:
        path: "{{ app_base_path }}/.rollback.lock"
        state: touch
        owner: "{{ app_user }}"
        group: "{{ app_group }}"

  tasks:
    - name: Get current release
      stat:
        path: "{{ current_path }}"
      register: current_release

    - name: Fail if no current release exists
      fail:
        msg: "No current release found at {{ current_path }}"
      when: not current_release.stat.exists

    - name: Get list of all releases
      find:
        paths: "{{ releases_path }}"
        file_type: directory
      register: all_releases

    - name: Sort releases by creation time (newest first)
      set_fact:
        sorted_releases: "{{ all_releases.files | sort(attribute='ctime', reverse=true) }}"

    - name: Determine target release for rollback
      set_fact:
        target_release: "{{ rollback_to if rollback_to is defined else sorted_releases[1].path }}"

    - name: Verify target release exists
      stat:
        path: "{{ target_release }}"
      register: target_release_stat

    - name: Fail if target release doesn't exist
      fail:
        msg: "Target release not found: {{ target_release }}"
      when: not target_release_stat.stat.exists

    - name: Display rollback information
      debug:
        msg:
          - "Current release: {{ current_release.stat.lnk_source }}"
          - "Rolling back to: {{ target_release }}"

    - name: Switch symlink to previous release
      file:
        src: "{{ target_release }}"
        dest: "{{ current_path }}"
        state: link
        owner: "{{ app_user }}"
        group: "{{ app_group }}"
        force: yes

    - name: Wait for application to be ready
      wait_for:
        timeout: 5
      delegate_to: localhost

    - name: Health check after rollback
      uri:
        url: "http://{{ ansible_host }}/health/summary"
        method: GET
        return_content: yes
        status_code: 200
      register: health_check
      retries: 3
      delay: 5
      until: health_check.status == 200
      ignore_errors: yes

    - name: Log rollback
      lineinfile:
        path: "{{ app_base_path }}/deploy.log"
        line: "[{{ ansible_date_time.iso8601 }}] ROLLBACK: {{ current_release.stat.lnk_source }} -> {{ target_release }} - Health: {{ health_check.status | default('FAILED') }}"
        create: yes

    - name: Display rollback result
      debug:
        msg:
          - "=========================================="
          - "Rollback completed"
          - "Previous: {{ current_release.stat.lnk_source }}"
          - "Current: {{ target_release }}"
          - "Health check: {{ health_check.status | default('FAILED') }}"
          - "=========================================="

  post_tasks:
    # A bare `rescue:` is only valid inside a block, so the lock cleanup is
    # wrapped in a named block/rescue (matching the deploy playbook's pattern).
    - name: Finalize rollback
      block:
        - name: Remove rollback lock
          file:
            path: "{{ app_base_path }}/.rollback.lock"
            state: absent

      rescue:
        - name: Remove rollback lock on failure
          file:
            path: "{{ app_base_path }}/.rollback.lock"
            state: absent

        - name: Log rollback failure
          lineinfile:
            path: "{{ app_base_path }}/deploy.log"
            line: "[{{ ansible_date_time.iso8601 }}] ROLLBACK FAILED"
            create: yes

        - name: Fail with error message
          fail:
            msg: "Rollback failed"
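When `rollback_to` is not passed, the target defaults to `sorted_releases[1].path`, which assumes at least two release directories exist; with a single release the index raises an undefined-element error instead of a readable message. A guard task along these lines (a sketch, not part of the original playbook) would fail cleanly, placed right after the sort:

```yaml
- name: Ensure a previous release exists to roll back to
  fail:
    msg: "Rollback requires at least two releases in {{ releases_path }}"
  when: rollback_to is not defined and (sorted_releases | length) < 2
```

With the guard in place, the default-target `set_fact` can safely index the second-newest release.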
@@ -1,235 +0,0 @@
---
# Application Rollback Playbook
# Rolls back to a specific image tag using container deployment:
# resets IMAGE_TAG in the environment file and recreates the services

- name: Rollback Custom PHP Framework Application
  hosts: web_servers
  become: true
  gather_facts: false

  vars:
    app_path: "/var/www/html"
    domain_name: "{{ DOMAIN_NAME | default('michaelschiemer.de') }}"
    rollback_tag: "{{ ROLLBACK_TAG | default('') }}"
    environment: "{{ ENV | default('production') }}"
    compose_project: "{{ compose_project_name | default(app_path | basename) }}"

  pre_tasks:
    - name: Validate ROLLBACK_TAG is provided
      ansible.builtin.fail:
        msg: "Set ROLLBACK_TAG to a valid image tag."
      when: rollback_tag | length == 0

  tasks:
    - name: Ensure environment file exists
      ansible.builtin.stat:
        path: "{{ app_path }}/.env.{{ environment }}"
      register: env_file

    - name: Fail if environment file is missing
      ansible.builtin.fail:
        msg: "Environment file is missing: {{ app_path }}/.env.{{ environment }}"
      when: not env_file.stat.exists

    - name: Write IMAGE_TAG to env file
      ansible.builtin.lineinfile:
        path: "{{ app_path }}/.env.{{ environment }}"
        regexp: '^IMAGE_TAG='
        line: "IMAGE_TAG={{ rollback_tag }}"
        create: no
        backrefs: false
        mode: "0600"

    - name: Recreate services with rollback tag
      community.docker.docker_compose_v2:
        project_src: "{{ app_path }}"
        files:
          - docker-compose.yml
          - "docker-compose.{{ environment }}.yml"
        env_files:
          - ".env.{{ environment }}"
        pull: never
        build: never
        state: present
        recreate: smart
        remove_orphans: true
        timeout: 300

    - name: Wait for PHP container to be healthy (label-based)
      community.docker.docker_container_info:
        filters:
          label:
            - "com.docker.compose.service=php"
            - "com.docker.compose.project={{ compose_project }}"
      register: php_info
      retries: 20
      delay: 10
      until: >-
        php_info.containers is defined and
        (php_info.containers | length) > 0 and
        (
          (php_info.containers[0].State.Health is defined and php_info.containers[0].State.Health.Status == "healthy")
          or
          php_info.containers[0].State.Status == "running"
        )

    - name: Verify application HTTP health
      ansible.builtin.uri:
        url: "https://{{ domain_name }}/health"
        method: GET
        status_code: 200
        timeout: 30
        validate_certs: true
      register: http_health
      retries: 15
      delay: 10
      until: http_health.status == 200

  post_tasks:
    - name: Rollback completed
      ansible.builtin.debug:
        msg:
          - "Rollback successful"
          - "New active image tag: {{ rollback_tag }}"

- name: Rollback Custom PHP Framework Application
  hosts: web_servers
  become: true
  gather_facts: true

  vars:
    app_path: "/var/www/html"
    rollback_tag: "{{ ROLLBACK_TAG | mandatory }}"
    domain_name: "{{ DOMAIN_NAME | default('michaelschiemer.de') }}"
    environment: "{{ ENV | default('production') }}"

  pre_tasks:
    - name: Verify rollback requirements
      assert:
        that:
          - rollback_tag is defined
          - rollback_tag != ''
          - rollback_tag != 'latest'
        fail_msg: "Rollback requires specific ROLLBACK_TAG (not 'latest')"
      tags: always

    - name: Check if target tag exists locally
      community.docker.docker_image_info:
        name: "{{ project_name | default('michaelschiemer') }}:{{ rollback_tag }}"
      register: rollback_image_info
      ignore_errors: true
      tags: always

    - name: Pull rollback image if not available locally
      community.docker.docker_image:
        name: "{{ project_name | default('michaelschiemer') }}:{{ rollback_tag }}"
        source: pull
        force_source: true
      when: rollback_image_info.images | length == 0
      tags: always

    - name: Store current deployment for emergency recovery
      shell: |
        if [ -f {{ app_path }}/.env.{{ environment }} ]; then
          grep '^IMAGE_TAG=' {{ app_path }}/.env.{{ environment }} | cut -d'=' -f2 > {{ app_path }}/.emergency_recovery_tag || echo 'none'
        fi
      tags: backup

  tasks:
    - name: Update environment with rollback tag
      template:
        src: "{{ environment }}.env.template"
        dest: "{{ app_path }}/.env.{{ environment }}"
        owner: deploy
        group: deploy
        mode: '0600'
        backup: true
      vars:
        IMAGE_TAG: "{{ rollback_tag }}"
        DOMAIN_NAME: "{{ domain_name }}"
      no_log: true
      tags: rollback

    - name: Stop current services
      community.docker.docker_compose_v2:
        project_src: "{{ app_path }}"
        files:
          - docker-compose.yml
          - "docker-compose.{{ environment }}.yml"
        env_files:
          - ".env.{{ environment }}"
        state: stopped
        timeout: 120
      tags: rollback

    - name: Deploy rollback version
      community.docker.docker_compose_v2:
        project_src: "{{ app_path }}"
        files:
          - docker-compose.yml
          - "docker-compose.{{ environment }}.yml"
        env_files:
          - ".env.{{ environment }}"
        pull: never      # use the local image
        build: never
        state: present
        recreate: always # force recreate for rollback
        timeout: 300
      tags: rollback

    - name: Wait for containers to be healthy after rollback
      community.docker.docker_container_info:
        name: "{{ item }}"
      register: container_info
      retries: 15
      delay: 10
      until: container_info.container.State.Health.Status == "healthy" or container_info.container.State.Status == "running"
      loop:
        - "{{ ansible_hostname }}_php_1"
        - "{{ ansible_hostname }}_web_1"
        - "{{ ansible_hostname }}_db_1"
        - "{{ ansible_hostname }}_redis_1"
      ignore_errors: true
      tags: rollback

    - name: Verify application health after rollback
      uri:
        url: "https://{{ domain_name }}/health"
        method: GET
        status_code: 200
        timeout: 30
        headers:
          User-Agent: "Mozilla/5.0 (Ansible Rollback Check)"
        validate_certs: true
      retries: 10
      delay: 15
      tags: rollback

    - name: Update successful rollback tag
      copy:
        content: "{{ rollback_tag }}"
        dest: "{{ app_path }}/.last_successful_release"
        owner: deploy
        group: deploy
        mode: '0644'
      tags: rollback

  post_tasks:
    - name: Rollback success notification
      debug:
        msg:
          - "Application rollback completed successfully"
          - "Rolled back to: {{ rollback_tag }}"
          - "Environment: {{ environment }}"
          - "Domain: {{ domain_name }}"
          - "Emergency recovery tag stored for further rollback if needed"
      tags: always

    - name: Log rollback event
      lineinfile:
        path: "{{ app_path }}/rollback.log"
        line: "{{ ansible_date_time.iso8601 }} - Rollback to {{ rollback_tag }} from {{ environment }} completed successfully"
        create: true
        owner: deploy
        group: deploy
        mode: '0644'
      tags: always
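Rewriting `IMAGE_TAG` only changes which image the next `up` uses if the compose files actually reference the variable. A sketch of the compose fragment this pattern assumes (registry path and service layout are illustrative, not taken from the repository):

```yaml
# docker-compose.production.yml (fragment - registry path is illustrative)
services:
  php:
    # IMAGE_TAG is read from the env file passed via env_files above;
    # rolling back means rewriting that one line and recreating services.
    image: "registry.example.com/michaelschiemer/php:${IMAGE_TAG}"
```

This is also why the rollback runs with `pull: never` and `build: never`: the target image must already exist locally (or be pulled in `pre_tasks`), and the recreate step merely repoints the services at it.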
@@ -1,170 +0,0 @@
---
# Docker Setup Playbook
# Ensures Docker and Docker Compose are installed and configured
#
# Usage:
#   ansible-playbook -i inventories/production/hosts.yml playbooks/setup-docker.yml

- name: Setup Docker for Production
  hosts: web_servers
  become: true

  vars:
    app_user: deploy
    docker_compose_version: "2.24.0"

  tasks:
    # ==========================================
    # 1. Verify Docker Installation
    # ==========================================

    - name: Check if Docker is installed
      command: docker --version
      register: docker_check
      changed_when: false
      failed_when: false

    - name: Display Docker version
      debug:
        msg: "Docker is already installed: {{ docker_check.stdout }}"
      when: docker_check.rc == 0

    - name: Install Docker if not present
      block:
        - name: Update apt cache
          apt:
            update_cache: yes

        - name: Install prerequisites
          apt:
            name:
              - apt-transport-https
              - ca-certificates
              - curl
              - gnupg
              - lsb-release
            state: present

        - name: Add Docker GPG key
          apt_key:
            url: https://download.docker.com/linux/ubuntu/gpg
            state: present

        - name: Add Docker repository
          apt_repository:
            repo: "deb [arch=amd64] https://download.docker.com/linux/ubuntu {{ ansible_distribution_release }} stable"
            state: present

        - name: Install Docker
          apt:
            name:
              - docker-ce
              - docker-ce-cli
              - containerd.io
              - docker-compose-plugin
            state: present
            update_cache: yes
      when: docker_check.rc != 0

    # ==========================================
    # 2. Configure Docker
    # ==========================================

    - name: Add deploy user to docker group
      user:
        name: "{{ app_user }}"
        groups: docker
        append: yes

    - name: Ensure Docker service is enabled and started
      systemd:
        name: docker
        enabled: yes
        state: started

    # ==========================================
    # 3. Install Docker Compose Plugin
    # ==========================================

    - name: Check if Docker Compose plugin is installed
      command: docker compose version
      register: compose_check
      changed_when: false
      failed_when: false

    - name: Display Docker Compose version
      debug:
        msg: "Docker Compose is already installed: {{ compose_check.stdout }}"
      when: compose_check.rc == 0

    # ==========================================
    # 4. Configure Docker Daemon
    # ==========================================

    - name: Create Docker daemon configuration
      copy:
        dest: /etc/docker/daemon.json
        content: |
          {
            "log-driver": "json-file",
            "log-opts": {
              "max-size": "10m",
              "max-file": "3"
            },
            "live-restore": true
          }
        owner: root
        group: root
        mode: '0644'
      notify: Restart Docker

    # ==========================================
    # 5. Firewall Configuration
    # ==========================================

    - name: Allow HTTP traffic
      ufw:
        rule: allow
        port: '80'
        proto: tcp

    - name: Allow HTTPS traffic
      ufw:
        rule: allow
        port: '443'
        proto: tcp

    # ==========================================
    # 6. Verification
    # ==========================================

    - name: Get Docker info
      command: docker info
      register: docker_info
      changed_when: false

    - name: Get Docker Compose version
      command: docker compose version
      register: compose_version
      changed_when: false

    - name: Display setup summary
      debug:
        msg:
          - "=========================================="
          - "Docker Setup Complete"
          - "=========================================="
          - "Docker Version: {{ docker_check.stdout }}"
          - "Docker Compose: {{ compose_version.stdout }}"
          - "User '{{ app_user }}' added to docker group"
          - "Firewall: HTTP (80) and HTTPS (443) allowed"
          - "=========================================="
          - ""
          - "Next Steps:"
          - "1. Log out and back in for the docker group to take effect"
          - "2. Run the deployment playbook to start the containers"

  handlers:
    - name: Restart Docker
      systemd:
        name: docker
        state: restarted
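The inline `daemon.json` written by the copy task is easy to break with a stray comma, which leaves the Docker daemon unable to start. A quick local sanity check can be sketched like this (the embedded string mirrors the playbook content above; the script itself is only an illustration, not part of the playbook):

```python
import json

# daemon.json content as written by the playbook's copy task
daemon_json = """
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  },
  "live-restore": true
}
"""

config = json.loads(daemon_json)  # raises ValueError on malformed JSON
# Docker expects the log-opts values as strings, not numbers
assert config["log-driver"] == "json-file"
assert config["log-opts"]["max-file"] == "3"
print("daemon.json is valid")
```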
@@ -1,113 +0,0 @@
---
# Optional CDN Update Playbook
# Only runs when CDN_UPDATE=true is passed

- name: Update CDN Configuration (Optional)
  hosts: web_servers
  become: true
  gather_facts: true

  vars:
    domain_name: "{{ DOMAIN_NAME | default('michaelschiemer.de') }}"
    cdn_enabled: "{{ CDN_UPDATE | default(false) | bool }}"
    nginx_conf_path: "/etc/nginx/sites-available/{{ domain_name }}"

  pre_tasks:
    - name: Check if CDN update is enabled
      debug:
        msg: "CDN update is {{ 'enabled' if cdn_enabled else 'disabled' }}"
      tags: always

    - name: Skip CDN tasks if not enabled
      meta: end_play
      when: not cdn_enabled
      tags: always

  tasks:
    - name: Check if Nginx configuration exists
      stat:
        path: "{{ nginx_conf_path }}"
      register: nginx_config_check
      tags: cdn

    - name: Fail if Nginx config not found
      fail:
        msg: "Nginx configuration not found at {{ nginx_conf_path }}"
      when: not nginx_config_check.stat.exists
      tags: cdn

    - name: Backup current Nginx configuration
      copy:
        src: "{{ nginx_conf_path }}"
        dest: "{{ nginx_conf_path }}.backup.{{ ansible_date_time.epoch }}"
        remote_src: true
        owner: root
        group: root
        mode: '0644'
      tags: cdn

    - name: Update Nginx configuration for CDN
      lineinfile:
        path: "{{ nginx_conf_path }}"
        regexp: '^\s*add_header\s+X-CDN-Cache'
        line: ' add_header X-CDN-Cache "ENABLED" always;'
        insertafter: '^\s*add_header\s+X-Frame-Options'
        backup: true
      notify: reload nginx
      tags: cdn

    - name: Add CDN cache headers
      blockinfile:
        path: "{{ nginx_conf_path }}"
        marker: "# {mark} CDN CACHE HEADERS"
        insertafter: "location ~ \\.(?:css|js|woff2?|svg|gif|ico|jpe?g|png)\\$ {"
        block: |
          expires 1y;
          add_header Cache-Control "public, immutable";
          add_header X-CDN-Served "true";
        backup: true
      notify: reload nginx
      tags: cdn

    - name: Validate Nginx configuration
      command: nginx -t
      register: nginx_test
      failed_when: nginx_test.rc != 0
      tags: cdn

    - name: CDN configuration success
      debug:
        msg:
          - "CDN configuration updated successfully"
          - "Domain: {{ domain_name }}"
          - "Nginx config: {{ nginx_conf_path }}"
      tags: cdn

  handlers:
    - name: reload nginx
      systemd:
        name: nginx
        state: reloaded
      tags: cdn

  post_tasks:
    - name: Verify CDN headers are working
      uri:
        url: "https://{{ domain_name }}/favicon.ico"
        method: HEAD
        headers:
          User-Agent: "Mozilla/5.0 (Ansible CDN Check)"
        return_content: false
        status_code: [200, 404]  # 404 is OK for the favicon test
      register: cdn_test
      tags: cdn

    - name: CDN verification results
      debug:
        msg:
          - "CDN Test Results:"
          - "Status: {{ cdn_test.status }}"
          - "Cache-Control: {{ cdn_test.cache_control | default('Not set') }}"
          - "X-CDN-Served: {{ cdn_test.x_cdn_served | default('Not set') }}"
      when: cdn_test is defined
      tags: cdn
@@ -1,21 +0,0 @@
#!/bin/bash
# Restart specific Docker container on production server
# Usage: ./restart.sh [container_name]
# Example: ./restart.sh php
# Without argument: restarts all containers

CONTAINER="${1:-all}"

echo "🔄 Restarting container(s) on production server..."
echo ""

if [ "$CONTAINER" = "all" ]; then
    echo "Restarting ALL containers..."
    ssh -i ~/.ssh/production deploy@michaelschiemer.de "cd /home/deploy/michaelschiemer/current && docker compose -f docker-compose.yml -f docker-compose.production.yml restart"
else
    echo "Restarting container: $CONTAINER"
    ssh -i ~/.ssh/production deploy@michaelschiemer.de "docker restart $CONTAINER"
fi

echo ""
echo "✅ Done!"
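The script's whole argument handling rests on the `${1:-all}` default plus the `if/else` branch that picks the remote command. That logic can be sketched in isolation (the `build_restart_command` helper is hypothetical, written only to mirror the script; paths and compose file names are taken from the script above):

```python
# Mirror restart.sh's branching: default the target to "all" and build the
# command that would be run on the remote host over SSH.
def build_restart_command(container: str = "all") -> str:
    if container == "all":
        # Full-stack restart via docker compose, as in the script's first branch
        return ("cd /home/deploy/michaelschiemer/current && "
                "docker compose -f docker-compose.yml "
                "-f docker-compose.production.yml restart")
    # Single-container restart, as in the script's else branch
    return f"docker restart {container}"

print(build_restart_command("php"))  # prints "docker restart php"
```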
@@ -1,163 +0,0 @@
---
# Base Security Role Default Variables

# SSH Configuration
ssh_port: 22
ssh_permit_root_login: false
ssh_password_authentication: false
ssh_pubkey_authentication: true
ssh_challenge_response_authentication: false
ssh_gss_api_authentication: false
ssh_x11_forwarding: false
ssh_max_auth_tries: 3
ssh_client_alive_interval: 300
ssh_client_alive_count_max: 2
ssh_max_sessions: 2
ssh_tcp_keep_alive: true
ssh_compression: false
ssh_use_dns: false
ssh_permit_tunnel: false
ssh_permit_user_environment: false
ssh_banner: /etc/ssh/ssh_banner

# Allowed SSH users and groups
ssh_allowed_users:
  - "{{ ansible_user }}"
  - deploy
ssh_allowed_groups:
  - sudo
  - adm

# SSH Key Management
ssh_authorized_keys_exclusive: true
ssh_host_key_algorithms:
  - ssh-ed25519
  - ecdsa-sha2-nistp521
  - ecdsa-sha2-nistp384
  - ecdsa-sha2-nistp256
  - rsa-sha2-512
  - rsa-sha2-256

# UFW Firewall Configuration
ufw_enabled: true
ufw_default_incoming: deny
ufw_default_outgoing: allow
ufw_default_forward: deny
ufw_logging: "on"
ufw_reset: false

# Default firewall rules
ufw_rules:
  - rule: allow
    port: "{{ ssh_port }}"
    proto: tcp
    comment: "SSH"
  - rule: allow
    port: "80"
    proto: tcp
    comment: "HTTP"
  - rule: allow
    port: "443"
    proto: tcp
    comment: "HTTPS"

# Fail2ban Configuration
# Note: a default must not reference itself ("{{ fail2ban_enabled | default(true) }}"
# causes a recursive loop); a plain value is overridable the same way.
fail2ban_enabled: true
fail2ban_loglevel: INFO
fail2ban_socket: /var/run/fail2ban/fail2ban.sock
fail2ban_pidfile: /var/run/fail2ban/fail2ban.pid

# Default Fail2ban jails
fail2ban_jails:
  - name: sshd
    enabled: true
    port: "{{ ssh_port }}"
    filter: sshd
    logpath: /var/log/auth.log
    maxretry: 3
    findtime: 600
    bantime: 1800
    backend: systemd

  - name: nginx-http-auth
    enabled: true
    port: http,https
    filter: nginx-http-auth
    logpath: /var/log/nginx/error.log
    maxretry: 3
    findtime: 600
    bantime: 1800

  - name: nginx-limit-req
    enabled: true
    port: http,https
    filter: nginx-limit-req
    logpath: /var/log/nginx/error.log
    maxretry: 5
    findtime: 600
    bantime: 1800

# System Security Settings
security_kernel_parameters:
  # Network security
  net.ipv4.tcp_syncookies: 1
  net.ipv4.ip_forward: 0
  net.ipv4.conf.all.send_redirects: 0
  net.ipv4.conf.default.send_redirects: 0
  net.ipv4.conf.all.accept_redirects: 0
  net.ipv4.conf.default.accept_redirects: 0
  net.ipv4.conf.all.accept_source_route: 0
  net.ipv4.conf.default.accept_source_route: 0
  net.ipv4.conf.all.log_martians: 1
  net.ipv4.conf.default.log_martians: 1
  net.ipv4.icmp_echo_ignore_broadcasts: 1
  net.ipv4.icmp_ignore_bogus_error_responses: 1
  net.ipv4.conf.all.rp_filter: 1
  net.ipv4.conf.default.rp_filter: 1

  # IPv6 security
  net.ipv6.conf.all.accept_redirects: 0
  net.ipv6.conf.default.accept_redirects: 0
  net.ipv6.conf.all.accept_ra: 0
  net.ipv6.conf.default.accept_ra: 0

  # Kernel security
  kernel.randomize_va_space: 2
  kernel.kptr_restrict: 2
  kernel.dmesg_restrict: 1
  kernel.printk: "3 3 3 3"
  kernel.unprivileged_bpf_disabled: 1
  net.core.bpf_jit_harden: 2

# Package updates and security
security_packages:
  - fail2ban
  - ufw
  - unattended-upgrades
  - apt-listchanges
  - needrestart
  - rkhunter
  - chkrootkit
  - lynis

# Automatic security updates
unattended_upgrades_enabled: true
unattended_upgrades_automatic_reboot: false
unattended_upgrades_automatic_reboot_time: "06:00"
unattended_upgrades_origins_patterns:
  - origin=Ubuntu,archive=${distro_codename}-security
  - origin=Ubuntu,archive=${distro_codename}-updates

# System hardening
disable_unused_services:
  - rpcbind
  - nfs-common
  - portmap
  - xinetd
  - telnet
  - rsh-server
  - rsh-redone-server

# User and permission settings
security_umask: "027"
security_login_timeout: 300
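A role like this typically flattens `security_kernel_parameters` into `/etc/sysctl.d` entries (or feeds them one by one to `ansible.posix.sysctl`). The rendering step amounts to no more than this minimal sketch, using a two-key subset of the dict above:

```python
# Render a dict of kernel parameters (shaped like security_kernel_parameters)
# into sysctl.conf-style "key = value" lines.
params = {
    "net.ipv4.tcp_syncookies": 1,
    "kernel.randomize_va_space": 2,
}

sysctl_conf = "\n".join(f"{key} = {value}" for key, value in params.items())
print(sysctl_conf)
# net.ipv4.tcp_syncookies = 1
# kernel.randomize_va_space = 2
```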
@@ -1,67 +0,0 @@
---
# Base Security Role Handlers

- name: restart ssh
  service:
    name: ssh
    state: restarted
  listen: restart ssh

- name: reload ssh
  service:
    name: ssh
    state: reloaded
  listen: reload ssh

- name: restart fail2ban
  service:
    name: fail2ban
    state: restarted
  listen: restart fail2ban

- name: reload fail2ban
  service:
    name: fail2ban
    state: reloaded
  listen: reload fail2ban

- name: restart auditd
  service:
    name: auditd
    state: restarted
  listen: restart auditd

- name: reload systemd
  systemd:
    daemon_reload: true
  listen: reload systemd

- name: restart ufw
  service:
    name: ufw
    state: restarted
  listen: restart ufw

- name: reload ufw
  command: ufw --force reload
  listen: reload ufw

- name: restart unattended-upgrades
  service:
    name: unattended-upgrades
    state: restarted
  listen: restart unattended-upgrades

- name: update aide database
  command: aideinit
  listen: update aide database

- name: restart rsyslog
  service:
    name: rsyslog
    state: restarted
  listen: restart rsyslog

- name: update rkhunter
  command: rkhunter --propupd
  listen: update rkhunter
@@ -1,31 +0,0 @@
---
galaxy_info:
  role_name: base-security
  author: Custom PHP Framework Team
  description: Base security hardening for servers
  company: michaelschiemer.de
  license: MIT
  min_ansible_version: "2.12"
  platforms:
    - name: Ubuntu
      versions:
        - "20.04"
        - "22.04"
        - "24.04"
    - name: Debian
      versions:
        - "11"
        - "12"
  galaxy_tags:
    - security
    - ssh
    - firewall
    - fail2ban
    - hardening

# No external role dependencies - keep it self-contained
# (must be an empty list, not null, for ansible-galaxy to parse it)
dependencies: []

collections:
  - community.general
  - ansible.posix
@@ -1,143 +0,0 @@
---
# Fail2ban Configuration

- name: Install fail2ban
  package:
    name: fail2ban
    state: present
  tags:
    - fail2ban
    - packages

- name: Create fail2ban configuration directory
  file:
    path: /etc/fail2ban/jail.d
    state: directory
    owner: root
    group: root
    mode: '0755'
  tags:
    - fail2ban
    - directories

- name: Configure fail2ban main settings
  template:
    src: fail2ban.local.j2
    dest: /etc/fail2ban/fail2ban.local
    owner: root
    group: root
    mode: '0644'
    backup: true
  notify: restart fail2ban
  tags:
    - fail2ban
    - config

- name: Configure fail2ban default jail settings
  template:
    src: jail.local.j2
    dest: /etc/fail2ban/jail.local
    owner: root
    group: root
    mode: '0644'
    backup: true
  notify: restart fail2ban
  tags:
    - fail2ban
    - config
    - jail

- name: Create custom fail2ban jails
  template:
    src: custom-jails.local.j2
    dest: /etc/fail2ban/jail.d/custom-jails.local
    owner: root
    group: root
    mode: '0644'
    backup: true
  notify: restart fail2ban
  tags:
    - fail2ban
    - jails
    - custom

- name: Create custom fail2ban filters
  template:
    src: "{{ item }}.conf.j2"
    dest: "/etc/fail2ban/filter.d/{{ item }}.conf"
    owner: root
    group: root
    mode: '0644'
  loop:
    - nginx-limit-req
    - nginx-http-auth
    - php-framework
  notify: restart fail2ban
  tags:
    - fail2ban
    - filters

- name: Create fail2ban action for PHP Framework
  template:
    src: php-framework-action.conf.j2
    dest: /etc/fail2ban/action.d/php-framework-notify.conf
    owner: root
    group: root
    mode: '0644'
  notify: restart fail2ban
  tags:
    - fail2ban
    - actions

- name: Ensure fail2ban service is enabled and running
  service:
    name: fail2ban
    state: started
    enabled: true
  tags:
    - fail2ban
    - service

- name: Check fail2ban status
  command: fail2ban-client status
  register: fail2ban_status
  changed_when: false
  tags:
    - fail2ban
    - status

- name: Display fail2ban jail status
  command: fail2ban-client status {{ item.name }}
  register: jail_status
  changed_when: false
  loop: "{{ fail2ban_jails }}"
  when: item.enabled | bool
  tags:
    - fail2ban
    - status
    - jails

- name: Create fail2ban log rotation
  template:
    src: fail2ban-logrotate.j2
    dest: /etc/logrotate.d/fail2ban
    owner: root
    group: root
    mode: '0644'
  tags:
    - fail2ban
    - logrotate

- name: Create fail2ban systemd override directory
  file:
    path: /etc/systemd/system/fail2ban.service.d
    state: directory
    owner: root
    group: root
    mode: '0755'
  tags:
    - fail2ban
    - systemd

- name: Configure fail2ban systemd service override
  template:
    src: fail2ban-override.conf.j2
    dest: /etc/systemd/system/fail2ban.service.d/override.conf
    owner: root
    group: root
    mode: '0644'
  notify:
    - reload systemd
    - restart fail2ban
  tags:
    - fail2ban
    - systemd
@@ -1,142 +0,0 @@
---
# UFW Firewall Configuration

- name: Reset UFW to defaults
  ufw:
    state: reset
  when: ufw_reset | bool
  tags:
    - firewall
    - reset

- name: Set UFW default policies
  ufw:
    policy: "{{ item.policy }}"
    direction: "{{ item.direction }}"
  loop:
    - { policy: "{{ ufw_default_incoming }}", direction: incoming }
    - { policy: "{{ ufw_default_outgoing }}", direction: outgoing }
    - { policy: "{{ ufw_default_forward }}", direction: routed }
  tags:
    - firewall
    - policy

- name: Configure UFW logging
  ufw:
    logging: "{{ ufw_logging }}"
  tags:
    - firewall
    - logging

- name: Allow SSH before enabling firewall
  ufw:
    rule: allow
    port: "{{ ssh_port }}"
    proto: tcp
    comment: "SSH Access - Priority"
  tags:
    - firewall
    - ssh

- name: Configure UFW rules
  ufw:
    rule: "{{ item.rule }}"
    port: "{{ item.port | default(omit) }}"
    proto: "{{ item.proto | default(omit) }}"
    src: "{{ item.src | default(omit) }}"
    dest: "{{ item.dest | default(omit) }}"
    interface: "{{ item.interface | default(omit) }}"
    direction: "{{ item.direction | default(omit) }}"
    comment: "{{ item.comment | default(omit) }}"
  loop: "{{ ufw_rules }}"
  tags:
    - firewall
    - rules

- name: Add environment-specific firewall rules
  ufw:
    rule: "{{ item.rule }}"
    port: "{{ item.port | default(omit) }}"
    proto: "{{ item.proto | default(omit) }}"
    src: "{{ item.src | default(omit) }}"
    comment: "{{ item.comment | default(omit) }}"
  loop: "{{ environment_specific_rules | default([]) }}"
  tags:
    - firewall
    - rules
    - environment

- name: Configure production-specific strict rules
  ufw:
    rule: "{{ item.rule }}"
    port: "{{ item.port | default(omit) }}"
    proto: "{{ item.proto | default(omit) }}"
    src: "{{ item.src | default(omit) }}"
    comment: "{{ item.comment | default(omit) }}"
  loop:
    - rule: deny
      port: "3306"
      proto: tcp
      comment: "Block external MySQL access"
    - rule: deny
      port: "6379"
      proto: tcp
      comment: "Block external Redis access"
    - rule: deny
      port: "9090"
      proto: tcp
      comment: "Block external Prometheus access"
    - rule: limit
      port: "{{ ssh_port }}"
      proto: tcp
      comment: "Rate limit SSH connections"
  when: environment == 'production' and firewall_strict_mode | bool
  tags:
    - firewall
    - production
    - strict

- name: Allow Docker container communication
  ufw:
    rule: allow
    interface: docker0
    direction: in
    comment: "Docker container communication"
  ignore_errors: true  # Docker may not be installed yet
  tags:
    - firewall
    - docker

# Note: UFW is stateful - established and related connections are accepted
# by default, so no explicit "allow established" rule is needed here (a
# blanket "allow in from any to any" rule would open all inbound traffic).

- name: Enable UFW firewall
  ufw:
    state: enabled
  tags:
    - firewall
    - enable

- name: Check UFW status
  command: ufw status verbose
  register: ufw_status
  changed_when: false
  tags:
    - firewall
    - status

- name: Display UFW status
  debug:
    var: ufw_status.stdout_lines
  tags:
    - firewall
    - status
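Each entry of `ufw_rules` maps one-to-one onto a `ufw` CLI invocation, which is a handy way to double-check what the module will apply. A minimal sketch (the `ufw_cli` helper is hypothetical, written only to illustrate the mapping; it covers the port/proto/comment fields used above):

```python
# Translate a ufw_rules-style dict into the equivalent `ufw` CLI command.
def ufw_cli(rule: dict) -> str:
    parts = ["ufw", rule["rule"], f"{rule['port']}/{rule['proto']}"]
    if "comment" in rule:
        # ufw accepts a trailing `comment '...'` clause on rule commands
        parts += ["comment", f"'{rule['comment']}'"]
    return " ".join(parts)

print(ufw_cli({"rule": "allow", "port": "443", "proto": "tcp", "comment": "HTTPS"}))
# prints "ufw allow 443/tcp comment 'HTTPS'"
```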
@@ -1,69 +0,0 @@
---
# Base Security Role - Main Tasks

- name: Include OS-specific variables
  include_vars: "{{ ansible_os_family }}.yml"
  tags:
    - security
    - config

- name: Update package cache
  apt:
    update_cache: true
    cache_valid_time: 3600
  tags:
    - security
    - packages

- name: Install security packages
  package:
    name: "{{ security_packages }}"
    state: present
  tags:
    - security
    - packages

- name: Configure system security settings
  include_tasks: system-hardening.yml
  tags:
    - security
    - hardening

- name: Configure SSH security
  include_tasks: ssh-hardening.yml
  tags:
    - security
    - ssh

- name: Configure UFW firewall
  include_tasks: firewall.yml
  when: ufw_enabled | bool
  tags:
    - security
    - firewall

- name: Configure Fail2ban
  include_tasks: fail2ban.yml
  when: fail2ban_enabled | bool
  tags:
    - security
    - fail2ban

- name: Configure automatic security updates
  include_tasks: security-updates.yml
  when: unattended_upgrades_enabled | bool
  tags:
    - security
    - updates

- name: Disable unused services
  include_tasks: service-hardening.yml
  tags:
    - security
    - services

- name: Apply security audit recommendations
  include_tasks: security-audit.yml
  tags:
    - security
    - audit
@@ -1,185 +0,0 @@
---
# Security Audit and Compliance Checks

- name: Install security audit tools
  package:
    name: "{{ item }}"
    state: present
  loop:
    - lynis
    - rkhunter
    - chkrootkit
    - debsums
    - aide
  tags:
    - security
    - audit
    - tools

- name: Initialize AIDE database
  command: aideinit
  args:
    creates: /var/lib/aide/aide.db.new
  tags:
    - security
    - aide
    - integrity

- name: Move AIDE database to production location
  command: mv /var/lib/aide/aide.db.new /var/lib/aide/aide.db
  args:
    creates: /var/lib/aide/aide.db
  tags:
    - security
    - aide
    - integrity

- name: Configure AIDE for file integrity monitoring
  template:
    src: aide.conf.j2
    dest: /etc/aide/aide.conf
    owner: root
    group: root
    mode: '0600'
    backup: true
  tags:
    - security
    - aide
    - config

- name: Schedule AIDE integrity checks
  cron:
    name: "AIDE integrity check"
    minute: "0"
    hour: "3"
    job: "/usr/bin/aide --check 2>&1 | mail -s 'AIDE Integrity Check - {{ inventory_hostname }}' {{ ssl_email }}"
    user: root
  tags:
    - security
    - aide
    - cron

- name: Configure rkhunter
  template:
    src: rkhunter.conf.j2
    dest: /etc/rkhunter.conf
    owner: root
    group: root
    mode: '0644'
    backup: true
  tags:
    - security
    - rkhunter
    - config

- name: Update rkhunter database
  command: rkhunter --update
  changed_when: false
  tags:
    - security
    - rkhunter
    - update

- name: Configure rkhunter properties
  command: rkhunter --propupd
  changed_when: false
  tags:
    - security
    - rkhunter
    - properties

- name: Schedule rkhunter scans
  cron:
    name: "RKhunter rootkit scan"
    minute: "30"
    hour: "3"
    job: "/usr/bin/rkhunter --cronjob --report-warnings-only 2>&1 | mail -s 'RKhunter Scan - {{ inventory_hostname }}' {{ ssl_email }}"
    user: root
  tags:
    - security
    - rkhunter
    - cron

- name: Configure Lynis for system auditing
  template:
    src: lynis.conf.j2
    dest: /etc/lynis/default.prf
    owner: root
    group: root
    mode: '0644'
  tags:
    - security
    - lynis
    - config

- name: Run initial security audit with Lynis
  command: lynis audit system --quick --quiet
  register: lynis_audit
  changed_when: false
  tags:
    - security
    - lynis
    - audit

- name: Schedule weekly Lynis security audits
  cron:
    name: "Lynis security audit"
    minute: "0"
    hour: "4"
    weekday: "0"
    job: "/usr/sbin/lynis audit system --cronjob | mail -s 'Lynis Security Audit - {{ inventory_hostname }}' {{ ssl_email }}"
    user: root
  tags:
    - security
    - lynis
    - cron

- name: Create security monitoring script
  template:
    src: security-monitor.sh.j2
    dest: /usr/local/bin/security-monitor.sh
    owner: root
    group: root
    mode: '0755'
  tags:
    - security
    - monitoring
    - scripts

- name: Schedule security monitoring
  cron:
    name: "Security monitoring"
    minute: "*/15"
    job: "/usr/local/bin/security-monitor.sh"
    user: root
  tags:
    - security
    - monitoring
    - cron

- name: Create security incident response script
  template:
    src: security-incident.sh.j2
    dest: /usr/local/bin/security-incident.sh
    owner: root
    group: root
    mode: '0755'
  tags:
    - security
    - incident
    - response

- name: Verify system security configuration
  command: "{{ item.command }}"
  register: security_checks
  changed_when: false
  failed_when: security_checks.rc != 0 and item.required | default(true)
  loop:
    - { command: "sshd -t", name: "SSH configuration" }
    - { command: "ufw status", name: "UFW firewall status", required: false }
    - { command: "fail2ban-client status", name: "Fail2ban status", required: false }
    - { command: "systemctl is-active auditd", name: "Audit daemon", required: false }
  tags:
    - security
    - verification
    - validation
@@ -1,144 +0,0 @@
---
# Automatic Security Updates Configuration

- name: Install unattended-upgrades package
  package:
    name: unattended-upgrades
    state: present
  tags:
    - security
    - updates
    - packages

- name: Configure unattended-upgrades
  template:
    src: 50unattended-upgrades.j2
    dest: /etc/apt/apt.conf.d/50unattended-upgrades
    owner: root
    group: root
    mode: '0644'
    backup: true
  tags:
    - security
    - updates
    - config

- name: Enable automatic updates
  template:
    src: 20auto-upgrades.j2
    dest: /etc/apt/apt.conf.d/20auto-upgrades
    owner: root
    group: root
    mode: '0644'
  tags:
    - security
    - updates
    - config

- name: Configure automatic reboot for kernel updates
  lineinfile:
    path: /etc/apt/apt.conf.d/50unattended-upgrades
    regexp: '^Unattended-Upgrade::Automatic-Reboot\s+'
    line: 'Unattended-Upgrade::Automatic-Reboot "{{ unattended_upgrades_automatic_reboot | lower }}";'
    create: true
  tags:
    - security
    - updates
    - reboot

- name: Configure reboot time
  lineinfile:
    path: /etc/apt/apt.conf.d/50unattended-upgrades
    regexp: '^Unattended-Upgrade::Automatic-Reboot-Time\s+'
    line: 'Unattended-Upgrade::Automatic-Reboot-Time "{{ unattended_upgrades_automatic_reboot_time }}";'
  when: unattended_upgrades_automatic_reboot | bool
  tags:
    - security
    - updates
    - reboot

- name: Configure email notifications for updates
  lineinfile:
    path: /etc/apt/apt.conf.d/50unattended-upgrades
    regexp: '^Unattended-Upgrade::Mail\s+'
    line: 'Unattended-Upgrade::Mail "{{ ssl_email }}";'
  tags:
    - security
    - updates
    - notifications

- name: Install apt-listchanges for change notifications
  package:
    name: apt-listchanges
    state: present
  tags:
    - security
    - updates
    - packages

- name: Configure apt-listchanges
  template:
    src: listchanges.conf.j2
    dest: /etc/apt/listchanges.conf
    owner: root
    group: root
    mode: '0644'
  tags:
    - security
    - updates
    - notifications

- name: Install needrestart for service restart detection
  package:
    name: needrestart
    state: present
  tags:
    - security
    - updates
    - packages

- name: Configure needrestart
  template:
    src: needrestart.conf.j2
    dest: /etc/needrestart/needrestart.conf
    owner: root
    group: root
    mode: '0644'
  tags:
    - security
    - updates
    - services

- name: Create update notification script
  template:
    src: update-notification.sh.j2
    dest: /usr/local/bin/update-notification.sh
    owner: root
    group: root
    mode: '0755'
  tags:
    - security
    - updates
    - scripts

- name: Schedule regular security updates check
  cron:
    name: "Security updates check"
    minute: "0"
    hour: "2"
    job: "/usr/bin/unattended-upgrade --dry-run && /usr/local/bin/update-notification.sh"
    user: root
  tags:
    - security
    - updates
    - cron

- name: Verify unattended-upgrades service
  service:
    name: unattended-upgrades
    state: started
    enabled: true
  tags:
    - security
    - updates
    - service
@@ -1,149 +0,0 @@
---
# Service Hardening and Unused Service Removal

- name: Stop and disable unused services
  service:
    name: "{{ item }}"
    state: stopped
    enabled: false
  loop: "{{ disable_unused_services }}"
  ignore_errors: true
  tags:
    - security
    - services
    - cleanup

- name: Remove unused service packages
  package:
    name: "{{ item }}"
    state: absent
  loop: "{{ disable_unused_services }}"
  ignore_errors: true
  tags:
    - security
    - services
    - packages

- name: Mask dangerous services
  systemd:
    name: "{{ item }}"
    masked: true
  loop:
    - rpcbind.service
    - rpcbind.socket
    - nfs-server.service
    - nfs-lock.service
    - nfs-idmap.service
  ignore_errors: true
  tags:
    - security
    - services
    - systemd

- name: Create systemd security override directory
  file:
    path: "/etc/systemd/system/{{ item }}.service.d"
    state: directory
    owner: root
    group: root
    mode: '0755'
  loop:
    - nginx
    - php8.4-fpm
    - docker
  tags:
    - security
    - services
    - directories

- name: Configure service security settings
  template:
    src: service-security.conf.j2
    dest: /etc/systemd/system/{{ item }}.service.d/security.conf
    owner: root
    group: root
    mode: '0644'
  loop:
    - nginx
    - php8.4-fpm
  notify: reload systemd
  tags:
    - security
    - services
    - systemd

- name: Harden Docker service (if installed)
  template:
    src: docker-security.conf.j2
    dest: /etc/systemd/system/docker.service.d/security.conf
    owner: root
    group: root
    mode: '0644'
  notify: reload systemd
  ignore_errors: true
  tags:
    - security
    - services
    - docker

- name: Configure service restart policies
  lineinfile:
    path: /etc/systemd/system/{{ item.service }}.service.d/restart.conf
    regexp: '^Restart='
    line: 'Restart={{ item.policy }}'
    create: true
  loop:
    - { service: "nginx", policy: "always" }
    - { service: "php8.4-fpm", policy: "always" }
    - { service: "fail2ban", policy: "always" }
  notify: reload systemd
  tags:
    - security
    - services
    - reliability

- name: Set service timeouts for security
  lineinfile:
    path: /etc/systemd/system/{{ item.service }}.service.d/timeout.conf
    regexp: '^TimeoutStopSec='
    line: 'TimeoutStopSec={{ item.timeout }}'
    create: true
  loop:
    - { service: "nginx", timeout: "30s" }
    - { service: "php8.4-fpm", timeout: "30s" }
    - { service: "docker", timeout: "60s" }
  notify: reload systemd
  tags:
    - security
    - services
    - timeouts

- name: Enable core security services
  service:
    name: "{{ item }}"
    state: started
    enabled: true
  loop:
    - ufw
    - fail2ban
    - auditd
    - unattended-upgrades
  tags:
    - security
    - services
    - enable

- name: Verify critical service status
  command: systemctl is-active {{ item }}
  register: service_status
  changed_when: false
  failed_when: service_status.rc != 0
  loop:
    - ssh
    - ufw
    - fail2ban
    - auditd
  tags:
    - security
    - services
    - verification
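The `service-security.conf.j2` drop-in referenced above is not included in this diff. A plausible sketch of such a systemd hardening drop-in, using only standard systemd directives (the exact directives chosen here are assumptions):

```
[Service]
# Hypothetical hardening drop-in; the project's real template may differ.
NoNewPrivileges=true
ProtectSystem=full
ProtectHome=true
PrivateTmp=true
RestrictSUIDSGID=true
```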
@@ -1,119 +0,0 @@
---
# SSH Hardening Configuration

- name: Create SSH banner
  copy:
    content: |
      **************************************************************************
      *                    WARNING: AUTHORIZED ACCESS ONLY                     *
      **************************************************************************
      * This system is for authorized users only. All activities are logged   *
      * and monitored. Unauthorized access is prohibited and may result in    *
      * civil and/or criminal penalties.                                      *
      *                                                                       *
      * Custom PHP Framework - {{ domain_name }}                              *
      * Environment: {{ environment | upper }}                                *
      **************************************************************************
    dest: "{{ ssh_banner }}"
    owner: root
    group: root
    mode: '0644'
  notify: restart ssh
  tags:
    - ssh
    - banner

- name: Generate strong SSH host keys
  command: ssh-keygen -t {{ item }} -f /etc/ssh/ssh_host_{{ item }}_key -N ""
  args:
    creates: /etc/ssh/ssh_host_{{ item }}_key
  loop:
    - ed25519
    - ecdsa
    - rsa
  notify: restart ssh
  tags:
    - ssh
    - keys

- name: Set correct permissions on SSH host keys
  file:
    path: /etc/ssh/ssh_host_{{ item }}_key
    owner: root
    group: root
    mode: '0600'
  loop:
    - ed25519
    - ecdsa
    - rsa
  tags:
    - ssh
    - keys
    - permissions

- name: Configure SSH daemon
  template:
    src: sshd_config.j2
    dest: /etc/ssh/sshd_config
    owner: root
    group: root
    mode: '0644'
    backup: true
  notify: restart ssh
  tags:
    - ssh
    - config

- name: Create SSH client configuration
  template:
    src: ssh_config.j2
    dest: /etc/ssh/ssh_config
    owner: root
    group: root
    mode: '0644'
    backup: true
  tags:
    - ssh
    - config

- name: Ensure SSH service is enabled and running
  service:
    name: ssh
    state: started
    enabled: true
  tags:
    - ssh
    - service

- name: Configure SSH authorized keys for deploy user
  authorized_key:
    user: "{{ ansible_user }}"
    state: present
    key: "{{ lookup('file', '~/.ssh/id_rsa_deploy.pub') }}"
    exclusive: "{{ ssh_authorized_keys_exclusive }}"
  when: ansible_user != 'root'
  tags:
    - ssh
    - keys
    - users

- name: Remove default SSH keys for security
  file:
    path: "{{ item }}"
    state: absent
  loop:
    - /etc/ssh/ssh_host_dsa_key
    - /etc/ssh/ssh_host_dsa_key.pub
  tags:
    - ssh
    - keys
    - cleanup

- name: Verify SSH configuration syntax
  command: sshd -t
  register: ssh_config_test
  changed_when: false
  failed_when: ssh_config_test.rc != 0
  tags:
    - ssh
    - validation
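These tasks consume a family of `ssh_*` variables that the `sshd_config.j2` template later in this diff references. A sketch of how they might be set in group_vars; the variable names come from the template, while the concrete values below are illustrative assumptions:

```yaml
# Example group_vars sketch; values are illustrative, names match sshd_config.j2
ssh_port: 22
ssh_permit_root_login: false
ssh_password_authentication: false
ssh_pubkey_authentication: true
ssh_max_auth_tries: 3
ssh_client_alive_interval: 300
ssh_client_alive_count_max: 2
ssh_allowed_users:
  - deploy
ssh_banner: /etc/issue.net
```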
@@ -1,167 +0,0 @@
---
# System Security Hardening

- name: Apply kernel security parameters
  sysctl:
    name: "{{ item.key }}"
    value: "{{ item.value }}"
    state: present
    sysctl_set: true
    reload: true
  loop: "{{ security_kernel_parameters | dict2items }}"
  tags:
    - security
    - kernel
    - sysctl

- name: Create security limits configuration
  template:
    src: security-limits.conf.j2
    dest: /etc/security/limits.d/99-security.conf
    owner: root
    group: root
    mode: '0644'
  tags:
    - security
    - limits

- name: Configure login.defs for security
  lineinfile:
    path: /etc/login.defs
    regexp: "^{{ item.key }}"
    line: "{{ item.key }} {{ item.value }}"
    backup: true
  loop:
    - { key: "UMASK", value: "{{ security_umask }}" }
    - { key: "PASS_MAX_DAYS", value: "90" }
    - { key: "PASS_MIN_DAYS", value: "1" }
    - { key: "PASS_WARN_AGE", value: "7" }
    - { key: "LOGIN_TIMEOUT", value: "{{ security_login_timeout }}" }
    - { key: "ENCRYPT_METHOD", value: "SHA512" }
  tags:
    - security
    - login
    - password

- name: Secure shared memory
  mount:
    path: /dev/shm
    src: tmpfs
    fstype: tmpfs
    opts: "defaults,noexec,nosuid,nodev,size=512M"
    state: mounted
  tags:
    - security
    - memory
    - filesystem

- name: Configure audit system
  package:
    name: auditd
    state: present
  tags:
    - security
    - audit

- name: Create audit rules for security monitoring
  template:
    src: audit-rules.rules.j2
    dest: /etc/audit/rules.d/99-security.rules
    owner: root
    group: root
    mode: '0600'
    backup: true
  notify: restart auditd
  tags:
    - security
    - audit
    - rules

- name: Ensure auditd service is enabled and running
  service:
    name: auditd
    state: started
    enabled: true
  tags:
    - security
    - audit
    - service

- name: Remove unnecessary packages
  package:
    name: "{{ item }}"
    state: absent
  loop:
    - telnet
    - rsh-client
    - rsh-redone-client
    - talk
    - ntalk
    - xinetd
    - inetutils-inetd
  ignore_errors: true
  tags:
    - security
    - cleanup
    - packages

- name: Set correct permissions on critical files
  file:
    path: "{{ item.path }}"
    owner: "{{ item.owner | default('root') }}"
    group: "{{ item.group | default('root') }}"
    mode: "{{ item.mode }}"
  loop:
    - { path: "/etc/passwd", mode: "0644" }
    - { path: "/etc/shadow", mode: "0640", group: "shadow" }
    - { path: "/etc/group", mode: "0644" }
    - { path: "/etc/gshadow", mode: "0640", group: "shadow" }
    - { path: "/boot", mode: "0700" }
    - { path: "/etc/ssh", mode: "0755" }
    - { path: "/etc/crontab", mode: "0600" }
    - { path: "/etc/cron.hourly", mode: "0700" }
    - { path: "/etc/cron.daily", mode: "0700" }
    - { path: "/etc/cron.weekly", mode: "0700" }
    - { path: "/etc/cron.monthly", mode: "0700" }
    - { path: "/etc/cron.d", mode: "0700" }
  tags:
    - security
    - permissions
    - files

- name: Configure process accounting
  package:
    name: acct
    state: present
  tags:
    - security
    - accounting

- name: Enable process accounting
  service:
    name: acct
    state: started
    enabled: true
  tags:
    - security
    - accounting
    - service

- name: Configure system banner
  copy:
    content: |
      Custom PHP Framework Production Server
      {{ domain_name }} - {{ environment | upper }}

      Unauthorized access is prohibited.
      All activities are monitored and logged.

      System administered by: {{ ssl_email }}
    dest: /etc/motd
    owner: root
    group: root
    mode: '0644'
  tags:
    - security
    - banner
    - motd
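The kernel-parameters task iterates over a `security_kernel_parameters` dict defined elsewhere in the role. A sketch of what such a dict commonly holds; the keys are standard Linux sysctls, but the selection and values here are illustrative assumptions, not the role's actual defaults:

```yaml
# Illustrative hardening sysctls only; the role's real defaults may differ
security_kernel_parameters:
  net.ipv4.tcp_syncookies: 1
  net.ipv4.conf.all.rp_filter: 1
  net.ipv4.conf.all.accept_redirects: 0
  net.ipv4.conf.all.send_redirects: 0
  kernel.kptr_restrict: 2
  kernel.dmesg_restrict: 1
  fs.suid_dumpable: 0
```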
@@ -1,63 +0,0 @@
# Custom Fail2ban Jails for Custom PHP Framework
# Generated by Ansible - Do not edit manually

{% for jail in fail2ban_jails %}
[{{ jail.name }}]
enabled = {{ jail.enabled | ternary('true', 'false') }}
{% if jail.port is defined %}
port = {{ jail.port }}
{% endif %}
{% if jail.filter is defined %}
filter = {{ jail.filter }}
{% endif %}
{% if jail.logpath is defined %}
logpath = {{ jail.logpath }}
{% endif %}
{% if jail.maxretry is defined %}
maxretry = {{ jail.maxretry }}
{% endif %}
{% if jail.findtime is defined %}
findtime = {{ jail.findtime }}
{% endif %}
{% if jail.bantime is defined %}
bantime = {{ jail.bantime }}
{% endif %}
{% if jail.backend is defined %}
backend = {{ jail.backend }}
{% endif %}
action = %(action_mwl)s

{% endfor %}

# PHP Framework specific jail
[php-framework]
enabled = true
port = http,https
filter = php-framework
logpath = /var/log/nginx/access.log
          /var/log/nginx/error.log
maxretry = 5
findtime = 600
bantime = 3600
action = %(action_mwl)s
         php-framework-notify

# Docker container protection
[docker-php]
enabled = {{ 'true' if environment == 'production' else 'false' }}
port = http,https
filter = docker-php
logpath = /var/log/docker/*.log
maxretry = 3
findtime = 300
bantime = 1800

# Custom application errors
[app-errors]
enabled = true
port = http,https
filter = nginx-limit-req
logpath = /var/log/nginx/error.log
maxretry = 10
findtime = 600
bantime = 600
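For a hypothetical `fail2ban_jails` entry such as `{ name: sshd, enabled: true, port: 2222, maxretry: 3 }` (not defined in this diff), the loop at the top of the template would render roughly:

```
[sshd]
enabled = true
port = 2222
maxretry = 3
action = %(action_mwl)s
```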
@@ -1,20 +0,0 @@
# Fail2ban Main Configuration for Custom PHP Framework
# Generated by Ansible - Do not edit manually

[Definition]
loglevel = {{ fail2ban_loglevel }}
socket = {{ fail2ban_socket }}
pidfile = {{ fail2ban_pidfile }}

# Database configuration
dbfile = /var/lib/fail2ban/fail2ban.sqlite3
dbmaxmatches = 10

# Backend
backend = systemd

# Email Configuration
[mta]
sender = fail2ban-{{ inventory_hostname }}@{{ domain_name }}
destemail = {{ ssl_email }}
action = %(action_mwl)s
@@ -1,73 +0,0 @@
# SSH Configuration for Custom PHP Framework - {{ environment | upper }}
# Generated by Ansible - Do not edit manually

# Basic Configuration
Port {{ ssh_port }}
Protocol 2
AddressFamily inet

# Authentication
PermitRootLogin {{ ssh_permit_root_login | ternary('yes', 'no') }}
PasswordAuthentication {{ ssh_password_authentication | ternary('yes', 'no') }}
PubkeyAuthentication {{ ssh_pubkey_authentication | ternary('yes', 'no') }}
AuthorizedKeysFile .ssh/authorized_keys
ChallengeResponseAuthentication {{ ssh_challenge_response_authentication | ternary('yes', 'no') }}
GSSAPIAuthentication {{ ssh_gss_api_authentication | ternary('yes', 'no') }}
UsePAM yes

# Security Settings
MaxAuthTries {{ ssh_max_auth_tries }}
ClientAliveInterval {{ ssh_client_alive_interval }}
ClientAliveCountMax {{ ssh_client_alive_count_max }}
MaxSessions {{ ssh_max_sessions }}
TCPKeepAlive {{ ssh_tcp_keep_alive | ternary('yes', 'no') }}
Compression {{ ssh_compression | ternary('yes', 'no') }}
UseDNS {{ ssh_use_dns | ternary('yes', 'no') }}

# Tunnel and Forwarding
X11Forwarding {{ ssh_x11_forwarding | ternary('yes', 'no') }}
PermitTunnel {{ ssh_permit_tunnel | ternary('yes', 'no') }}
PermitUserEnvironment {{ ssh_permit_user_environment | ternary('yes', 'no') }}
AllowTcpForwarding no
AllowStreamLocalForwarding no
GatewayPorts no

# Host Key Configuration
{% for algorithm in ssh_host_key_algorithms %}
HostKey /etc/ssh/ssh_host_{{ algorithm.split('-')[0] }}_key
{% endfor %}

# Allowed Users and Groups
{% if ssh_allowed_users %}
AllowUsers {{ ssh_allowed_users | join(' ') }}
{% endif %}
{% if ssh_allowed_groups %}
AllowGroups {{ ssh_allowed_groups | join(' ') }}
{% endif %}

# Banner
Banner {{ ssh_banner }}

# Logging
SyslogFacility AUTH
LogLevel INFO

# Kex Algorithms (secure)
KexAlgorithms curve25519-sha256@libssh.org,ecdh-sha2-nistp521,ecdh-sha2-nistp384,ecdh-sha2-nistp256,diffie-hellman-group16-sha512,diffie-hellman-group18-sha512

# Ciphers (secure)
Ciphers chacha20-poly1305@openssh.com,aes256-gcm@openssh.com,aes128-gcm@openssh.com,aes256-ctr,aes192-ctr,aes128-ctr

# MAC Algorithms (secure)
MACs hmac-sha2-256-etm@openssh.com,hmac-sha2-512-etm@openssh.com,hmac-sha2-256,hmac-sha2-512

# Host Key Algorithms
PubkeyAcceptedKeyTypes {{ ssh_host_key_algorithms | join(',') }}

# Additional Security
PermitEmptyPasswords no
StrictModes yes
IgnoreRhosts yes
HostbasedAuthentication no
PrintMotd no
PrintLastLog yes
@@ -1,21 +0,0 @@
---
# OS-specific variables for Debian/Ubuntu
security_packages:
  - ufw
  - fail2ban
  - unattended-upgrades
  - apt-listchanges
  - logwatch
  - rkhunter
  - chkrootkit

# Services
security_services:
  - ufw
  - fail2ban
  - unattended-upgrades

# Package management
package_manager: apt
update_cache_command: "apt-get update"
upgrade_command: "apt-get upgrade -y"
@@ -1,151 +0,0 @@
---
# Docker Runtime Role Default Variables

# Docker Installation
docker_edition: ce
docker_version: "latest"
docker_channel: stable
docker_compose_version: "2.20.0"

# Repository Configuration
docker_apt_arch: amd64
docker_apt_repository: "deb [arch={{ docker_apt_arch }}] https://download.docker.com/linux/{{ ansible_distribution | lower }} {{ ansible_distribution_release }} {{ docker_channel }}"
docker_apt_gpg_key: "https://download.docker.com/linux/{{ ansible_distribution | lower }}/gpg"

# Docker Daemon Configuration
docker_daemon_config:
  # Security settings
  userland-proxy: false
  live-restore: true
  icc: false
  userns-remap: default
  no-new-privileges: true
  seccomp-profile: /etc/docker/seccomp-default.json

  # Logging
  log-driver: json-file
  log-opts:
    max-size: 50m
    max-file: "5"

  # Storage
  storage-driver: overlay2

  # Network security
  bridge: none
  ip-forward: false
  ip-masq: false
  iptables: false
  ipv6: false

  # Resource limits
  default-ulimits:
    nproc:
      hard: 65536
      soft: 65536
    nofile:
      hard: 65536
      soft: 65536

  # Registry security
  insecure-registries: []
  registry-mirrors: []

  # Experimental features
  experimental: false

# Docker Service Configuration
docker_service_state: started
docker_service_enabled: true
docker_restart_handler_state: restarted

# User Management
docker_users: []
docker_group: docker

# PHP 8.4 Specific Configuration
php_version: "8.4"
php_docker_image: "php:8.4-fpm-alpine"
php_extensions:
  - mysqli
  - pdo_mysql
  - opcache
  - redis
  - memcached
  - intl
  - gd
  - zip
  - bcmath
  - soap
  - xml
  - curl
  - json

# Docker Compose Configuration
docker_compose_projects: []
docker_compose_path: /opt/docker-compose

# Security Profiles
docker_security_profiles:
  - name: default-seccomp
    path: /etc/docker/seccomp-default.json
  - name: framework-apparmor
    path: /etc/apparmor.d/docker-framework

# Network Configuration
docker_networks:
  - name: framework-network
    driver: bridge
    ipam:
      config:
        - subnet: 172.20.0.0/16
          gateway: 172.20.0.1
    options:
      com.docker.network.bridge.enable_icc: "false"
      com.docker.network.bridge.enable_ip_masquerade: "false"

# Volume Configuration
docker_volumes:
  - name: framework-app-data
    driver: local
  - name: framework-db-data
    driver: local
  - name: framework-logs
    driver: local

# Health Check Configuration
docker_health_check_interval: 30s
docker_health_check_timeout: 10s
docker_health_check_retries: 3
docker_health_check_start_period: 60s

# Backup Configuration
docker_backup_enabled: "{{ backup_enabled | default(false) }}"
docker_backup_schedule: "0 2 * * *"  # Daily at 2 AM
docker_backup_retention: 7

# Monitoring Configuration
docker_monitoring_enabled: "{{ monitoring_enabled | default(true) }}"
docker_metrics_enabled: true
docker_metrics_address: "0.0.0.0:9323"

# Resource Limits (per environment)
docker_resource_limits:
  production:
    memory: "{{ docker_memory_limit | default('4g') }}"
    cpus: "{{ docker_cpu_limit | default('2.0') }}"
    pids: 1024
  staging:
    memory: "{{ docker_memory_limit | default('2g') }}"
    cpus: "{{ docker_cpu_limit | default('1.0') }}"
    pids: 512
  development:
    memory: "{{ docker_memory_limit | default('1g') }}"
    cpus: "{{ docker_cpu_limit | default('0.5') }}"
    pids: 256

# Container Security Options
docker_security_opts:
  - no-new-privileges:true
  - seccomp:/etc/docker/seccomp-default.json
  - apparmor:docker-framework
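The `docker_daemon_config` dict above is presumably rendered into `/etc/docker/daemon.json` by the `daemon.json.j2` template later in this diff. A minimal sketch of that rendering, using only a subset of the keys shown (the subset chosen is an assumption for illustration):

```python
import json

# Hypothetical rendering of a subset of docker_daemon_config into daemon.json;
# the key names mirror the role defaults in this diff.
daemon_config = {
    "icc": False,
    "live-restore": True,
    "userland-proxy": False,
    "log-driver": "json-file",
    "log-opts": {"max-size": "50m", "max-file": "5"},
    "storage-driver": "overlay2",
}

# dockerd expects valid JSON, so booleans render as true/false, not yes/no.
print(json.dumps(daemon_config, indent=2))
```

Note that `daemon.json` keys use dashes (e.g. `live-restore`), so a naive Jinja `to_nice_json` filter on the dict is usually enough here.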
@@ -1,52 +0,0 @@
---
# Docker Runtime Role Handlers

- name: restart docker
  service:
    name: docker
    state: restarted
  listen: restart docker

- name: reload docker
  service:
    name: docker
    state: reloaded
  listen: reload docker

- name: reload systemd
  systemd:
    daemon_reload: true
  listen: reload systemd

- name: restart containerd
  service:
    name: containerd
    state: restarted
  listen: restart containerd

- name: reload apparmor
  service:
    name: apparmor
    state: reloaded
  listen: reload apparmor
  when: ansible_os_family == 'Debian'

- name: restart docker-compose
  command: docker-compose restart
  args:
    chdir: "{{ item }}"
  loop: "{{ docker_compose_projects | map(attribute='path') | list }}"
  when: docker_compose_projects is defined and docker_compose_projects | length > 0
  listen: restart docker-compose

- name: prune docker system
  command: docker system prune -af --volumes
  listen: prune docker system

- name: update docker images
  command: docker image prune -af
  listen: update docker images

- name: rebuild php image
  command: /usr/local/bin/build-php-image.sh
  listen: rebuild php image
@@ -1,30 +0,0 @@
---
galaxy_info:
  role_name: docker-runtime
  author: Custom PHP Framework Team
  description: Secure Docker runtime environment with PHP 8.4 optimization
  company: michaelschiemer.de
  license: MIT
  min_ansible_version: 2.12
  platforms:
    - name: Ubuntu
      versions:
        - "20.04"
        - "22.04"
        - "24.04"
    - name: Debian
      versions:
        - "11"
        - "12"
  galaxy_tags:
    - docker
    - containers
    - security
    - php
    - runtime

dependencies: []

collections:
  - community.docker
  - ansible.posix
@@ -1,113 +0,0 @@
---
# Docker Daemon Configuration

- name: Create Docker configuration directory
  file:
    path: /etc/docker
    state: directory
    owner: root
    group: root
    mode: '0755'
  tags:
    - docker
    - config

- name: Configure Docker daemon
  template:
    src: daemon.json.j2
    dest: /etc/docker/daemon.json
    owner: root
    group: root
    mode: '0644'
    backup: true
  notify: restart docker
  tags:
    - docker
    - config

- name: Create Docker systemd service directory
  file:
    path: /etc/systemd/system/docker.service.d
    state: directory
    owner: root
    group: root
    mode: '0755'
  tags:
    - docker
    - systemd

- name: Configure Docker systemd service overrides
  template:
    src: docker-service-override.conf.j2
    dest: /etc/systemd/system/docker.service.d/override.conf
    owner: root
    group: root
    mode: '0644'
  notify:
    - reload systemd
    - restart docker
  tags:
    - docker
    - systemd

- name: Create Docker socket override directory
  file:
    path: /etc/systemd/system/docker.socket.d
    state: directory
    owner: root
    group: root
    mode: '0755'
  tags:
    - docker
    - systemd

- name: Create Docker socket service override
  template:
    src: docker-socket-override.conf.j2
    dest: /etc/systemd/system/docker.socket.d/override.conf
    owner: root
    group: root
    mode: '0644'
  notify:
    - reload systemd
    - restart docker
  tags:
    - docker
    - systemd

- name: Configure Docker log rotation
  template:
    src: docker-logrotate.j2
    dest: /etc/logrotate.d/docker
    owner: root
    group: root
    mode: '0644'
  tags:
    - docker
    - logging

- name: Create Docker logs directory
  file:
    path: /var/log/docker
    state: directory
    owner: root
    group: root
    mode: '0755'
  tags:
    - docker
    - logging

- name: Set up Docker environment
  template:
    src: docker-environment.j2
    dest: /etc/default/docker
    owner: root
    group: root
    mode: '0644'
  notify: restart docker
  tags:
    - docker
    - environment

- name: Configure Docker resource limits
  template:
    src: docker-limits.conf.j2
    dest: /etc/systemd/system/docker.service.d/limits.conf
    owner: root
    group: root
    mode: '0644'
  notify:
    - reload systemd
    - restart docker
  tags:
    - docker
    - limits
@@ -1,96 +0,0 @@
---
# Docker Engine Installation

- name: Remove old Docker versions
  package:
    name:
      - docker
      - docker-engine
      - docker.io
      - containerd
      - runc
    state: absent
  tags:
    - docker
    - cleanup

- name: Add Docker GPG key
  apt_key:
    url: "{{ docker_apt_gpg_key }}"
    state: present
  tags:
    - docker
    - repository

- name: Add Docker repository
  apt_repository:
    repo: "{{ docker_apt_repository }}"
    state: present
    update_cache: true
  tags:
    - docker
    - repository

- name: Install Docker Engine
  package:
    name:
      - docker-{{ docker_edition }}
      - docker-{{ docker_edition }}-cli
      - containerd.io
      - docker-buildx-plugin
      - docker-compose-plugin
    state: present
    update_cache: true
  notify: restart docker
  tags:
    - docker
    - packages

- name: Ensure Docker group exists
  group:
    name: "{{ docker_group }}"
    state: present
  tags:
    - docker
    - users

- name: Add users to Docker group
  user:
    name: "{{ item }}"
    groups: "{{ docker_group }}"
    append: true
  loop: "{{ docker_users }}"
  when: docker_users | length > 0
  tags:
    - docker
    - users

- name: Add deploy user to Docker group
  user:
    name: "{{ ansible_user }}"
    groups: "{{ docker_group }}"
    append: true
  when: ansible_user != 'root'
  tags:
    - docker
    - users

- name: Start and enable Docker service
  service:
    name: docker
    state: "{{ docker_service_state }}"
    enabled: "{{ docker_service_enabled }}"
  tags:
    - docker
    - service

- name: Wait for Docker daemon to be ready
  command: docker version
  register: docker_ready
  retries: 5
  delay: 10
  until: docker_ready.rc == 0
  changed_when: false
  tags:
    - docker
    - verification
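A minimal sketch of applying this role from a playbook; the host group name and variable values are assumptions, while the role name and variables match the files in this diff:

```yaml
# site.yml sketch; "docker_hosts" is a hypothetical inventory group
- hosts: docker_hosts
  become: true
  roles:
    - role: docker-runtime
      vars:
        docker_users:
          - deploy
        docker_monitoring_enabled: false
```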
@@ -1,77 +0,0 @@
---
# Docker Runtime Role - Main Tasks

- name: Include OS-specific variables
  include_vars: "{{ ansible_os_family }}.yml"
  tags:
    - docker
    - config

- name: Install Docker prerequisites
  include_tasks: prerequisites.yml
  tags:
    - docker
    - prerequisites

- name: Install Docker Engine
  include_tasks: install-docker.yml
  tags:
    - docker
    - install

- name: Configure Docker daemon
  include_tasks: configure-daemon.yml
  tags:
    - docker
    - config

- name: Setup Docker security
  include_tasks: security-setup.yml
  tags:
    - docker
    - security

- name: Install Docker Compose
  include_tasks: install-compose.yml
  tags:
    - docker
    - compose

- name: Setup Docker networks
  include_tasks: setup-networks.yml
  tags:
    - docker
    - network

- name: Setup Docker volumes
  include_tasks: setup-volumes.yml
  tags:
    - docker
    - volumes

- name: Configure PHP 8.4 optimization
  include_tasks: php-optimization.yml
  tags:
    - docker
    - php
    - optimization

- name: Setup monitoring and health checks
  include_tasks: monitoring.yml
  when: docker_monitoring_enabled | bool
  tags:
    - docker
    - monitoring

- name: Configure backup system
  include_tasks: backup-setup.yml
  when: docker_backup_enabled | bool
  tags:
    - docker
    - backup

- name: Verify Docker installation
  include_tasks: verification.yml
  tags:
    - docker
    - verification
@@ -1,177 +0,0 @@
|
||||
---
|
||||
# PHP 8.4 Docker Optimization
|
||||
|
||||
- name: Create PHP configuration directory
|
||||
file:
|
||||
path: /etc/docker/php
|
||||
state: directory
|
||||
owner: root
|
||||
    group: root
    mode: '0755'
  tags:
    - docker
    - php
    - config

- name: Create PHP 8.4 optimized Dockerfile template
  template:
    src: php84-dockerfile.j2
    dest: /etc/docker/php/Dockerfile.php84
    owner: root
    group: root
    mode: '0644'
  tags:
    - docker
    - php
    - dockerfile

- name: Create PHP-FPM configuration for containers
  template:
    src: php-fpm-docker.conf.j2
    dest: /etc/docker/php/php-fpm.conf
    owner: root
    group: root
    mode: '0644'
  tags:
    - docker
    - php
    - fpm

- name: Create PHP configuration for containers
  template:
    src: php-docker.ini.j2
    dest: /etc/docker/php/php.ini
    owner: root
    group: root
    mode: '0644'
  tags:
    - docker
    - php
    - config

- name: Create OPcache configuration
  template:
    src: opcache-docker.ini.j2
    dest: /etc/docker/php/opcache.ini
    owner: root
    group: root
    mode: '0644'
  tags:
    - docker
    - php
    - opcache

- name: Create Redis configuration for PHP
  template:
    src: redis-php.ini.j2
    dest: /etc/docker/php/redis.ini
    owner: root
    group: root
    mode: '0644'
  tags:
    - docker
    - php
    - redis

- name: Create PHP health check script
  template:
    src: php-health-check.sh.j2
    dest: /etc/docker/php/health-check.sh
    owner: root
    group: root
    mode: '0755'
  tags:
    - docker
    - php
    - health

- name: Pull PHP 8.4 base image
  docker_image:
    name: "{{ php_docker_image }}"
    source: pull
    state: present
  tags:
    - docker
    - php
    - image

- name: Create custom PHP 8.4 image build script
  template:
    src: build-php-image.sh.j2
    dest: /usr/local/bin/build-php-image.sh
    owner: root
    group: root
    mode: '0755'
  tags:
    - docker
    - php
    - build

- name: Create PHP container resource limits
  template:
    src: php-container-limits.json.j2
    dest: /etc/docker/php/container-limits.json
    owner: root
    group: root
    mode: '0644'
  tags:
    - docker
    - php
    - limits

- name: Configure PHP error logging for containers
  template:
    src: php-error-log.conf.j2
    dest: /etc/docker/php/error-log.conf
    owner: root
    group: root
    mode: '0644'
  tags:
    - docker
    - php
    - logging

- name: Create PHP performance tuning script
  template:
    src: php-performance-tune.sh.j2
    dest: /usr/local/bin/php-performance-tune.sh
    owner: root
    group: root
    mode: '0755'
  tags:
    - docker
    - php
    - performance

- name: Set up PHP session handling for containers
  template:
    src: php-session.ini.j2
    dest: /etc/docker/php/session.ini
    owner: root
    group: root
    mode: '0644'
  tags:
    - docker
    - php
    - session

- name: Create PHP security configuration
  template:
    src: php-security.ini.j2
    dest: /etc/docker/php/security.ini
    owner: root
    group: root
    mode: '0644'
  tags:
    - docker
    - php
    - security

- name: Build optimized PHP 8.4 image
  command: /usr/local/bin/build-php-image.sh
  args:
    creates: /var/lib/docker/image-builds/php84-custom.built
  tags:
    - docker
    - php
    - build
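The `creates:` argument keeps the build task idempotent: Ansible skips the command once the marker file exists. The `build-php-image.sh.j2` template itself is not part of this diff; the following is only a minimal sketch of the pattern, where the image tag, paths, and the `BUILD_CMD` override hook are assumptions, not the real script:

```shell
#!/bin/sh
# Hypothetical sketch of build-php-image.sh: build the custom image once,
# then drop a marker file so the Ansible task's `creates:` check skips it.
MARKER_DIR="${MARKER_DIR:-/var/lib/docker/image-builds}"
BUILD_CMD="${BUILD_CMD:-docker build -t php84-custom:latest -f /etc/docker/php/Dockerfile.php84 /etc/docker/php}"

build_php_image() {
    marker="$MARKER_DIR/php84-custom.built"
    if [ -f "$marker" ]; then
        echo "image already built, skipping"
        return 0
    fi
    mkdir -p "$MARKER_DIR" &&
    sh -c "$BUILD_CMD" &&
    touch "$marker"
}
```

Calling `build_php_image` a second time only prints the skip message, mirroring what `creates:` does at the task level.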
@@ -1,175 +0,0 @@
---
# Docker Security Configuration

- name: Create Docker security profiles directory
  file:
    path: /etc/docker/security
    state: directory
    owner: root
    group: root
    mode: '0755'
  tags:
    - docker
    - security

- name: Install seccomp security profile
  template:
    src: seccomp-default.json.j2
    dest: /etc/docker/seccomp-default.json
    owner: root
    group: root
    mode: '0644'
  tags:
    - docker
    - security
    - seccomp

- name: Install AppArmor profile for Docker
  template:
    src: docker-framework-apparmor.j2
    dest: /etc/apparmor.d/docker-framework
    owner: root
    group: root
    mode: '0644'
  notify: reload apparmor
  when: ansible_os_family == 'Debian'
  tags:
    - docker
    - security
    - apparmor

- name: Load AppArmor profile
  command: apparmor_parser -r -W /etc/apparmor.d/docker-framework
  when: ansible_os_family == 'Debian'
  changed_when: false
  tags:
    - docker
    - security
    - apparmor

- name: Configure user namespace mapping
  template:
    src: subuid.j2
    dest: /etc/subuid
    owner: root
    group: root
    mode: '0644'
    backup: true
  tags:
    - docker
    - security
    - userns

- name: Configure group namespace mapping
  template:
    src: subgid.j2
    dest: /etc/subgid
    owner: root
    group: root
    mode: '0644'
    backup: true
  tags:
    - docker
    - security
    - userns
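The `subuid.j2` and `subgid.j2` templates are not included in this diff. For orientation: both `/etc/subuid` and `/etc/subgid` use one `name:first_id:count` mapping per line. Assuming Docker's default remap user is used for the `userns-remap` setting, the rendered files could plausibly contain something like the following (the `dockremap` name and the 100000/65536 range are illustrative defaults, not values taken from the templates):

```
# /etc/subuid and /etc/subgid -- format: name:first_id:count
dockremap:100000:65536
```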
- name: Create Docker TLS certificates directory
  file:
    path: /etc/docker/certs
    state: directory
    owner: root
    group: docker
    mode: '0750'
  tags:
    - docker
    - security
    - tls

- name: Generate Docker TLS certificates
  command: >
    openssl req -new -x509 -days 365 -nodes
    -out /etc/docker/certs/server-cert.pem
    -keyout /etc/docker/certs/server-key.pem
    -subj "/CN={{ inventory_hostname }}"
  args:
    creates: /etc/docker/certs/server-cert.pem
  tags:
    - docker
    - security
    - tls

- name: Set correct permissions on Docker TLS certificates
  file:
    path: "{{ item.path }}"
    owner: root
    group: docker
    mode: "{{ item.mode }}"
  loop:
    - { path: "/etc/docker/certs/server-cert.pem", mode: "0644" }
    - { path: "/etc/docker/certs/server-key.pem", mode: "0640" }
  tags:
    - docker
    - security
    - tls
    - permissions
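Since the certificate is only generated once (`creates:`), it will silently expire after the 365 days. A small smoke test can catch that ahead of time; this sketch reproduces the same openssl invocation against a scratch directory and then checks the expiry window (the hostname and directory are placeholders, not the real `/etc/docker/certs` paths):

```shell
# Generate a throwaway self-signed cert the same way the task does,
# then verify it will not expire within the next 30 days.
CERT_DIR=$(mktemp -d)

openssl req -new -x509 -days 365 -nodes \
  -out "$CERT_DIR/server-cert.pem" \
  -keyout "$CERT_DIR/server-key.pem" \
  -subj "/CN=example-host" 2>/dev/null

# Non-zero exit if the certificate expires within 30 days.
openssl x509 -checkend $((30 * 24 * 3600)) -noout -in "$CERT_DIR/server-cert.pem"
```

Running the `-checkend` line against `/etc/docker/certs/server-cert.pem` from a cron job would pair naturally with the audit schedule below.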
- name: Configure Docker Content Trust
  lineinfile:
    path: /etc/environment
    line: "DOCKER_CONTENT_TRUST=1"
    create: true
  when: environment == 'production'
  tags:
    - docker
    - security
    - trust

- name: Install Docker security scanning tools
  package:
    name:
      - runc
      - docker-bench-security
    state: present
  ignore_errors: true
  tags:
    - docker
    - security
    - tools

- name: Create Docker security audit script
  template:
    src: docker-security-audit.sh.j2
    dest: /usr/local/bin/docker-security-audit.sh
    owner: root
    group: root
    mode: '0755'
  tags:
    - docker
    - security
    - audit

- name: Schedule Docker security audits
  cron:
    name: "Docker security audit"
    minute: "0"
    hour: "5"
    weekday: "1"
    job: "/usr/local/bin/docker-security-audit.sh | mail -s 'Docker Security Audit - {{ inventory_hostname }}' {{ ssl_email }}"
    user: root
  when: environment == 'production'
  tags:
    - docker
    - security
    - audit
    - cron

- name: Configure Docker socket security
  file:
    path: /var/run/docker.sock
    owner: root
    group: docker
    mode: '0660'
  tags:
    - docker
    - security
    - socket
@@ -1,61 +0,0 @@
{
  "# Custom PHP Framework Docker Daemon Configuration": "{{ environment | upper }}",

  "# Security Settings": "Hardened configuration for production use",
  "userland-proxy": {{ docker_daemon_config['userland-proxy'] | tojson }},
  "live-restore": {{ docker_daemon_config['live-restore'] | tojson }},
  "icc": {{ docker_daemon_config['icc'] | tojson }},
  "userns-remap": "{{ docker_daemon_config['userns-remap'] }}",
  "no-new-privileges": {{ docker_daemon_config['no-new-privileges'] | tojson }},
{% if docker_daemon_config['seccomp-profile'] is defined %}
  "seccomp-profile": "{{ docker_daemon_config['seccomp-profile'] }}",
{% endif %}

  "# Logging Configuration": "Structured logging with rotation",
  "log-driver": "{{ docker_daemon_config['log-driver'] }}",
  "log-opts": {{ docker_daemon_config['log-opts'] | tojson }},

  "# Storage Configuration": "Optimized for performance",
  "storage-driver": "{{ docker_daemon_config['storage-driver'] }}",
{% if docker_daemon_config['storage-opts'] is defined %}
  "storage-opts": {{ docker_daemon_config['storage-opts'] | tojson }},
{% endif %}

  "# Network Security": "Disabled for security",
{% if docker_daemon_config['bridge'] is defined and docker_daemon_config['bridge'] %}
  "bridge": "{{ docker_daemon_config['bridge'] }}",
{% endif %}
  "ip-forward": {{ docker_daemon_config['ip-forward'] | tojson }},
  "ip-masq": {{ docker_daemon_config['ip-masq'] | tojson }},
  "iptables": {{ docker_daemon_config['iptables'] | tojson }},
  "ipv6": {{ docker_daemon_config['ipv6'] | tojson }},

  "# Resource Limits": "Default container limits",
  "default-ulimits": {{ docker_daemon_config['default-ulimits'] | tojson }},

  "# Registry Configuration": "Secure registry access",
{% if docker_daemon_config['insecure-registries'] | length > 0 %}
  "insecure-registries": {{ docker_daemon_config['insecure-registries'] | tojson }},
{% endif %}
{% if docker_daemon_config['registry-mirrors'] | length > 0 %}
  "registry-mirrors": {{ docker_daemon_config['registry-mirrors'] | tojson }},
{% endif %}

  "# Monitoring and Metrics": "Enable for production monitoring",
{% if docker_metrics_enabled %}
  "metrics-addr": "{{ docker_metrics_address }}",
{% endif %}

  "# Runtime Configuration": "Optimized for PHP 8.4 workloads",
  "default-runtime": "runc",
  "runtimes": {
    "runc": {
      "path": "/usr/bin/runc"
    }
  },

  "# Debug and Development": "Environment specific settings",
  "debug": {{ (environment == 'development') | tojson }},
  "experimental": {{ (docker_daemon_config['experimental'] or docker_metrics_enabled) | tojson }}
}
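Because `daemon.json` supports no real comments, the template smuggles them in as ordinary string-valued keys, and the conditional blocks make a stray trailing comma easy to produce. Linting the rendered file before restarting dockerd is cheap insurance; note also that stricter dockerd versions reject unknown configuration keys, so the `"# ..."` entries may not survive every Docker release. The JSON sample below is a hypothetical excerpt of what the template could render, not actual output:

```shell
# Write a hypothetical rendered excerpt and lint it as strict JSON.
cat > /tmp/daemon-sample.json <<'EOF'
{
  "# Security Settings": "Hardened configuration for production use",
  "userland-proxy": false,
  "live-restore": true,
  "icc": false,
  "log-driver": "json-file",
  "log-opts": {"max-size": "10m", "max-file": "3"}
}
EOF

# Fails (non-zero exit) on trailing commas or other JSON syntax errors.
python3 -m json.tool /tmp/daemon-sample.json > /dev/null && echo "valid JSON"
```

In a deploy pipeline this lint step would run between templating the file and restarting the Docker daemon.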
@@ -1,101 +0,0 @@
# Custom PHP 8.4 Dockerfile for {{ domain_name }}
# Optimized for Custom PHP Framework
# Environment: {{ environment | upper }}

FROM php:8.4-fpm-alpine

# Build arguments
ARG PHP_VERSION="{{ php_version }}"
ARG BUILD_DATE="{{ ansible_date_time.iso8601 }}"
ARG VCS_REF="{{ ansible_hostname }}"

# Labels for container metadata
LABEL maintainer="{{ ssl_email }}" \
      org.label-schema.build-date="${BUILD_DATE}" \
      org.label-schema.vcs-ref="${VCS_REF}" \
      org.label-schema.schema-version="1.0" \
      org.label-schema.name="custom-php-framework" \
      org.label-schema.description="Custom PHP Framework with PHP 8.4" \
      org.label-schema.version="${PHP_VERSION}"

# Install system dependencies
RUN apk add --no-cache \
    # Build dependencies
    $PHPIZE_DEPS \
    autoconf \
    gcc \
    g++ \
    make \
    # Runtime dependencies
    curl-dev \
    freetype-dev \
    icu-dev \
    jpeg-dev \
    libpng-dev \
    libxml2-dev \
    libzip-dev \
    oniguruma-dev \
    openssl-dev \
    postgresql-dev \
    sqlite-dev \
    # System tools
    git \
    unzip \
    wget

# Install PHP extensions
{% for extension in php_extensions %}
RUN docker-php-ext-install {{ extension }}
{% endfor %}

# Install and configure OPcache
RUN docker-php-ext-install opcache

# Install Redis extension
RUN pecl install redis && docker-php-ext-enable redis

# Install Xdebug for development
{% if environment == 'development' %}
RUN pecl install xdebug && docker-php-ext-enable xdebug
{% endif %}

# Configure PHP
COPY php.ini /usr/local/etc/php/conf.d/99-custom.ini
COPY opcache.ini /usr/local/etc/php/conf.d/10-opcache.ini
COPY redis.ini /usr/local/etc/php/conf.d/20-redis.ini
COPY security.ini /usr/local/etc/php/conf.d/30-security.ini
COPY session.ini /usr/local/etc/php/conf.d/40-session.ini

# Configure PHP-FPM
COPY php-fpm.conf /usr/local/etc/php-fpm.d/www.conf

# Install Composer
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer \
    && composer --version

# Create application user
RUN addgroup -g 1000 -S www && \
    adduser -u 1000 -S www -G www

# Set up application directory
WORKDIR /var/www/html

# Set proper permissions
RUN chown -R www:www /var/www/html

# Security: Run as non-root user
USER www

# Health check
COPY health-check.sh /usr/local/bin/health-check.sh
HEALTHCHECK --interval={{ docker_health_check_interval }} \
    --timeout={{ docker_health_check_timeout }} \
    --start-period={{ docker_health_check_start_period }} \
    --retries={{ docker_health_check_retries }} \
    CMD /usr/local/bin/health-check.sh

# Expose PHP-FPM port
EXPOSE 9000

# Default command
CMD ["php-fpm"]
@@ -1,148 +0,0 @@
---
# Monitoring Role Default Variables

# General Configuration
# Plain values here: role defaults are already the lowest-precedence layer,
# so they can be overridden from inventory or play vars without self-referencing
# "{{ var | default(...) }}" patterns (which recurse).
monitoring_enabled: true
health_checks_enabled: true
monitoring_user: monitoring
monitoring_group: monitoring
monitoring_home: /opt/monitoring

# Node Exporter Configuration
node_exporter_enabled: true
node_exporter_version: "1.6.1"
node_exporter_port: 9100
node_exporter_bind_address: "127.0.0.1"
node_exporter_user: node_exporter
node_exporter_group: node_exporter

# Prometheus Configuration (basic)
prometheus_enabled: false  # Can be enabled for advanced monitoring
prometheus_version: "2.45.0"
prometheus_port: 9090
prometheus_bind_address: "127.0.0.1"
prometheus_retention_time: "15d"
prometheus_retention_size: "10GB"

# Health Check Configuration
health_check_interval: 30
health_check_timeout: 10
health_check_retries: 3

# Service Health Checks
service_checks:
  - name: nginx
    command: "systemctl is-active nginx"
    interval: 30
    timeout: 5
    retries: 2

  - name: docker
    command: "docker version"
    interval: 60
    timeout: 10
    retries: 3

  - name: php-fpm
    command: "docker exec php php-fpm -t"
    interval: 60
    timeout: 15
    retries: 2

  - name: mysql
    command: "docker exec mysql mysqladmin ping -h localhost"
    interval: 60
    timeout: 10
    retries: 3

# Application Health Checks
app_health_checks:
  - name: framework-health
    url: "https://{{ domain_name }}/health"
    method: GET
    expected_status: 200
    timeout: 10
    interval: 30

  - name: api-health
    url: "https://{{ domain_name }}/api/health"
    method: GET
    expected_status: 200
    timeout: 5
    interval: 60

# System Monitoring Thresholds
monitoring_thresholds:
  cpu_usage_warning: 70
  cpu_usage_critical: 90
  memory_usage_warning: 80
  memory_usage_critical: 95
  disk_usage_warning: 80
  disk_usage_critical: 90
  load_average_warning: 2.0
  load_average_critical: 4.0
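Each warning/critical pair in `monitoring_thresholds` is applied the same way by the generated scripts: above critical wins, then above warning, else OK. A minimal sketch of that comparison, with the `cpu_usage` pair as example values:

```shell
# classify VALUE WARNING CRITICAL -> prints OK / WARNING / CRITICAL
classify() {
    value=$1; warning=$2; critical=$3
    if [ "$value" -gt "$critical" ]; then
        echo CRITICAL
    elif [ "$value" -gt "$warning" ]; then
        echo WARNING
    else
        echo OK
    fi
}

classify 85 70 90   # prints WARNING (above 70, at or below 90)
```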
# Log Monitoring
log_monitoring_enabled: true
log_files_to_monitor:
  - path: /var/log/nginx/error.log
    patterns:
      - "error"
      - "warn"
      - "crit"
    alert_threshold: 10  # alerts per minute

  - path: /var/log/nginx/access.log
    patterns:
      - "5[0-9][0-9]"  # 5xx errors
      - "4[0-9][0-9]"  # 4xx errors
    alert_threshold: 20

  - path: /var/log/auth.log
    patterns:
      - "Failed password"
      - "authentication failure"
    alert_threshold: 5

# Alerting Configuration
alerting_enabled: true
alert_email: "{{ ssl_email }}"
alert_methods:
  - email
  - log

# Backup Monitoring
backup_monitoring_enabled: "{{ backup_enabled | default(false) }}"
backup_check_command: "/usr/local/bin/check-backups.sh"
backup_alert_threshold: 24  # hours

# Performance Monitoring
performance_monitoring_enabled: true
performance_check_interval: 300  # 5 minutes
performance_metrics:
  - response_time
  - throughput
  - error_rate
  - resource_usage

# Container Monitoring
docker_monitoring_enabled: true
docker_stats_interval: 60
# "!unsafe" keeps Jinja2 from trying to evaluate Docker's own {% raw %}{{.Names}}{% endraw %}-style
# Go-template placeholders as Ansible variables.
docker_health_check_command: !unsafe "docker ps --format 'table {{.Names}}\\t{{.Status}}\\t{{.Ports}}'"

# Custom Framework Monitoring
framework_monitoring:
  console_health_check: "php console.php framework:health-check"
  mcp_server_check: "php console.php mcp:server --test"
  queue_monitoring: "php console.php queue:status"
  cache_monitoring: "php console.php cache:status"

# Monitoring Scripts Location
monitoring_scripts_dir: "{{ monitoring_home }}/scripts"
monitoring_logs_dir: "/var/log/monitoring"
monitoring_config_dir: "{{ monitoring_home }}/config"

# Cleanup Configuration
log_retention_days: 30
metrics_retention_days: 7
cleanup_schedule: "0 2 * * *"  # Daily at 2 AM
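The access-log entry above counts lines matching the 4xx/5xx regexes against an `alert_threshold` of 20. A sketch of that counting step (the log lines are invented samples; note that matching bare `4[0-9][0-9]` anywhere in a line is approximate, since byte counts or IPs can also contain three-digit runs):

```shell
# Count log lines matching any of the configured error patterns (stdin).
count_errors() {
    grep -E -c '(5[0-9][0-9]|4[0-9][0-9])' || true
}

printf '%s\n' \
  'GET / 200 3072' \
  'GET /missing 404 152' \
  'POST /api 502 88' | count_errors   # prints 2
```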
@@ -1,45 +0,0 @@
---
# Monitoring Role Handlers

- name: reload systemd
  systemd:
    daemon_reload: true
  listen: reload systemd

- name: restart monitoring
  systemd:
    name: "{{ item }}"
    state: restarted
  loop:
    - health-check.service
  listen: restart monitoring
  ignore_errors: true

- name: restart node-exporter
  systemd:
    name: node_exporter
    state: restarted
  listen: restart node-exporter
  when: node_exporter_enabled | bool

- name: start monitoring services
  systemd:
    name: "{{ item }}"
    state: started
    enabled: true
  loop:
    - health-check.timer
  listen: start monitoring services
  ignore_errors: true

- name: reload monitoring config
  command: "{{ monitoring_scripts_dir }}/monitoring-utils.sh reload"
  listen: reload monitoring config
  become_user: "{{ monitoring_user }}"
  ignore_errors: true

- name: test alerts
  command: "{{ monitoring_scripts_dir }}/send-alert.sh TEST 'Test Alert' 'This is a test alert from Ansible deployment'"
  listen: test alerts
  become_user: "{{ monitoring_user }}"
  ignore_errors: true
@@ -1,31 +0,0 @@
---
galaxy_info:
  role_name: monitoring
  author: Custom PHP Framework Team
  description: System monitoring and health checks for PHP applications
  company: michaelschiemer.de
  license: MIT
  min_ansible_version: "2.12"
  platforms:
    - name: Ubuntu
      versions:
        - "20.04"
        - "22.04"
        - "24.04"
    - name: Debian
      versions:
        - "11"
        - "12"
  galaxy_tags:
    - monitoring
    - health-checks
    - metrics
    - alerting
    - prometheus
    - node-exporter

dependencies: []

collections:
  - community.general
  - ansible.posix
@@ -1,112 +0,0 @@
---
# Health Checks Configuration

- name: Create health check scripts
  template:
    src: health-check.sh.j2
    dest: "{{ monitoring_scripts_dir }}/health-check-{{ item.name }}.sh"
    owner: "{{ monitoring_user }}"
    group: "{{ monitoring_group }}"
    mode: '0755'
  loop: "{{ service_checks }}"
  tags:
    - monitoring
    - health-checks
    - scripts

- name: Create application health check script
  template:
    src: app-health-check.sh.j2
    dest: "{{ monitoring_scripts_dir }}/app-health-check.sh"
    owner: "{{ monitoring_user }}"
    group: "{{ monitoring_group }}"
    mode: '0755'
  tags:
    - monitoring
    - health-checks
    - application

- name: Create framework-specific health checks
  template:
    src: framework-health-check.sh.j2
    dest: "{{ monitoring_scripts_dir }}/framework-health-check.sh"
    owner: "{{ monitoring_user }}"
    group: "{{ monitoring_group }}"
    mode: '0755'
  tags:
    - monitoring
    - health-checks
    - framework

- name: Create comprehensive health check runner
  template:
    src: run-health-checks.sh.j2
    dest: "{{ monitoring_scripts_dir }}/run-health-checks.sh"
    owner: "{{ monitoring_user }}"
    group: "{{ monitoring_group }}"
    mode: '0755'
  tags:
    - monitoring
    - health-checks
    - runner

- name: Create health check systemd service
  template:
    src: health-check.service.j2
    dest: /etc/systemd/system/health-check.service
    owner: root
    group: root
    mode: '0644'
  notify: reload systemd
  tags:
    - monitoring
    - health-checks
    - systemd

- name: Create health check systemd timer
  template:
    src: health-check.timer.j2
    dest: /etc/systemd/system/health-check.timer
    owner: root
    group: root
    mode: '0644'
  notify: reload systemd
  tags:
    - monitoring
    - health-checks
    - systemd

- name: Enable and start health check timer
  systemd:
    name: health-check.timer
    enabled: true
    state: started
    daemon_reload: true
  tags:
    - monitoring
    - health-checks
    - systemd

- name: Create health check status endpoint
  template:
    src: health-status.php.j2
    dest: /var/www/html/health
    owner: "{{ nginx_user | default('www-data') }}"
    group: "{{ nginx_group | default('www-data') }}"
    mode: '0644'
  tags:
    - monitoring
    - health-checks
    - web

- name: Schedule individual health checks
  cron:
    name: "Health check - {{ item.name }}"
    minute: "*/{{ item.interval }}"
    job: "{{ monitoring_scripts_dir }}/health-check-{{ item.name }}.sh"
    user: "{{ monitoring_user }}"
  loop: "{{ service_checks }}"
  tags:
    - monitoring
    - health-checks
    - cron
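One caveat on the last task: it renders `minute: "*/{{ item.interval }}"`, but cron minute steps must fall within 1..59, so the `interval: 60` values configured for several `service_checks` would produce `*/60`, which most cron implementations reject (the intervals read like seconds, not minutes). A quick pre-flight check for that constraint:

```shell
# A cron minute step "*/N" is only well-formed for N in 1..59.
valid_cron_minute_step() {
    [ "$1" -ge 1 ] && [ "$1" -le 59 ]
}

for i in 30 60; do
    if valid_cron_minute_step "$i"; then
        echo "$i ok"          # 30 ok
    else
        echo "$i invalid"     # 60 invalid
    fi
done
```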
@@ -1,67 +0,0 @@
---
# Monitoring Role - Main Tasks

- name: Include OS-specific variables
  include_vars: "{{ ansible_os_family }}.yml"
  tags:
    - monitoring
    - config

- name: Setup monitoring infrastructure
  include_tasks: setup-monitoring.yml
  tags:
    - monitoring
    - setup

- name: Install and configure Node Exporter
  include_tasks: node-exporter.yml
  when: node_exporter_enabled | bool
  tags:
    - monitoring
    - node-exporter

- name: Setup health checks
  include_tasks: health-checks.yml
  when: health_checks_enabled | bool
  tags:
    - monitoring
    - health-checks

- name: Configure system monitoring
  include_tasks: system-monitoring.yml
  tags:
    - monitoring
    - system

- name: Setup application monitoring
  include_tasks: app-monitoring.yml
  tags:
    - monitoring
    - application

- name: Configure Docker monitoring
  include_tasks: docker-monitoring.yml
  when: docker_monitoring_enabled | bool
  tags:
    - monitoring
    - docker

- name: Setup log monitoring
  include_tasks: log-monitoring.yml
  when: log_monitoring_enabled | bool
  tags:
    - monitoring
    - logs

- name: Configure alerting
  include_tasks: alerting.yml
  when: alerting_enabled | bool
  tags:
    - monitoring
    - alerting

- name: Setup monitoring cleanup
  include_tasks: cleanup.yml
  tags:
    - monitoring
    - cleanup
@@ -1,79 +0,0 @@
---
# Monitoring Infrastructure Setup

- name: Create monitoring group
  group:
    name: "{{ monitoring_group }}"
    system: true
  tags:
    - monitoring
    - users

- name: Create monitoring user
  user:
    name: "{{ monitoring_user }}"
    group: "{{ monitoring_group }}"
    system: true
    shell: /bin/bash
    home: "{{ monitoring_home }}"
    create_home: true
  tags:
    - monitoring
    - users

- name: Create monitoring directories
  file:
    path: "{{ item }}"
    state: directory
    owner: "{{ monitoring_user }}"
    group: "{{ monitoring_group }}"
    mode: '0755'
  loop:
    - "{{ monitoring_home }}"
    - "{{ monitoring_scripts_dir }}"
    - "{{ monitoring_logs_dir }}"
    - "{{ monitoring_config_dir }}"
  tags:
    - monitoring
    - directories

- name: Install monitoring dependencies
  package:
    name:
      - curl
      - wget
      - jq
      - bc
      - mailutils
      - logrotate
    state: present
  tags:
    - monitoring
    - packages

- name: Create monitoring configuration file
  template:
    src: monitoring.conf.j2
    dest: "{{ monitoring_config_dir }}/monitoring.conf"
    owner: "{{ monitoring_user }}"
    group: "{{ monitoring_group }}"
    mode: '0644'
  tags:
    - monitoring
    - config

- name: Create monitoring utility scripts
  template:
    src: "{{ item }}.sh.j2"
    dest: "{{ monitoring_scripts_dir }}/{{ item }}.sh"
    owner: "{{ monitoring_user }}"
    group: "{{ monitoring_group }}"
    mode: '0755'
  loop:
    - monitoring-utils
    - send-alert
    - check-thresholds
  tags:
    - monitoring
    - scripts
@@ -1,108 +0,0 @@
---
# System Resource Monitoring

- name: Create system monitoring script
  template:
    src: system-monitor.sh.j2
    dest: "{{ monitoring_scripts_dir }}/system-monitor.sh"
    owner: "{{ monitoring_user }}"
    group: "{{ monitoring_group }}"
    mode: '0755'
  tags:
    - monitoring
    - system
    - scripts

- name: Create resource usage checker
  template:
    src: check-resources.sh.j2
    dest: "{{ monitoring_scripts_dir }}/check-resources.sh"
    owner: "{{ monitoring_user }}"
    group: "{{ monitoring_group }}"
    mode: '0755'
  tags:
    - monitoring
    - system
    - resources

- name: Create disk usage monitoring script
  template:
    src: check-disk-usage.sh.j2
    dest: "{{ monitoring_scripts_dir }}/check-disk-usage.sh"
    owner: "{{ monitoring_user }}"
    group: "{{ monitoring_group }}"
    mode: '0755'
  tags:
    - monitoring
    - system
    - disk

- name: Create memory monitoring script
  template:
    src: check-memory.sh.j2
    dest: "{{ monitoring_scripts_dir }}/check-memory.sh"
    owner: "{{ monitoring_user }}"
    group: "{{ monitoring_group }}"
    mode: '0755'
  tags:
    - monitoring
    - system
    - memory

- name: Create CPU monitoring script
  template:
    src: check-cpu.sh.j2
    dest: "{{ monitoring_scripts_dir }}/check-cpu.sh"
    owner: "{{ monitoring_user }}"
    group: "{{ monitoring_group }}"
    mode: '0755'
  tags:
    - monitoring
    - system
    - cpu

- name: Create load average monitoring script
  template:
    src: check-load.sh.j2
    dest: "{{ monitoring_scripts_dir }}/check-load.sh"
    owner: "{{ monitoring_user }}"
    group: "{{ monitoring_group }}"
    mode: '0755'
  tags:
    - monitoring
    - system
    - load

- name: Schedule system resource monitoring
  cron:
    name: "System resource monitoring"
    minute: "*/5"
    job: "{{ monitoring_scripts_dir }}/system-monitor.sh"
    user: "{{ monitoring_user }}"
  tags:
    - monitoring
    - system
    - cron

- name: Schedule resource usage alerts
  cron:
    name: "Resource usage alerts"
    minute: "*/10"
    job: "{{ monitoring_scripts_dir }}/check-resources.sh"
    user: "{{ monitoring_user }}"
  tags:
    - monitoring
    - system
    - alerts

- name: Create system monitoring log rotation
  template:
    src: system-monitoring-logrotate.j2
    dest: /etc/logrotate.d/system-monitoring
    owner: root
    group: root
    mode: '0644'
  tags:
    - monitoring
    - system
    - logrotate
@@ -1,95 +0,0 @@
|
||||
#!/bin/bash
|
||||
# System Resource Monitoring Script
|
||||
# Custom PHP Framework - {{ environment | upper }}
|
||||
# Generated by Ansible
|
||||
|
||||
set -euo pipefail
|
||||
|
||||
# Configuration
|
||||
LOG_DIR="{{ monitoring_logs_dir }}"
|
||||
LOG_FILE="${LOG_DIR}/system-monitor.log"
|
||||
ALERT_SCRIPT="{{ monitoring_scripts_dir }}/send-alert.sh"
|
||||
CONFIG_FILE="{{ monitoring_config_dir }}/monitoring.conf"
|
||||
|
||||
# Load configuration
|
||||
source "${CONFIG_FILE}"
|
||||
|
||||
# Create log directory if it doesn't exist
|
||||
mkdir -p "${LOG_DIR}"
|
||||
|
||||
# Function to log with timestamp
|
||||
log() {
|
||||
echo "[$(date '+%Y-%m-%d %H:%M:%S')] $*" >> "${LOG_FILE}"
|
||||
}
|
||||
|
||||
# Function to check CPU usage
|
||||
check_cpu() {
|
||||
local cpu_usage
|
||||
cpu_usage=$(top -bn1 | grep "Cpu(s)" | sed "s/.*, *\([0-9.]*\)%* id.*/\1/" | awk '{print 100 - $1}')
|
||||
cpu_usage=${cpu_usage%.*} # Remove decimal part
|
||||
|
||||
log "CPU Usage: ${cpu_usage}%"
|
||||
|
||||
if (( cpu_usage > {{ monitoring_thresholds.cpu_usage_critical }} )); then
|
||||
"${ALERT_SCRIPT}" "CRITICAL" "CPU Usage Critical" "CPU usage is ${cpu_usage}% (Critical threshold: {{ monitoring_thresholds.cpu_usage_critical }}%)"
|
||||
elif (( cpu_usage > {{ monitoring_thresholds.cpu_usage_warning }} )); then
|
||||
"${ALERT_SCRIPT}" "WARNING" "CPU Usage High" "CPU usage is ${cpu_usage}% (Warning threshold: {{ monitoring_thresholds.cpu_usage_warning }}%)"
|
||||
fi
|
||||
}
|
||||
|
||||
# Function to check memory usage
|
||||
check_memory() {
|
||||
local mem_usage
|
||||
mem_usage=$(free | grep Mem | awk '{printf "%.0f", $3/$2 * 100.0}')
|
||||
|
||||
    log "Memory Usage: ${mem_usage}%"

    if (( mem_usage > {{ monitoring_thresholds.memory_usage_critical }} )); then
        "${ALERT_SCRIPT}" "CRITICAL" "Memory Usage Critical" "Memory usage is ${mem_usage}% (Critical threshold: {{ monitoring_thresholds.memory_usage_critical }}%)"
    elif (( mem_usage > {{ monitoring_thresholds.memory_usage_warning }} )); then
        "${ALERT_SCRIPT}" "WARNING" "Memory Usage High" "Memory usage is ${mem_usage}% (Warning threshold: {{ monitoring_thresholds.memory_usage_warning }}%)"
    fi
}

# Function to check disk usage
check_disk() {
    local disk_usage
    disk_usage=$(df / | tail -1 | awk '{print $5}' | sed 's/%//')

    log "Disk Usage: ${disk_usage}%"

    if (( disk_usage > {{ monitoring_thresholds.disk_usage_critical }} )); then
        "${ALERT_SCRIPT}" "CRITICAL" "Disk Usage Critical" "Disk usage is ${disk_usage}% (Critical threshold: {{ monitoring_thresholds.disk_usage_critical }}%)"
    elif (( disk_usage > {{ monitoring_thresholds.disk_usage_warning }} )); then
        "${ALERT_SCRIPT}" "WARNING" "Disk Usage High" "Disk usage is ${disk_usage}% (Warning threshold: {{ monitoring_thresholds.disk_usage_warning }}%)"
    fi
}

# Function to check load average
check_load() {
    local load_avg
    load_avg=$(uptime | awk -F'load average:' '{ print $2 }' | cut -d, -f1 | tr -d ' ')

    log "Load Average: ${load_avg}"

    if (( $(echo "${load_avg} > {{ monitoring_thresholds.load_average_critical }}" | bc -l) )); then
        "${ALERT_SCRIPT}" "CRITICAL" "Load Average Critical" "Load average is ${load_avg} (Critical threshold: {{ monitoring_thresholds.load_average_critical }})"
    elif (( $(echo "${load_avg} > {{ monitoring_thresholds.load_average_warning }}" | bc -l) )); then
        "${ALERT_SCRIPT}" "WARNING" "Load Average High" "Load average is ${load_avg} (Warning threshold: {{ monitoring_thresholds.load_average_warning }})"
    fi
}

# Main monitoring function
main() {
    log "Starting system monitoring check"

    check_cpu
    check_memory
    check_disk
    check_load

    log "System monitoring check completed"
}

# Run main function
main "$@"
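Note that `check_load` differs from the other checks: bash's `(( ))` arithmetic only handles integers, so the load average (a float like `0.75`) is compared via `bc -l`, while CPU, memory, and disk percentages use plain integer comparison. A minimal standalone sketch of both patterns (the threshold values here are made-up placeholders for the templated `{{ monitoring_thresholds.* }}` variables; requires `bc`):

```shell
#!/usr/bin/env bash
# Integer comparison, as used for the cpu/mem/disk percentage checks.
int_over() {
    (( $1 > $2 ))
}

# Float comparison via bc, as used for the load-average check;
# bc -l prints 1 (true) or 0 (false), which (( )) then evaluates.
float_over() {
    (( $(echo "$1 > $2" | bc -l) ))
}

int_over 85 80        && echo "integer threshold exceeded"
float_over "4.5" "4.0" && echo "float threshold exceeded"
```

The `bc` round-trip is what lets the script keep the same `if/elif` shape for all four checks.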
@@ -1,184 +0,0 @@
---
# Nginx Proxy Role Default Variables

# Nginx Installation
nginx_version: "latest"
nginx_package: nginx
nginx_service: nginx
nginx_user: www-data
nginx_group: www-data

# SSL Configuration
ssl_provider: "{{ ssl_provider | default('letsencrypt') }}"
ssl_email: "{{ ssl_email }}"
ssl_certificate_path: "{{ ssl_certificate_path | default('/etc/letsencrypt/live/' + domain_name) }}"
ssl_protocols:
  - TLSv1.2
  - TLSv1.3
ssl_ciphers: "ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384"
ssl_prefer_server_ciphers: true
ssl_session_cache: "shared:SSL:10m"
ssl_session_timeout: "1d"
ssl_session_tickets: false
ssl_stapling: true
ssl_stapling_verify: true

# HSTS Configuration
hsts_enabled: true
hsts_max_age: 63072000  # 2 years
hsts_include_subdomains: true
hsts_preload: true

# Security Headers
security_headers:
  X-Frame-Options: "SAMEORIGIN"
  X-Content-Type-Options: "nosniff"
  X-XSS-Protection: "1; mode=block"
  Referrer-Policy: "strict-origin-when-cross-origin"
  Permissions-Policy: "geolocation=(), microphone=(), camera=()"
  Content-Security-Policy: "default-src 'self'; script-src 'self' 'unsafe-inline'; style-src 'self' 'unsafe-inline'; img-src 'self' data: https:; font-src 'self' data:; connect-src 'self'"

# Rate Limiting
rate_limiting_enabled: true
rate_limit_zone: "api"
rate_limit_requests: "10r/s"
rate_limit_burst: 20
rate_limit_nodelay: true

# Upstream Configuration
upstream_servers:
  - name: php-backend
    servers:
      - address: "127.0.0.1:9000"
        weight: 1
        max_fails: 3
        fail_timeout: 30s
    keepalive: 32
    keepalive_requests: 100
    keepalive_timeout: 60s

# Virtual Hosts
nginx_vhosts:
  - server_name: "{{ domain_name }}"
    listen: "443 ssl http2"
    root: "/var/www/html/public"
    index: "index.php index.html"
    ssl_certificate: "{{ ssl_certificate_path }}/fullchain.pem"
    ssl_certificate_key: "{{ ssl_certificate_path }}/privkey.pem"
    access_log: "/var/log/nginx/{{ domain_name }}-access.log main"
    error_log: "/var/log/nginx/{{ domain_name }}-error.log"
    extra_parameters: |
      # PHP-FPM Configuration
      location ~ \.php$ {
          try_files $uri =404;
          fastcgi_split_path_info ^(.+\.php)(/.+)$;
          fastcgi_pass php-backend;
          fastcgi_index index.php;
          fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
          include fastcgi_params;
          fastcgi_param HTTPS on;
          fastcgi_param HTTP_SCHEME https;
      }

      # API Rate Limiting
      location /api/ {
          limit_req zone={{ rate_limit_zone }} burst={{ rate_limit_burst }}{{ ' nodelay' if rate_limit_nodelay else '' }};
          try_files $uri $uri/ /index.php$is_args$args;
      }

      # Static Assets
      location ~* \.(js|css|png|jpg|jpeg|gif|ico|svg|woff|woff2|ttf|eot)$ {
          expires 1y;
          add_header Cache-Control "public, immutable";
          access_log off;
      }

      # Security
      location ~ /\.ht {
          deny all;
      }

      location ~ /\. {
          deny all;
      }

# HTTP to HTTPS redirect
nginx_redirect_vhost:
  server_name: "{{ domain_name }}"
  listen: "80"
  return: "301 https://$server_name$request_uri"

# Global Nginx Configuration
nginx_worker_processes: "{{ nginx_worker_processes | default('auto') }}"
nginx_worker_connections: "{{ nginx_worker_connections | default(1024) }}"
nginx_multi_accept: true
nginx_sendfile: true
nginx_tcp_nopush: true
nginx_tcp_nodelay: true
nginx_keepalive_timeout: 65
nginx_keepalive_requests: 100
nginx_server_tokens: false
nginx_client_max_body_size: "100M"
nginx_client_body_timeout: 60
nginx_client_header_timeout: 60
nginx_send_timeout: 60

# Logging Configuration
nginx_access_log_format: |
  '$remote_addr - $remote_user [$time_local] "$request" '
  '$status $body_bytes_sent "$http_referer" '
  '"$http_user_agent" "$http_x_forwarded_for" '
  '$request_time $upstream_response_time'

nginx_error_log_level: "{{ log_level | default('warn') }}"

# Gzip Configuration
nginx_gzip: true
nginx_gzip_vary: true
nginx_gzip_proxied: any
nginx_gzip_comp_level: 6
nginx_gzip_types:
  - text/plain
  - text/css
  - text/xml
  - text/javascript
  - application/javascript
  - application/json
  - application/xml+rss
  - application/atom+xml
  - image/svg+xml

# Cache Configuration
nginx_cache_enabled: true
nginx_cache_path: "/var/cache/nginx"
nginx_cache_levels: "1:2"
nginx_cache_keys_zone: "framework_cache:10m"
nginx_cache_max_size: "1g"
nginx_cache_inactive: "60m"
nginx_cache_use_temp_path: false

# Real IP Configuration
nginx_real_ip_header: "X-Forwarded-For"
nginx_set_real_ip_from:
  - "127.0.0.1"
  - "10.0.0.0/8"
  - "172.16.0.0/12"
  - "192.168.0.0/16"

# Let's Encrypt Configuration
letsencrypt_enabled: "{{ ssl_provider == 'letsencrypt' }}"
letsencrypt_email: "{{ ssl_email }}"
letsencrypt_domains:
  - "{{ domain_name }}"
letsencrypt_webroot_path: "/var/www/letsencrypt"
letsencrypt_renewal_cron: true
letsencrypt_renewal_user: root
letsencrypt_renewal_minute: "30"
letsencrypt_renewal_hour: "2"

# Monitoring and Status
nginx_status_enabled: "{{ monitoring_enabled | default(true) }}"
nginx_status_location: "/nginx_status"
nginx_status_allowed_ips:
  - "127.0.0.1"
  - "::1"
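For orientation, the `rate_limit_*` variables above are the usual inputs to nginx's `limit_req` machinery: a shared-memory zone declared once in the `http {}` context and then applied per location. With the defaults above, a template consuming them would render roughly the following (a sketch, not the role's actual `rate-limiting.conf.j2` output):

```nginx
# http {} context: one 10 MB zone named "api", keyed by client IP,
# refilling at 10 requests/second.
limit_req_zone $binary_remote_addr zone=api:10m rate=10r/s;

# Applied inside the vhost's /api/ location: allow bursts of up to
# 20 queued requests; "nodelay" serves burst requests immediately
# instead of pacing them.
location /api/ {
    limit_req zone=api burst=20 nodelay;
}
```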
@@ -1,53 +0,0 @@
---
# Nginx Proxy Role Handlers

- name: restart nginx
  service:
    name: "{{ nginx_service }}"
    state: restarted
  listen: restart nginx

- name: reload nginx
  service:
    name: "{{ nginx_service }}"
    state: reloaded
  listen: reload nginx

- name: start nginx
  service:
    name: "{{ nginx_service }}"
    state: started
    enabled: true
  listen: start nginx

- name: stop nginx
  service:
    name: "{{ nginx_service }}"
    state: stopped
  listen: stop nginx

- name: validate nginx config
  command: nginx -t
  register: nginx_config_test
  changed_when: false
  failed_when: nginx_config_test.rc != 0
  listen: validate nginx config

- name: reload systemd
  systemd:
    daemon_reload: true
  listen: reload systemd

- name: renew letsencrypt certificates
  command: certbot renew --quiet
  listen: renew letsencrypt certificates
  when: letsencrypt_enabled | bool

- name: update nginx status
  uri:
    # nginx_status_location already starts with "/", so no extra slash here
    url: "http://localhost{{ nginx_status_location }}"
    method: GET
    status_code: 200
  listen: update nginx status
  when: nginx_status_enabled | bool
  ignore_errors: true
@@ -1,31 +0,0 @@
---
galaxy_info:
  role_name: nginx-proxy
  author: Custom PHP Framework Team
  description: Nginx reverse proxy with SSL termination and security headers
  company: michaelschiemer.de
  license: MIT
  min_ansible_version: 2.12
  platforms:
    - name: Ubuntu
      versions:
        - "20.04"
        - "22.04"
        - "24.04"
    - name: Debian
      versions:
        - "11"
        - "12"
  galaxy_tags:
    - nginx
    - proxy
    - ssl
    - security
    - web
    - letsencrypt

dependencies: []

collections:
  - community.crypto
  - ansible.posix
@@ -1,144 +0,0 @@
---
# Nginx Main Configuration

- name: Backup original nginx.conf
  copy:
    src: /etc/nginx/nginx.conf
    dest: /etc/nginx/nginx.conf.backup
    remote_src: true
    owner: root
    group: root
    mode: '0644'
  ignore_errors: true
  tags:
    - nginx
    - config
    - backup

- name: Configure main nginx.conf
  template:
    src: nginx.conf.j2
    dest: /etc/nginx/nginx.conf
    owner: root
    group: root
    mode: '0644'
    backup: true
  notify: reload nginx
  tags:
    - nginx
    - config

- name: Configure upstream servers
  template:
    src: upstream.conf.j2
    dest: /etc/nginx/conf.d/upstream.conf
    owner: root
    group: root
    mode: '0644'
  notify: reload nginx
  tags:
    - nginx
    - upstream

- name: Configure security headers
  template:
    src: security-headers.conf.j2
    dest: /etc/nginx/conf.d/security-headers.conf
    owner: root
    group: root
    mode: '0644'
  notify: reload nginx
  tags:
    - nginx
    - security

- name: Configure SSL settings
  template:
    src: ssl-settings.conf.j2
    dest: /etc/nginx/conf.d/ssl-settings.conf
    owner: root
    group: root
    mode: '0644'
  notify: reload nginx
  tags:
    - nginx
    - ssl

- name: Configure gzip compression
  template:
    src: gzip.conf.j2
    dest: /etc/nginx/conf.d/gzip.conf
    owner: root
    group: root
    mode: '0644'
  notify: reload nginx
  tags:
    - nginx
    - compression

- name: Configure caching
  template:
    src: cache.conf.j2
    dest: /etc/nginx/conf.d/cache.conf
    owner: root
    group: root
    mode: '0644'
  when: nginx_cache_enabled | bool
  notify: reload nginx
  tags:
    - nginx
    - cache

- name: Configure real IP detection
  template:
    src: real-ip.conf.j2
    dest: /etc/nginx/conf.d/real-ip.conf
    owner: root
    group: root
    mode: '0644'
  notify: reload nginx
  tags:
    - nginx
    - real-ip

- name: Remove default site
  file:
    path: "{{ item }}"
    state: absent
  loop:
    - /etc/nginx/sites-enabled/default
    - /var/www/html/index.nginx-debian.html
  notify: reload nginx
  tags:
    - nginx
    - cleanup

- name: Create custom error pages
  template:
    src: "{{ item }}.html.j2"
    dest: "/var/www/html/{{ item }}.html"
    owner: "{{ nginx_user }}"
    group: "{{ nginx_group }}"
    mode: '0644'
  loop:
    - 403
    - 404
    - 500
    - 502
    - 503
    - 504
  tags:
    - nginx
    - error-pages

- name: Configure custom error pages
  template:
    src: error-pages.conf.j2
    dest: /etc/nginx/conf.d/error-pages.conf
    owner: root
    group: root
    mode: '0644'
  notify: reload nginx
  tags:
    - nginx
    - error-pages
@@ -1,86 +0,0 @@
---
# Nginx Installation

- name: Update package cache
  # apt rather than the generic package module: update_cache/cache_valid_time
  # are apt-specific options, and the supported platforms are Ubuntu/Debian
  apt:
    update_cache: true
    cache_valid_time: 3600
  tags:
    - nginx
    - packages

- name: Install Nginx and dependencies
  package:
    name:
      - "{{ nginx_package }}"
      - openssl
      - ca-certificates
    state: present
  tags:
    - nginx
    - packages

- name: Install Let's Encrypt client (Certbot)
  package:
    name:
      - certbot
      - python3-certbot-nginx
    state: present
  when: letsencrypt_enabled | bool
  tags:
    - nginx
    - ssl
    - letsencrypt

- name: Create Nginx directories
  file:
    path: "{{ item }}"
    state: directory
    owner: root
    group: root
    mode: '0755'
  loop:
    - /etc/nginx/sites-available
    - /etc/nginx/sites-enabled
    - /etc/nginx/conf.d
    - /var/log/nginx
    - "{{ nginx_cache_path }}"
    - /var/www/html
  tags:
    - nginx
    - directories

- name: Create Let's Encrypt webroot directory
  file:
    path: "{{ letsencrypt_webroot_path }}"
    state: directory
    owner: "{{ nginx_user }}"
    group: "{{ nginx_group }}"
    mode: '0755'
  when: letsencrypt_enabled | bool
  tags:
    - nginx
    - ssl
    - directories

- name: Set proper permissions on log directory
  file:
    path: /var/log/nginx
    state: directory
    owner: "{{ nginx_user }}"
    group: "{{ nginx_group }}"
    mode: '0755'
  tags:
    - nginx
    - permissions

- name: Ensure Nginx user exists
  user:
    name: "{{ nginx_user }}"
    system: true
    shell: /bin/false
    home: /var/cache/nginx
    create_home: false
  tags:
    - nginx
    - users
@@ -1,13 +0,0 @@
---
# Log Rotation Configuration for Nginx

- name: Configure nginx log rotation
  template:
    src: nginx-logrotate.j2
    dest: /etc/logrotate.d/nginx
    owner: root
    group: root
    mode: '0644'
  tags:
    - nginx
    - logging
@@ -1,65 +0,0 @@
---
# Nginx Proxy Role - Main Tasks

- name: Include OS-specific variables
  include_vars: "{{ ansible_os_family }}.yml"
  tags:
    - nginx
    - config

- name: Install Nginx and prerequisites
  include_tasks: install-nginx.yml
  tags:
    - nginx
    - install

- name: Configure Nginx
  include_tasks: configure-nginx.yml
  tags:
    - nginx
    - config

- name: Setup SSL certificates
  include_tasks: ssl-setup.yml
  tags:
    - nginx
    - ssl

- name: Configure security headers and hardening
  include_tasks: security-config.yml
  tags:
    - nginx
    - security

- name: Setup virtual hosts
  include_tasks: vhosts-config.yml
  tags:
    - nginx
    - vhosts

- name: Configure rate limiting
  include_tasks: rate-limiting.yml
  when: rate_limiting_enabled | bool
  tags:
    - nginx
    - security
    - rate-limit

- name: Setup monitoring and status
  include_tasks: monitoring.yml
  when: nginx_status_enabled | bool
  tags:
    - nginx
    - monitoring

- name: Configure log rotation
  include_tasks: log-rotation.yml
  tags:
    - nginx
    - logging

- name: Validate configuration and start services
  include_tasks: validation.yml
  tags:
    - nginx
    - validation
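Because every task file carries consistent tags, subsets of the role can be run or skipped in isolation with Ansible's standard tag flags; for example (playbook and inventory names are placeholders, not files from this repository):

```shell
# Re-run only SSL setup and validation
ansible-playbook -i inventories/production site.yml --tags "ssl,validation"

# Everything except the slow certificate tasks
ansible-playbook -i inventories/production site.yml --skip-tags "letsencrypt"
```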
@@ -1,14 +0,0 @@
---
# Monitoring Configuration for Nginx

- name: Configure Nginx status endpoint
  template:
    src: status.conf.j2
    dest: "{{ nginx_conf_d_path }}/status.conf"
    owner: root
    group: root
    mode: '0644'
  notify: restart nginx
  tags:
    - nginx
    - monitoring
@@ -1,15 +0,0 @@
---
# Rate Limiting Configuration for Nginx

- name: Create rate limiting configuration
  template:
    src: rate-limiting.conf.j2
    dest: "{{ nginx_conf_d_path }}/rate-limiting.conf"
    owner: root
    group: root
    mode: '0644'
  notify: restart nginx
  tags:
    - nginx
    - security
    - rate-limit
@@ -1,38 +0,0 @@
---
# Security Configuration for Nginx

- name: Create security headers configuration
  template:
    src: security-headers.conf.j2
    dest: "{{ nginx_conf_d_path }}/security-headers.conf"
    owner: root
    group: root
    mode: '0644'
  notify: restart nginx
  tags:
    - nginx
    - security
    - headers

- name: Configure SSL settings
  template:
    src: ssl-settings.conf.j2
    dest: "{{ nginx_conf_d_path }}/ssl-settings.conf"
    owner: root
    group: root
    mode: '0644'
  when: ssl_provider is defined
  notify: restart nginx
  tags:
    - nginx
    - ssl
    - security

- name: Remove default Nginx site
  file:
    path: "{{ nginx_sites_enabled_path }}/default"
    state: absent
  notify: restart nginx
  tags:
    - nginx
    - security
@@ -1,162 +0,0 @@
---
# SSL Certificate Setup

- name: Create SSL directories
  file:
    path: "{{ item }}"
    state: directory
    owner: root
    group: root
    mode: '0755'
  loop:
    - /etc/ssl/private
    - /etc/ssl/certs
    - "{{ ssl_certificate_path | dirname }}"
  tags:
    - nginx
    - ssl
    - directories

- name: Generate DH parameters for SSL
  openssl_dhparam:
    path: /etc/ssl/certs/dhparam.pem
    size: 2048
    owner: root
    group: root
    mode: '0644'
  tags:
    - nginx
    - ssl
    - dhparam

- name: Generate self-signed certificate for initial setup
  block:
    - name: Generate private key
      openssl_privatekey:
        path: /etc/ssl/private/{{ domain_name }}.key
        size: 2048
        type: RSA
        owner: root
        group: root
        mode: '0600'

    - name: Generate self-signed certificate
      openssl_certificate:
        path: /etc/ssl/certs/{{ domain_name }}.crt
        privatekey_path: /etc/ssl/private/{{ domain_name }}.key
        provider: selfsigned
        common_name: "{{ domain_name }}"
        subject_alt_name:
          - "DNS:{{ domain_name }}"
          - "DNS:www.{{ domain_name }}"
        owner: root
        group: root
        mode: '0644'
  when: ssl_provider == 'self-signed' or environment == 'development'
  tags:
    - nginx
    - ssl
    - self-signed

- name: Setup Let's Encrypt certificates
  block:
    - name: Check if certificates already exist
      stat:
        path: "{{ ssl_certificate_path }}/fullchain.pem"
      register: letsencrypt_cert

    - name: Create temporary Nginx config for Let's Encrypt
      template:
        src: nginx-letsencrypt-temp.conf.j2
        dest: /etc/nginx/sites-available/letsencrypt-temp
        owner: root
        group: root
        mode: '0644'
      when: not letsencrypt_cert.stat.exists

    - name: Enable temporary Nginx config
      file:
        src: /etc/nginx/sites-available/letsencrypt-temp
        dest: /etc/nginx/sites-enabled/letsencrypt-temp
        state: link
      when: not letsencrypt_cert.stat.exists
      notify: reload nginx

    - name: Start Nginx for Let's Encrypt validation
      service:
        name: "{{ nginx_service }}"
        state: started
        enabled: true
      when: not letsencrypt_cert.stat.exists

    - name: Obtain Let's Encrypt certificate
      command: >
        certbot certonly
        --webroot
        --webroot-path {{ letsencrypt_webroot_path }}
        --email {{ letsencrypt_email }}
        --agree-tos
        --non-interactive
        --expand
        {% for domain in letsencrypt_domains %}
        -d {{ domain }}
        {% endfor %}
      when: not letsencrypt_cert.stat.exists
      tags:
        - ssl
        - letsencrypt
        - certificate

    - name: Remove temporary Nginx config
      file:
        path: /etc/nginx/sites-enabled/letsencrypt-temp
        state: absent
      when: not letsencrypt_cert.stat.exists
      notify: reload nginx

    - name: Setup automatic certificate renewal
      cron:
        name: "Renew Let's Encrypt certificates"
        minute: "{{ letsencrypt_renewal_minute }}"
        hour: "{{ letsencrypt_renewal_hour }}"
        job: "certbot renew --quiet && systemctl reload nginx"
        user: "{{ letsencrypt_renewal_user }}"
      when: letsencrypt_renewal_cron | bool

  when: letsencrypt_enabled | bool and environment != 'development'
  tags:
    - nginx
    - ssl
    - letsencrypt

- name: Set up SSL certificate paths
  set_fact:
    ssl_cert_file: >-
      {%- if letsencrypt_enabled and environment != 'development' -%}
      {{ ssl_certificate_path }}/fullchain.pem
      {%- else -%}
      /etc/ssl/certs/{{ domain_name }}.crt
      {%- endif -%}
    ssl_key_file: >-
      {%- if letsencrypt_enabled and environment != 'development' -%}
      {{ ssl_certificate_path }}/privkey.pem
      {%- else -%}
      /etc/ssl/private/{{ domain_name }}.key
      {%- endif -%}
  tags:
    - nginx
    - ssl
    - config

- name: Verify SSL certificate files exist
  stat:
    path: "{{ item }}"
  register: ssl_files_check
  loop:
    - "{{ ssl_cert_file }}"
    - "{{ ssl_key_file }}"
  # with loop + register, the registered var holds the current item's
  # result while the loop runs, so this is evaluated per file
  failed_when: not ssl_files_check.stat.exists
  tags:
    - nginx
    - ssl
    - verification
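The Jinja loop in the `certbot certonly` task expands to one `-d` flag per entry in `letsencrypt_domains`. With the role defaults (a single domain) and placeholder values for the templated variables, the executed command would look roughly like:

```shell
certbot certonly \
  --webroot \
  --webroot-path /var/www/letsencrypt \
  --email admin@example.com \
  --agree-tos \
  --non-interactive \
  --expand \
  -d example.com
```

`--expand` lets a later run add domains to an existing certificate instead of failing, which is why the task can stay idempotent across inventory changes.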
@@ -1,37 +0,0 @@
---
# Validation and Service Management for Nginx

- name: Test nginx configuration
  command: nginx -t
  register: nginx_test
  changed_when: false
  tags:
    - nginx
    - validation

- name: Display nginx test results
  debug:
    msg: "Nginx configuration test: {{ 'PASSED' if nginx_test.rc == 0 else 'FAILED' }}"
  tags:
    - nginx
    - validation

- name: Start and enable nginx service
  systemd:
    name: nginx
    state: started
    enabled: true
    daemon_reload: true
  tags:
    - nginx
    - service

- name: Wait for nginx to be ready
  wait_for:
    port: 80
    host: "{{ ansible_default_ipv4.address | default('127.0.0.1') }}"
    delay: 2
    timeout: 30
  tags:
    - nginx
    - validation
@@ -1,50 +0,0 @@
---
# Virtual Hosts Configuration for Nginx

- name: Create virtual host configuration
  template:
    src: vhost.conf.j2
    dest: "{{ nginx_sites_available_path }}/{{ domain_name }}"
    owner: root
    group: root
    mode: '0644'
  notify: restart nginx
  tags:
    - nginx
    - vhosts

- name: Enable virtual host
  file:
    src: "{{ nginx_sites_available_path }}/{{ domain_name }}"
    dest: "{{ nginx_sites_enabled_path }}/{{ domain_name }}"
    state: link
  notify: restart nginx
  tags:
    - nginx
    - vhosts

- name: Create HTTP to HTTPS redirect configuration
  template:
    src: redirect-vhost.conf.j2
    dest: "{{ nginx_sites_available_path }}/{{ domain_name }}-redirect"
    owner: root
    group: root
    mode: '0644'
  when: ssl_provider is defined and environment != 'development'
  notify: restart nginx
  tags:
    - nginx
    - ssl
    - redirect

- name: Enable HTTP to HTTPS redirect
  file:
    src: "{{ nginx_sites_available_path }}/{{ domain_name }}-redirect"
    dest: "{{ nginx_sites_enabled_path }}/{{ domain_name }}-redirect"
    state: link
  when: ssl_provider is defined and environment != 'development'
  notify: restart nginx
  tags:
    - nginx
    - ssl
    - redirect
Some files were not shown because too many files have changed in this diff.