feat: CI/CD pipeline setup complete - Ansible playbooks updated, secrets configured, workflow ready

This commit is contained in:
2025-10-31 01:39:24 +01:00
parent 55c04e4fd0
commit e26eb2aa12
601 changed files with 44184 additions and 32477 deletions

View File

@@ -1,2 +0,0 @@
# Ignore local Ansible vault pass in infrastructure directory
.vault_pass

View File

@@ -1,554 +0,0 @@
# Production Deployment Analysis & Fix Strategy
**Date**: 2025-10-27
**Status**: CRITICAL - Production website returning HTTP 500 errors
**Root Cause**: Database connection configuration error (DB_PORT mismatch)
---
## 1. Complete Deployment Flow Analysis
### Deployment Architecture
The project uses a **release-based deployment pattern** with shared configuration:
```
/home/deploy/michaelschiemer/
├── releases/
│   ├── 1761566515/        # Current release (timestamped)
│   ├── 1761565432/        # Previous releases
│   └── ...
├── shared/
│   └── .env.production    # Shared configuration file
└── current -> releases/1761566515/   # Symlink to active release
```
**Key Characteristics**:
- **Releases Directory**: Each deployment creates a new timestamped release
- **Shared Directory**: Configuration files persist across deployments
- **Current Symlink**: Points to the active release
- **Symlink Chain**: `current/.env.production` → `shared/.env.production` → Used by application
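On the server, the chain can be verified end to end with `readlink -f`; the sketch below rebuilds the same layout in a temporary directory (all paths are stand-ins for the real ones):

```shell
# Recreate the releases/shared/current layout in a scratch dir and resolve
# the chain. On the real server you would run readlink -f against
# /home/deploy/michaelschiemer/current/.env.production instead.
base=$(mktemp -d)
mkdir -p "$base/releases/1761566515" "$base/shared"
touch "$base/shared/.env.production"
ln -s "releases/1761566515" "$base/current"
ln -s "../../shared/.env.production" "$base/releases/1761566515/.env.production"
# Both hops resolve to the single shared file:
readlink -f "$base/current/.env.production"
```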
### .env File Sources (3 Different Files Identified)
#### 1. Root Directory: `/home/michael/dev/michaelschiemer/.env.production`
- **Size**: 2.9K
- **Checksum**: 9f33068713432c1dc4008724dc6923b0
- **DB_PORT**: 5432 (CORRECT for PostgreSQL)
- **DB_USERNAME**: mdb_user (with underscore)
- **DB_PASSWORD**: Qo2KNgGqeYksEhKr57pgugakxlothn8J
- **Purpose**: Framework default configuration
- **Status**: CORRECT database configuration
#### 2. Deployment Directory: `/home/michael/dev/michaelschiemer/deployment/applications/environments/.env.production`
- **Size**: 4.3K
- **Checksum**: b516bf86beed813df03a30f655687b72
- **DB_PORT**: 5432 (CORRECT for PostgreSQL)
- **DB_USERNAME**: mdb_user (with underscore)
- **DB_PASSWORD**: Qo2KNgGqeYksEhKr57pgugakxlothn8J
- **Purpose**: Application-specific production configuration
- **Status**: CORRECT and MORE COMPLETE (includes Redis, Queue, Mail, Monitoring configs)
#### 3. Production Server: `/home/deploy/michaelschiemer/shared/.env.production`
- **Size**: 3.0K (modified Oct 26 20:56)
- **Line 15**: `DB_PORT=3306` (WRONG - MySQL port instead of PostgreSQL)
- **Line 67**: `DB_PORT=` (duplicate empty entry)
- **DB_USERNAME**: mdb-user (with hyphen - likely wrong)
- **DB_PASSWORD**: StartSimple2024! (different from local configs)
- **Status**: CORRUPTED - Wrong database configuration causing HTTP 500 errors
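Configuration drift of this kind is quickest to spot by diffing only the `DB_*` lines of two files. A minimal sketch with stand-in files (substitute the three real paths when comparing for real):

```shell
# Stand-in files; replace with the real .env.production paths in practice.
good=$(mktemp); bad=$(mktemp)
printf 'APP_ENV=production\nDB_PORT=5432\nDB_USERNAME=mdb_user\n' > "$good"
printf 'APP_ENV=production\nDB_PORT=3306\nDB_USERNAME=mdb-user\n' > "$bad"
grep '^DB_' "$good" > "$good.db"
grep '^DB_' "$bad"  > "$bad.db"
# diff exits non-zero when the DB_* lines differ, which is the signal we want.
if diff -u "$good.db" "$bad.db" > /dev/null; then
  drift=no
else
  drift=yes   # 5432 vs 3306 and mdb_user vs mdb-user show up here
fi
echo "drift: $drift"
```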
### Deployment Playbook Flow
**File**: `/home/michael/dev/michaelschiemer/deployment/infrastructure/playbooks/deploy-rsync-based.yml`
**Critical Configuration**:
```yaml
local_project_path: "{{ playbook_dir }}/../../.."   # 3 dirs up = /home/michael/dev/michaelschiemer
shared_files:
  - .env.production        # Marked as SHARED file
rsync_excludes:
  - .env
  - .env.local
  - .env.development
```
**Deployment Steps**:
1. **Rsync files** from `{{ local_project_path }}` (framework root) to release directory
- Excludes: `.env`, `.env.local`, `.env.development`
- Includes: `.env.production` from root directory
2. **Create release directory**: `/home/deploy/michaelschiemer/releases/{{ timestamp }}`
3. **Copy files** to release directory
4. **Create symlinks**:
- `release/.env.production` → `../../shared/.env.production`
- `release/.env` → `../../shared/.env.production`
5. **Update current** symlink → latest release
6. **Restart containers** via docker-compose
**CRITICAL ISSUE IDENTIFIED**:
The playbook does NOT have a task to initially copy `.env.production` to `shared/.env.production`. It only creates symlinks assuming the file already exists. This means:
- Initial setup requires MANUAL copy of `.env.production` to `shared/`
- Updates to `.env.production` require MANUAL sync to production server
- The rsync'd `.env.production` in release directory is IGNORED (symlink overrides it)
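A cheap guard against this gap is a pre-flight check that the shared file exists before any symlinks are created. A sketch (the default path is taken from the layout above; the `SHARED_ENV` override is an assumption for illustration):

```shell
# Fail fast if the shared file the symlinks will point at does not exist yet.
shared_env="${SHARED_ENV:-/home/deploy/michaelschiemer/shared/.env.production}"
if [ -f "$shared_env" ]; then
  status="OK: $shared_env exists"
else
  status="MISSING: $shared_env (copy .env.production into shared/ before the first deploy)"
fi
echo "$status"
```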
---
## 2. Production Server .env Status
### Current State (BROKEN)
```bash
# /home/deploy/michaelschiemer/shared/.env.production
Line 15: DB_PORT=3306 # WRONG - MySQL port (should be 5432 for PostgreSQL)
Line 67: DB_PORT= # Duplicate empty entry
DB_USERNAME=mdb-user # Wrong format (should be mdb_user with underscore)
DB_PASSWORD=StartSimple2024! # Wrong password (doesn't match local configs)
```
### Container Status
```
CONTAINER      STATUS                       ISSUE
php            Up 27 minutes (healthy)      -
db             Up 40 minutes (healthy)      PostgreSQL running on port 5432
redis          Up 40 minutes (healthy)      -
web            Up 40 minutes (UNHEALTHY)    Nginx cannot connect to PHP due to DB error
queue-worker   Restarting (1) 4s ago        PHP crashing due to DB connection error
```
### Error Pattern
- **HTTP 500** on all requests (/, /impressum, etc.)
- **Root Cause**: PHP application cannot connect to database because:
1. `DB_PORT=3306` (MySQL) instead of `5432` (PostgreSQL)
2. Wrong username format (`mdb-user` vs `mdb_user`)
3. Wrong password
- **Impact**: All PHP processes fail to initialize → Nginx returns 500
---
## 3. Deployment Command Documentation
### WORKING Commands (Current Playbook)
#### Deploy via Ansible Playbook
```bash
cd /home/michael/dev/michaelschiemer/deployment/infrastructure
# Full production deployment
ansible-playbook \
-i inventories/production/hosts.yml \
playbooks/deploy-rsync-based.yml \
--vault-password-file .vault_pass
# With specific variables
ansible-playbook \
-i inventories/production/hosts.yml \
playbooks/deploy-rsync-based.yml \
--vault-password-file .vault_pass \
-e "deployment_branch=main"
```
#### Check Production Status
```bash
# Check containers
ansible web_servers \
-i inventories/production/hosts.yml \
-m shell -a "docker ps -a" \
--vault-password-file .vault_pass
# Check .env configuration
ansible web_servers \
-i inventories/production/hosts.yml \
-m shell -a "cat /home/deploy/michaelschiemer/shared/.env.production" \
--vault-password-file .vault_pass
# Check application logs
ansible web_servers \
-i inventories/production/hosts.yml \
-m shell -a "docker logs web --tail 50" \
--vault-password-file .vault_pass
```
### COMMANDS TO CREATE (User Requirements)
#### 1. Simple Manual Deploy Script
```bash
#!/bin/bash
# File: /home/michael/dev/michaelschiemer/deployment/infrastructure/scripts/deploy.sh
set -e
cd "$(dirname "$0")/.."
echo "🚀 Deploying to production..."
ansible-playbook \
-i inventories/production/hosts.yml \
playbooks/deploy-rsync-based.yml \
--vault-password-file .vault_pass
echo "✅ Deployment complete!"
echo "🔍 Check status: docker ps"
```
#### 2. .env Update Script
```bash
#!/bin/bash
# File: /home/michael/dev/michaelschiemer/deployment/infrastructure/scripts/update-env.sh
set -e
cd "$(dirname "$0")/../.."
SOURCE_ENV="deployment/applications/environments/.env.production"
REMOTE_PATH="/home/deploy/michaelschiemer/shared/.env.production"
if [[ ! -f "$SOURCE_ENV" ]]; then
echo "❌ Source .env.production not found at: $SOURCE_ENV"
exit 1
fi
echo "📤 Uploading .env.production to production server..."
ansible web_servers \
-i deployment/infrastructure/inventories/production/hosts.yml \
-m copy \
-a "src=$SOURCE_ENV dest=$REMOTE_PATH mode=0644" \
--vault-password-file deployment/infrastructure/.vault_pass
echo "🔄 Restarting containers..."
ansible web_servers \
-i deployment/infrastructure/inventories/production/hosts.yml \
-m shell \
-a "cd /home/deploy/michaelschiemer/current && docker compose restart php web queue-worker" \
--vault-password-file deployment/infrastructure/.vault_pass
echo "✅ .env.production updated and containers restarted!"
```
#### 3. Quick Production Sync
```bash
#!/bin/bash
# File: /home/michael/dev/michaelschiemer/deployment/infrastructure/scripts/quick-sync.sh
set -e
cd "$(dirname "$0")/../.."
# Sync code changes (no .env update)
rsync -avz \
--exclude '.env' \
--exclude '.env.local' \
--exclude 'node_modules/' \
--exclude '.git/' \
./ deploy@94.16.110.151:/home/deploy/michaelschiemer/current/
# Restart containers
ansible web_servers \
-i deployment/infrastructure/inventories/production/hosts.yml \
-m shell \
-a "cd /home/deploy/michaelschiemer/current && docker compose restart php web" \
--vault-password-file deployment/infrastructure/.vault_pass
echo "✅ Quick sync complete!"
```
### SCRIPTS TO REMOVE (Unused/Deprecated)
1. **`/home/michael/dev/michaelschiemer/deploy.sh`** (if exists in root)
- Reason: Conflicting with playbook-based deployment
2. **`/home/michael/dev/michaelschiemer/.env.local`** (if exists)
- Reason: Not used in production, causes confusion
3. **Duplicate .env files** in root:
- Keep: `.env.production` (source of truth for framework defaults)
- Remove: `.env.backup.*`, `.env.old`, etc.
---
## 4. Fix Strategy (Step-by-Step)
### IMMEDIATE FIX (Restore Production)
#### Step 1: Update Production .env.production File
```bash
cd /home/michael/dev/michaelschiemer
# Copy correct .env.production to production server
ansible web_servers \
-i deployment/infrastructure/inventories/production/hosts.yml \
-m copy \
-a "src=deployment/applications/environments/.env.production dest=/home/deploy/michaelschiemer/shared/.env.production mode=0644" \
--vault-password-file deployment/infrastructure/.vault_pass
```
**Why this file?**
- Most complete configuration (4.3K vs 2.9K)
- Includes Redis, Queue, Mail, Monitoring configs
- Correct DB_PORT=5432
- Correct DB credentials
#### Step 2: Verify .env.production on Server
```bash
ansible web_servers \
-i deployment/infrastructure/inventories/production/hosts.yml \
-m shell \
-a "grep -E '(DB_PORT|DB_USERNAME|DB_PASSWORD)' /home/deploy/michaelschiemer/shared/.env.production" \
--vault-password-file deployment/infrastructure/.vault_pass
```
**Expected Output**:
```
DB_PORT=5432
DB_USERNAME=mdb_user
DB_PASSWORD=Qo2KNgGqeYksEhKr57pgugakxlothn8J
```
#### Step 3: Restart Containers
```bash
ansible web_servers \
-i deployment/infrastructure/inventories/production/hosts.yml \
-m shell \
-a "cd /home/deploy/michaelschiemer/current && docker compose restart php web queue-worker" \
--vault-password-file deployment/infrastructure/.vault_pass
```
#### Step 4: Verify Website Functionality
```bash
# Check HTTP status
curl -I https://michaelschiemer.de
# Expected: HTTP/2 200 (instead of 500)
# Check container health
ansible web_servers \
-i deployment/infrastructure/inventories/production/hosts.yml \
-m shell \
-a "docker ps | grep -E '(web|php|queue-worker)'" \
--vault-password-file deployment/infrastructure/.vault_pass
```
**Expected**: All containers should be "Up" and "healthy"
### LONG-TERM FIX (Prevent Future Issues)
#### 1. Update Playbook to Sync .env.production
Add task to `deploy-rsync-based.yml`:
```yaml
# After "Synchronize project files" task, add:
- name: Sync .env.production to shared directory
copy:
src: "{{ local_project_path }}/deployment/applications/environments/.env.production"
dest: "{{ project_path }}/shared/.env.production"
mode: '0644'
when: sync_env_to_shared | default(true)
tags:
- deploy
- config
```
#### 2. Create Helper Scripts
Create the 3 scripts documented in section 3:
- `scripts/deploy.sh` - Simple wrapper for playbook
- `scripts/update-env.sh` - Update .env.production only
- `scripts/quick-sync.sh` - Quick code sync without full deployment
#### 3. Establish Source of Truth
**Decision**: Use `deployment/applications/environments/.env.production` as source of truth
- Most complete configuration
- Application-specific settings
- Includes all production services
**Action**: Document in README.md:
```markdown
## Production Configuration
**Source of Truth**: `deployment/applications/environments/.env.production`
To update production .env:
1. Edit `deployment/applications/environments/.env.production`
2. Run `./deployment/infrastructure/scripts/update-env.sh`
3. Containers will auto-restart with new config
```
#### 4. Add .env Validation
Create pre-deployment validation script:
```bash
#!/bin/bash
# scripts/validate-env.sh
ENV_FILE="deployment/applications/environments/.env.production"
echo "🔍 Validating .env.production..."
# Check required variables
REQUIRED_VARS=(
"DB_DRIVER"
"DB_HOST"
"DB_PORT"
"DB_DATABASE"
"DB_USERNAME"
"DB_PASSWORD"
)
for var in "${REQUIRED_VARS[@]}"; do
if ! grep -q "^${var}=" "$ENV_FILE"; then
echo "❌ Missing required variable: $var"
exit 1
fi
done
# Check PostgreSQL port
if ! grep -q "^DB_PORT=5432" "$ENV_FILE"; then
echo "⚠️ Warning: DB_PORT should be 5432 for PostgreSQL"
fi
echo "✅ .env.production validation passed"
```
---
## 5. Cleanup Recommendations
### Files to Remove
#### In Framework Root (`/home/michael/dev/michaelschiemer/`)
```bash
# List files to remove
find . -maxdepth 1 -name ".env.backup*" -o -name ".env.old*" -o -name ".env.local"
# Remove after confirmation
rm -f .env.backup* .env.old* .env.local
```
#### In Deployment Directory
```bash
# Check for duplicate/old deployment scripts
find deployment/ -name "deploy-old.yml" -o -name "*.backup"
```
#### On Production Server
```bash
# Clean up old releases (keep last 5)
ansible web_servers \
-i deployment/infrastructure/inventories/production/hosts.yml \
-m shell \
-a "cd /home/deploy/michaelschiemer/releases && ls -t | tail -n +6 | xargs rm -rf" \
--vault-password-file deployment/infrastructure/.vault_pass
# Remove duplicate .env files in current release
ansible web_servers \
-i deployment/infrastructure/inventories/production/hosts.yml \
-m shell \
-a "cd /home/deploy/michaelschiemer/current && rm -f .env.backup* .env.old*" \
--vault-password-file deployment/infrastructure/.vault_pass
```
### Configuration to Keep
**Essential Files**:
- `/.env.production` - Framework defaults (keep for reference)
- `/deployment/applications/environments/.env.production` - Source of truth
- `/deployment/infrastructure/playbooks/deploy-rsync-based.yml` - Main playbook
- `/deployment/infrastructure/inventories/production/hosts.yml` - Inventory
**Symlinks (Do Not Remove)**:
- `/home/deploy/michaelschiemer/current/.env.production` → `shared/.env.production`
- `/home/deploy/michaelschiemer/current/.env` → `shared/.env.production`
---
## 6. Post-Fix Verification Checklist
```bash
# 1. Website accessible
curl -I https://michaelschiemer.de
# Expected: HTTP/2 200
# 2. All containers healthy
ansible web_servers \
-i deployment/infrastructure/inventories/production/hosts.yml \
-m shell -a "docker ps" \
--vault-password-file deployment/infrastructure/.vault_pass
# Expected: All "Up" and "(healthy)"
# 3. Database connection working
ansible web_servers \
-i deployment/infrastructure/inventories/production/hosts.yml \
-m shell -a "docker exec php php -r \"new PDO('pgsql:host=db;port=5432;dbname=michaelschiemer', 'mdb_user', 'Qo2KNgGqeYksEhKr57pgugakxlothn8J');\"" \
--vault-password-file deployment/infrastructure/.vault_pass
# Expected: No errors
# 4. Application logs clean
ansible web_servers \
-i deployment/infrastructure/inventories/production/hosts.yml \
-m shell -a "docker logs web --tail 20" \
--vault-password-file deployment/infrastructure/.vault_pass
# Expected: HTTP 200 responses, no 500 errors
# 5. Queue worker stable
ansible web_servers \
-i deployment/infrastructure/inventories/production/hosts.yml \
-m shell -a "docker ps | grep queue-worker" \
--vault-password-file deployment/infrastructure/.vault_pass
# Expected: "Up" status (not "Restarting")
```
---
## 7. Future Deployment Best Practices
1. **Always validate .env before deployment**
- Run `scripts/validate-env.sh` pre-deployment
- Check DB_PORT=5432 for PostgreSQL
- Verify credentials match database server
2. **Use playbook for all deployments**
- Consistent process
- Automated rollback capability
- Proper symlink management
3. **Monitor container health post-deployment**
- Check `docker ps` output
- Verify all containers "(healthy)"
- Check application logs for errors
4. **Keep .env.production in sync**
- Single source of truth: `deployment/applications/environments/.env.production`
- Use `update-env.sh` script for updates
- Never manually edit on production server
5. **Regular backups**
- Backup `shared/.env.production` before changes
- Keep last 5 releases for quick rollback
- Document any manual production changes
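The backup in point 5 can be a single timestamped copy. A sketch against a scratch file (on the server the target would be `shared/.env.production`):

```shell
# Timestamped copy next to the original before any edit; the mktemp file
# stands in for /home/deploy/michaelschiemer/shared/.env.production.
env_file=$(mktemp)
echo "DB_PORT=5432" > "$env_file"
backup="$env_file.bak.$(date +%Y%m%d-%H%M%S)"
cp -p "$env_file" "$backup"
echo "backup written: $backup"
```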
---
## Summary
**Current Status**: Production broken due to DB_PORT configuration error
**Root Cause**: Manual edits to `shared/.env.production` with wrong PostgreSQL port
**Fix Time**: ~5 minutes (copy correct .env + restart containers)
**Prevention**: Automated .env sync in playbook + validation scripts
**Next Steps**:
1. Execute Step 1-4 of Fix Strategy (IMMEDIATE)
2. Verify website returns HTTP 200
3. Implement long-term fixes (playbook updates, scripts)
4. Document deployment process in README.md

View File

@@ -1,286 +0,0 @@
# Production Deployment Fix Summary
**Date**: 2025-10-27
**Status**: PARTIALLY FIXED - DB configuration corrected, but additional issues remain
---
## What Was Fixed
### 1. Database Configuration Corrected ✅
**Problem**: Wrong DB_PORT in production `.env.production`
- Line 15: `DB_PORT=3306` (MySQL port)
- Line 67: `DB_PORT=` (duplicate empty entry)
- Wrong username: `mdb-user` (should be `mdb_user`)
- Wrong password
**Solution Applied**:
```bash
# Copied correct .env.production from source of truth
ansible web_servers -m copy \
-a "src=deployment/applications/environments/.env.production \
dest=/home/deploy/michaelschiemer/shared/.env.production" \
--vault-password-file deployment/infrastructure/.vault_pass
```
**Verification**:
```bash
DB_PORT=5432 # ✅ Correct
DB_USERNAME=mdb_user # ✅ Correct
DB_PASSWORD=Qo2KNgGqeYksEhKr57pgugakxlothn8J # ✅ Correct
```
### 2. Containers Restarted ✅
```bash
docker compose restart php web queue-worker
```
**Current Status**:
- **php**: Up 6 minutes (healthy) ✅
- **db**: Up 53 minutes (healthy) ✅
- **redis**: Up 53 minutes (healthy) ✅
- **web**: Up 6 minutes (UNHEALTHY) ⚠️
- **queue-worker**: Restarting (1) ❌
---
## Remaining Issues
### Issue 1: Web Container Unhealthy ⚠️
**Symptom**: Website still returns HTTP 500
**Possible Causes**:
1. **PHP-FPM not responding** - Web container can't connect to PHP
2. **Application error** - PHP code failing during bootstrap
3. **Missing files** - Application files not properly deployed
4. **Permissions** - Web server can't access application files
**Next Steps to Diagnose**:
```bash
# Check if PHP-FPM is reachable from the web container (note: PHP-FPM speaks
# FastCGI, not HTTP, so curl reports a protocol error even when the port is open)
docker exec web curl http://php:9000
# Check Nginx configuration
docker exec web nginx -t
# Check web container health check
docker inspect web --format='{{json .State.Health}}' | jq
# Check if application files exist
docker exec web ls -la /var/www/html/public/index.php
```
### Issue 2: Queue Worker Crashing ❌
**Symptom**: Continuous restart loop
**Possible Causes**:
1. **Same DB connection issue** (should be fixed now)
2. **Missing queue configuration**
3. **Redis connection issue**
4. **Application code error in queue worker**
**Next Steps to Diagnose**:
```bash
# Check queue-worker logs
docker logs queue-worker --tail 100
# Try running queue worker manually
docker exec php php artisan queue:work --tries=1 --once
```
---
## Scripts Created ✅
### 1. Simple Deployment Script
**Location**: `/home/michael/dev/michaelschiemer/deployment/infrastructure/scripts/deploy.sh`
```bash
./deployment/infrastructure/scripts/deploy.sh
```
### 2. .env Update Script
**Location**: `/home/michael/dev/michaelschiemer/deployment/infrastructure/scripts/update-env.sh`
```bash
./deployment/infrastructure/scripts/update-env.sh
```
### 3. Quick Sync Script
**Location**: `/home/michael/dev/michaelschiemer/deployment/infrastructure/scripts/quick-sync.sh`
```bash
./deployment/infrastructure/scripts/quick-sync.sh
```
**Note**: All scripts updated to use `docker compose` (v2) instead of `docker-compose` (v1)
---
## Documentation Created ✅
### Comprehensive Deployment Analysis
**Location**: `/home/michael/dev/michaelschiemer/deployment/infrastructure/DEPLOYMENT_ANALYSIS.md`
**Contents**:
1. Complete deployment flow analysis
2. .env file sources and conflicts
3. Deployment command documentation
4. Step-by-step fix strategy
5. Cleanup recommendations
6. Post-fix verification checklist
---
## Recommended Next Actions
### Immediate (To Fix HTTP 500)
1. **Check Application Bootstrap**:
```bash
# Test if PHP application can start
ansible web_servers -m shell \
-a "docker exec php php /var/www/html/public/index.php" \
--vault-password-file deployment/infrastructure/.vault_pass
```
2. **Check Nginx-PHP Connection**:
```bash
# Test PHP-FPM TCP reachability (FastCGI, so curl shows a protocol error even when the port is open)
ansible web_servers -m shell \
-a "docker exec web curl -v http://php:9000" \
--vault-password-file deployment/infrastructure/.vault_pass
```
3. **Check Application Logs**:
```bash
# Look for PHP errors
ansible web_servers -m shell \
-a "docker exec php ls -la /var/www/html/storage/logs/" \
--vault-password-file deployment/infrastructure/.vault_pass
```
4. **Verify File Permissions**:
```bash
# Check if web server can read files
ansible web_servers -m shell \
-a "docker exec web ls -la /var/www/html/public/" \
--vault-password-file deployment/infrastructure/.vault_pass
```
### Short-Term (Within 24h)
1. **Fix Web Container Health** - Resolve HTTP 500 errors
2. **Fix Queue Worker** - Stop crash loop
3. **Full Deployment Test** - Run complete deployment playbook
4. **Verify All Services** - Ensure all containers healthy
### Long-Term (This Week)
1. **Update Playbook** - Add .env.production sync task
2. **Add Validation** - Pre-deployment .env validation script
3. **Document Process** - Update README with deployment guide
4. **Setup Monitoring** - Add health check alerts
5. **Cleanup Old Files** - Remove duplicate .env files
---
## Key Learnings
### 1. Deployment Flow Issues
**Problem**: Playbook doesn't sync `.env.production` to `shared/`
**Impact**: Manual updates required for configuration changes
**Solution**: Add sync task to playbook
### 2. Multiple .env Sources
**Problem**: 3 different `.env.production` files with conflicting content
**Resolution**: Use `deployment/applications/environments/.env.production` as source of truth
### 3. Docker Compose Version
**Problem**: Production uses Docker Compose v2 (`docker compose`)
**Impact**: Scripts using v1 syntax (`docker-compose`) fail
**Solution**: All scripts updated to v2 syntax
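One way to keep scripts working in both environments is to detect the available command rather than hard-code either syntax. A sketch (`detect_compose` is a helper name invented here):

```shell
# Prefer the v2 plugin ("docker compose"), fall back to the v1 binary,
# and report "none" when neither is installed.
detect_compose() {
  if docker compose version >/dev/null 2>&1; then
    echo "docker compose"
  elif command -v docker-compose >/dev/null 2>&1; then
    echo "docker-compose"
  else
    echo "none"
  fi
}
dc=$(detect_compose)
echo "compose command: $dc"
# Usage in a script: $dc restart php web queue-worker
```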
### 4. Symlink Chain Complexity
**Structure**:
```
current/.env → shared/.env.production
current/.env.production → shared/.env.production
```
**Risk**: If `shared/.env.production` is wrong, ALL releases break
**Mitigation**: Validate before deploy, backup before changes
---
## Quick Reference
### Check Production Status
```bash
cd /home/michael/dev/michaelschiemer/deployment/infrastructure
# Container status
ansible web_servers -i inventories/production/hosts.yml \
-m shell -a "docker ps" --vault-password-file .vault_pass
# .env configuration
ansible web_servers -i inventories/production/hosts.yml \
-m shell -a "cat /home/deploy/michaelschiemer/shared/.env.production" \
--vault-password-file .vault_pass
# Application logs
ansible web_servers -i inventories/production/hosts.yml \
-m shell -a "docker logs web --tail 50" --vault-password-file .vault_pass
```
### Deploy to Production
```bash
# Full deployment
./deployment/infrastructure/scripts/deploy.sh
# Update .env only
./deployment/infrastructure/scripts/update-env.sh
# Quick code sync
./deployment/infrastructure/scripts/quick-sync.sh
```
### Emergency Rollback
```bash
# List releases
ansible web_servers -i inventories/production/hosts.yml \
-m shell -a "ls -la /home/deploy/michaelschiemer/releases/" \
--vault-password-file .vault_pass
# Switch to previous release
ansible web_servers -i inventories/production/hosts.yml \
-m shell -a "ln -sfn /home/deploy/michaelschiemer/releases/PREVIOUS_TIMESTAMP \
/home/deploy/michaelschiemer/current" \
--vault-password-file .vault_pass
# Restart containers
ansible web_servers -i inventories/production/hosts.yml \
-m shell -a "cd /home/deploy/michaelschiemer/current && docker compose restart" \
--vault-password-file .vault_pass
```
---
## Support Contacts
**Documentation**:
- Deployment Analysis: `deployment/infrastructure/DEPLOYMENT_ANALYSIS.md`
- This Summary: `deployment/infrastructure/DEPLOYMENT_FIX_SUMMARY.md`
**Scripts**:
- All scripts in: `deployment/infrastructure/scripts/`
- Make executable: `chmod +x deployment/infrastructure/scripts/*.sh`
**Configuration**:
- Source of Truth: `deployment/applications/environments/.env.production`
- Production File: `/home/deploy/michaelschiemer/shared/.env.production`

View File

@@ -1,319 +0,0 @@
# Custom PHP Framework - Infrastructure Automation
Modern, secure Ansible infrastructure automation for the Custom PHP Framework with PHP 8.4 optimization.
## 🏗️ Architecture Overview
### Security-First Design
- **SSH Hardening**: Secure SSH configuration with key-based authentication
- **Firewall Protection**: UFW firewall with fail2ban intrusion detection
- **SSL/TLS**: Let's Encrypt certificates with modern cipher suites
- **Security Headers**: Comprehensive HTTP security headers
- **System Hardening**: Kernel parameters, audit logging, and security monitoring
### Docker-Optimized Runtime
- **PHP 8.4**: Optimized Docker containers with custom PHP configuration
- **Security Profiles**: AppArmor and seccomp security profiles
- **Resource Limits**: Memory and CPU constraints for production workloads
- **Health Checks**: Automated container health monitoring
### Production-Ready Infrastructure
- **Environment Separation**: Development, staging, and production configurations
- **Monitoring**: System health checks and performance monitoring
- **Backup System**: Automated backup with encryption and retention policies
- **Log Management**: Centralized logging with rotation and monitoring
## 🚀 Quick Start
### Prerequisites
```bash
# Install Ansible
pip install ansible
# Install required collections
ansible-galaxy collection install community.general
ansible-galaxy collection install community.crypto
ansible-galaxy collection install community.docker
```
### Initial Setup
1. **Configure Ansible Vault**:
```bash
cd deployment/infrastructure
echo "your_vault_password" > .vault_pass
chmod 600 .vault_pass
# Encrypt sensitive variables
ansible-vault encrypt group_vars/all/vault.yml
```
2. **Update Inventory**:
- Edit `inventories/production/hosts.yml` with your server details
- Update domain and SSL email configuration
3. **Deploy Infrastructure**:
```bash
# Production deployment
ansible-playbook -i inventories/production site.yml
# Staging deployment
ansible-playbook -i inventories/staging site.yml
```
## 📁 Directory Structure
```
deployment/infrastructure/
├── ansible.cfg # Ansible configuration
├── site.yml # Main deployment playbook
├── inventories/ # Environment-specific inventory
│ ├── production/
│ ├── staging/
│ └── development/
├── group_vars/ # Global variables
│ └── all/
├── roles/ # Ansible roles
│ ├── base-security/ # Security hardening
│ ├── docker-runtime/ # Docker with PHP 8.4
│ ├── nginx-proxy/ # Nginx reverse proxy
│ └── monitoring/ # Health monitoring
└── playbooks/ # Additional playbooks
```
## 🔒 Security Features
### SSH Hardening
- Key-based authentication only
- Strong cipher suites and key exchange algorithms
- Connection rate limiting
- Security banners and access logging
### Firewall Configuration
- Default deny policy with specific allow rules
- Rate limiting for SSH connections
- Protection for Docker containers
- Environment-specific rule sets
### SSL/TLS Security
- Let's Encrypt certificates with auto-renewal
- Modern TLS protocols (1.2, 1.3)
- HSTS with preloading
- OCSP stapling enabled
### Application Security
- Security headers (CSP, HSTS, X-Frame-Options)
- Rate limiting for API endpoints
- Input validation and sanitization
- OWASP security compliance
## 🐳 Docker Configuration
### PHP 8.4 Optimization
- Custom PHP 8.4 container with security hardening
- OPcache configuration for production performance
- Memory and execution time limits
- Extension management for framework requirements
### Container Security
- Non-root user execution
- Read-only root filesystem where possible
- Security profiles (AppArmor, seccomp)
- Resource constraints and health checks
### Network Security
- Custom bridge networks with isolation
- No inter-container communication by default
- Encrypted internal communication
- External access controls
## 📊 Monitoring & Health Checks
### System Monitoring
- CPU, memory, and disk usage monitoring
- Load average and process monitoring
- Network and I/O performance tracking
- Automated alerting for threshold breaches
### Application Health Checks
- HTTP endpoint monitoring
- Database connectivity checks
- Framework-specific health validation
- Container health verification
### Log Management
- Centralized log collection and rotation
- Error pattern detection and alerting
- Security event logging and monitoring
- Performance metrics collection
## 🔧 Environment Configuration
### Production Environment
- High security settings with strict firewall
- Performance optimizations enabled
- Comprehensive monitoring and alerting
- Daily automated backups
### Staging Environment
- Relaxed security for testing
- Debug mode enabled
- Basic monitoring
- Weekly backups
### Development Environment
- Minimal security restrictions
- Full debugging capabilities
- No production optimizations
- No automated backups
## 📋 Deployment Playbooks
### Main Infrastructure (`site.yml`)
Deploys complete infrastructure stack:
- Base security hardening
- Docker runtime environment
- Nginx reverse proxy with SSL
- System monitoring and health checks
### Application Deployment (`playbooks/deploy-application.yml`)
Handles application-specific deployment:
- Code deployment from Git repository
- Dependency installation (Composer, NPM)
- Database migrations
- Asset compilation and optimization
- Service restarts and health verification
## 🛠️ Management Commands
### Infrastructure Management
```bash
# Deploy to production
ansible-playbook -i inventories/production site.yml
# Deploy specific role
ansible-playbook -i inventories/production site.yml --tags security
# Run health checks
ansible-playbook -i inventories/production site.yml --tags verification
# Update SSL certificates
ansible-playbook -i inventories/production site.yml --tags ssl
```
### Application Management
```bash
# Deploy application code
ansible-playbook -i inventories/production playbooks/deploy-application.yml
# Deploy specific branch
ansible-playbook -i inventories/production playbooks/deploy-application.yml -e deploy_branch=feature/new-feature
```
### Security Operations
```bash
# Security audit
ansible-playbook -i inventories/production site.yml --tags audit
# Update security configurations
ansible-playbook -i inventories/production site.yml --tags security
# Restart security services
ansible-playbook -i inventories/production site.yml --tags security,restart
```
## 🔐 Ansible Vault Usage
### Encrypting Secrets
```bash
# Encrypt vault file
ansible-vault encrypt group_vars/all/vault.yml
# Edit encrypted file
ansible-vault edit group_vars/all/vault.yml
# View encrypted file
ansible-vault view group_vars/all/vault.yml
```
### Running Playbooks with Vault
```bash
# Using vault password file (configured in ansible.cfg)
ansible-playbook site.yml
# Prompt for vault password
ansible-playbook site.yml --ask-vault-pass
# Using vault password file explicitly
ansible-playbook site.yml --vault-password-file .vault_pass
```
## 📝 Customization
### Adding Custom Roles
1. Create role directory structure
2. Define role metadata in `meta/main.yml`
3. Add role to main playbook
4. Test in development environment
### Environment-Specific Variables
- Update inventory files for environment-specific settings
- Modify group variables for global changes
- Use vault files for sensitive information
### SSL Certificate Management
- Let's Encrypt: Automatic certificate generation and renewal
- Self-signed: For development and testing environments
- Custom certificates: Place in appropriate directories
## 🚨 Troubleshooting
### Common Issues
**SSH Connection Failures**:
- Verify SSH key configuration
- Check firewall rules and fail2ban status
- Ensure user has proper sudo privileges
**SSL Certificate Problems**:
- Verify DNS resolution for domain
- Check Let's Encrypt rate limits
- Ensure port 80 is accessible for validation
**Docker Container Issues**:
- Check Docker daemon status and logs
- Verify image build and pull permissions
- Review container resource limits
**Performance Problems**:
- Monitor system resources and logs
- Check application and database performance
- Review caching and optimization settings
### Getting Help
For issues specific to the Custom PHP Framework infrastructure:
1. Check Ansible logs in `/var/log/ansible.log`
2. Review system logs for specific services
3. Use the monitoring dashboard for system health
4. Contact the development team at kontakt@michaelschiemer.de
## 📄 License
This infrastructure automation is part of the Custom PHP Framework project.
Licensed under MIT License - see LICENSE file for details.
## 🤝 Contributing
1. Fork the repository
2. Create a feature branch
3. Test changes in development environment
4. Submit a pull request with detailed description
---
**Domain**: michaelschiemer.de
**Environment**: Production-ready with PHP 8.4 optimization
**Security**: Enterprise-grade hardening and monitoring
**Maintainer**: kontakt@michaelschiemer.de
@@ -1,71 +0,0 @@
[defaults]
# Ansible Configuration for Custom PHP Framework Infrastructure
inventory = inventories/production/hosts.yml
roles_path = roles
host_key_checking = False
gathering = smart
fact_caching = jsonfile
fact_caching_connection = /tmp/ansible_facts_cache
fact_caching_timeout = 3600
# Performance optimizations
pipelining = True
forks = 5
strategy = linear
# strategy_plugins = ~/.ansible/plugins/strategy:~/dev/mitogen/ansible_mitogen/plugins/strategy
# Logging and output
log_path = logs/ansible.log
stdout_callback = yaml
stderr_callback = yaml
bin_ansible_callbacks = True
verbosity = 1
# Security settings - Vault password via environment or prompt (disabled for testing)
ask_vault_pass = False
# vault_encrypt_identity = vault@michaelschiemer.de
# vault_identity_list = vault@michaelschiemer.de
# Connection settings
timeout = 60
remote_user = deploy
private_key_file = ~/.ssh/deploy_key
ansible_ssh_common_args = -o StrictHostKeyChecking=yes -o UserKnownHostsFile=~/.ssh/known_hosts -o ControlMaster=auto -o ControlPersist=60s
# Privilege escalation
become = True
become_method = sudo
become_user = root
become_ask_pass = False
become_exe = sudo
[inventory]
enable_plugins = host_list, script, auto, yaml, ini, toml
[ssh_connection]
ssh_args = -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=yes
control_path = ~/.ssh/ansible-%%h-%%p-%%r
retries = 3
pipelining = True
scp_if_ssh = smart
transfer_method = smart
[persistent_connection]
connect_timeout = 30
command_timeout = 30
[galaxy]
server_list = galaxy, community_galaxy
ignore_certs = False
[galaxy_server.galaxy]
url = https://galaxy.ansible.com/
# Token should be set via environment: ANSIBLE_GALAXY_TOKEN
[galaxy_server.community_galaxy]
url = https://galaxy.ansible.com/
# Token should be set via environment: ANSIBLE_GALAXY_TOKEN
[diff]
context = 3
always = False
@@ -1,17 +0,0 @@
#!/bin/bash
# Quick deployment script with force flag
# Usage: ./deploy.sh
cd "$(dirname "$0")"
echo "🚀 Starting deployment to production..."
echo ""
ansible-playbook \
-i inventories/production/hosts.yml \
playbooks/deploy-rsync-based.yml \
--vault-password-file .vault_pass \
--extra-vars 'force_deploy=true'
echo ""
echo "✅ Deployment completed!"
@@ -1,164 +0,0 @@
---
# Environment-specific variable mappings
# These variables change behavior based on the environment
# Environment Detection
environment_config:
production:
debug_enabled: false
log_level: "error"
cache_enabled: true
minify_assets: true
ssl_required: true
monitoring_level: "full"
backup_frequency: "daily"
staging:
debug_enabled: true
log_level: "info"
cache_enabled: true
minify_assets: false
ssl_required: true
monitoring_level: "basic"
backup_frequency: "weekly"
development:
debug_enabled: true
log_level: "debug"
cache_enabled: false
minify_assets: false
ssl_required: false
monitoring_level: "minimal"
backup_frequency: "never"
# Environment-specific PHP configuration
php_config:
production:
display_errors: "Off"
display_startup_errors: "Off"
error_reporting: "E_ALL & ~E_DEPRECATED & ~E_STRICT"
log_errors: "On"
memory_limit: "512M"
max_execution_time: 30
opcache_validate_timestamps: 0
opcache_revalidate_freq: 0
staging:
display_errors: "On"
display_startup_errors: "On"
error_reporting: "E_ALL"
log_errors: "On"
memory_limit: "256M"
max_execution_time: 60
opcache_validate_timestamps: 1
opcache_revalidate_freq: 2
development:
display_errors: "On"
display_startup_errors: "On"
error_reporting: "E_ALL"
log_errors: "On"
memory_limit: "1G"
max_execution_time: 0
opcache_validate_timestamps: 1
opcache_revalidate_freq: 0
# Environment-specific database configuration
database_config:
production:
query_cache: true
slow_query_log: true
long_query_time: 2
max_connections: 200
innodb_buffer_pool_size: "1G"
staging:
query_cache: true
slow_query_log: true
long_query_time: 5
max_connections: 100
innodb_buffer_pool_size: "512M"
development:
query_cache: false
slow_query_log: false
long_query_time: 10
max_connections: 50
innodb_buffer_pool_size: "128M"
# Environment-specific security settings
security_config:
production:
firewall_strict: true
rate_limiting: true
brute_force_protection: true
ssl_only: true
hsts_enabled: true
security_headers: "strict"
fail2ban_enabled: true
staging:
firewall_strict: false
rate_limiting: true
brute_force_protection: true
ssl_only: true
hsts_enabled: false
security_headers: "standard"
fail2ban_enabled: true
development:
firewall_strict: false
rate_limiting: false
brute_force_protection: false
ssl_only: false
hsts_enabled: false
security_headers: "minimal"
fail2ban_enabled: false
# Environment-specific monitoring configuration
monitoring_config:
production:
health_check_interval: 30
metric_collection_interval: 60
log_level: "warn"
alert_on_errors: true
performance_monitoring: true
staging:
health_check_interval: 60
metric_collection_interval: 300
log_level: "info"
alert_on_errors: false
performance_monitoring: true
development:
health_check_interval: 300
metric_collection_interval: 600
log_level: "debug"
alert_on_errors: false
performance_monitoring: false
# Environment-specific caching configuration
cache_config:
production:
driver: "redis"
default_ttl: 3600
prefix: "prod_"
staging:
driver: "redis"
default_ttl: 1800
prefix: "staging_"
development:
driver: "file"
default_ttl: 300
prefix: "dev_"
# Current environment configuration (set by inventory)
current_config: "{{ environment_config[environment] }}"
current_php_config: "{{ php_config[environment] }}"
current_database_config: "{{ database_config[environment] }}"
current_security_config: "{{ security_config[environment] }}"
current_monitoring_config: "{{ monitoring_config[environment] }}"
current_cache_config: "{{ cache_config[environment] }}"
@@ -1,157 +0,0 @@
---
# Global Variables for Container-based PHP Framework Infrastructure
# These variables are shared across all environments
# Project Information
project_name: "michaelschiemer"
container_image: "{{ container_registry | default('docker.io') }}/{{ image_repository | default('michaelschiemer/php-framework') }}"
maintainer_email: "kontakt@michaelschiemer.de"
# Framework Configuration
framework:
name: "custom-php-framework"
version: "1.0.0"
php_version: "8.4"
environment: "{{ environment }}"
debug_mode: "{{ debug_mode | default(false) }}"
container_based: true
build_on_server: false
# Common Package Lists
common_packages:
- curl
- wget
- unzip
- git
- htop
- vim
- nano
- rsync
- screen
- tmux
security_packages:
- fail2ban
- ufw
- rkhunter
- chkrootkit
- lynis
- unattended-upgrades
- apt-listchanges
# Timezone and Locale
timezone: "Europe/Berlin"
locale: "en_US.UTF-8"
# User Management
system_users:
- name: deploy
groups:
- sudo
- docker
shell: /bin/bash
home: /home/deploy
create_home: true
# Directory Structure
app_directories:
- /var/www/html
- /var/www/backups
- /var/log/applications
- /home/deploy/.docker
- /home/deploy/scripts
# File Permissions
default_file_permissions:
web_root: "0755"
config_files: "0644"
scripts: "0755"
logs: "0755"
private_keys: "0600"
public_keys: "0644"
# Backup Configuration
backup_settings:
enabled: "{{ BACKUP_ENABLED | default(true) | bool }}"
retention_days: "{{ BACKUP_RETENTION_DAYS | default(30) }}"
schedule: "0 2 * * *" # Daily at 2 AM
compression: true
encryption: true
remote_storage: "{{ S3_BACKUP_ENABLED | default(false) | bool }}"
# Log Rotation
log_rotation:
rotate_count: 52 # Keep 52 weeks
rotate_when: weekly
compress: true
compress_delay: 1
missing_ok: true
not_if_empty: true
# Network Configuration
network:
ipv6_enabled: false
firewall_default_policy: deny
allowed_ssh_networks:
- "0.0.0.0/0" # Restrict this in production
# Docker Defaults
docker_defaults:
restart_policy: "always"
log_driver: "json-file"
log_options:
max-size: "10m"
max-file: "3"
networks:
- framework-network
security_opts:
- no-new-privileges:true
pull_policy: "always"
build_policy: "never"
# Performance Tuning
performance:
swappiness: 10
max_open_files: 65536
max_processes: 4096
# Monitoring Defaults
monitoring_defaults:
check_interval: 300 # 5 minutes
alert_threshold_cpu: 80
alert_threshold_memory: 85
alert_threshold_disk: 90
log_retention_days: 30
# SSL Defaults
ssl_defaults:
key_size: 2048
protocols:
- "TLSv1.2"
- "TLSv1.3"
cipher_suite: "ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384"
# Container Runtime Defaults
container_defaults:
php_version: "8.4"
pull_timeout: 300
deploy_timeout: 600
health_check_timeout: 30
health_check_interval: 10
health_check_retries: 15
# Database Defaults
database_defaults:
engine: mysql
version: "8.0"
charset: utf8mb4
collation: utf8mb4_unicode_ci
max_connections: 100
innodb_buffer_pool_size: "128M"
# Application Defaults
app_defaults:
session_lifetime: 7200 # 2 hours
cache_driver: redis
queue_driver: redis
mail_driver: smtp
@@ -1,96 +0,0 @@
---
# Encrypted Variables (Ansible Vault)
# These variables contain sensitive information and should be encrypted
# Database Credentials
vault_mysql_root_password: "super_secure_root_password_change_me"
vault_mysql_user_password: "secure_user_password_change_me"
vault_mysql_replication_password: "secure_replication_password_change_me"
# Application Secrets
vault_app_key: "base64:CHANGE_THIS_TO_A_REAL_32_CHARACTER_SECRET_KEY"
vault_jwt_secret: "CHANGE_THIS_TO_A_REAL_JWT_SECRET_KEY"
vault_encryption_key: "CHANGE_THIS_TO_A_REAL_ENCRYPTION_KEY"
# Redis Password
vault_redis_password: "secure_redis_password_change_me"
# SMTP Configuration
vault_smtp_host: "smtp.example.com"
vault_smtp_port: 587
vault_smtp_username: "noreply@michaelschiemer.de"
vault_smtp_password: "smtp_password_change_me"
vault_smtp_encryption: "tls"
# Third-party API Keys
vault_api_keys:
stripe_secret: "sk_test_CHANGE_THIS_TO_REAL_STRIPE_SECRET"
paypal_client_id: "CHANGE_THIS_TO_REAL_PAYPAL_CLIENT_ID"
paypal_client_secret: "CHANGE_THIS_TO_REAL_PAYPAL_SECRET"
google_analytics: "GA_TRACKING_ID"
recaptcha_site_key: "RECAPTCHA_SITE_KEY"
recaptcha_secret_key: "RECAPTCHA_SECRET_KEY"
# OAuth Configuration
vault_oauth:
google:
client_id: "GOOGLE_CLIENT_ID"
client_secret: "GOOGLE_CLIENT_SECRET"
github:
client_id: "GITHUB_CLIENT_ID"
client_secret: "GITHUB_CLIENT_SECRET"
# Backup Encryption
vault_backup_encryption_key: "CHANGE_THIS_TO_A_REAL_BACKUP_ENCRYPTION_KEY"
# Monitoring Secrets
vault_monitoring:
slack_webhook: "https://hooks.slack.com/services/YOUR/SLACK/WEBHOOK"
pagerduty_key: "PAGERDUTY_INTEGRATION_KEY"
# Docker Registry Credentials
vault_docker_registry:
username: "registry_username"
password: "registry_password"
email: "kontakt@michaelschiemer.de"
# SSH Keys (base64 encoded)
vault_ssh_keys:
deploy_private_key: |
-----BEGIN OPENSSH PRIVATE KEY-----
# CHANGE THIS TO YOUR ACTUAL DEPLOY KEY
-----END OPENSSH PRIVATE KEY-----
deploy_public_key: "ssh-rsa AAAAB3NzaC1yc2E... deploy@michaelschiemer.de"
# SSL Certificate Passwords
vault_ssl_passwords:
private_key_passphrase: "ssl_private_key_passphrase"
p12_password: "ssl_p12_password"
# Security Tokens
vault_security_tokens:
csrf_secret: "CHANGE_THIS_TO_A_REAL_CSRF_SECRET"
api_token_secret: "CHANGE_THIS_TO_A_REAL_API_TOKEN_SECRET"
session_secret: "CHANGE_THIS_TO_A_REAL_SESSION_SECRET"
# External Service Credentials
vault_external_services:
cloudflare_api_token: "CLOUDFLARE_API_TOKEN"
aws_access_key: "AWS_ACCESS_KEY_ID"
aws_secret_key: "AWS_SECRET_ACCESS_KEY"
# Feature Flags and Secrets
vault_features:
enable_debug_mode: false
enable_profiler: false
enable_maintenance_mode: false
# Environment Specific Secrets
vault_environment_secrets:
production:
sentry_dsn: "https://YOUR_SENTRY_DSN@sentry.io/PROJECT_ID"
newrelic_license: "NEWRELIC_LICENSE_KEY"
staging:
sentry_dsn: "https://YOUR_STAGING_SENTRY_DSN@sentry.io/PROJECT_ID"
development:
debug_token: "DEBUG_TOKEN_FOR_DEVELOPMENT"
@@ -1,68 +0,0 @@
---
# Development Inventory for Custom PHP Framework
# Local development environment
all:
vars:
# Environment configuration
environment: development
domain_name: localhost
app_name: custom-php-framework
# SSL Configuration (self-signed for dev)
ssl_email: kontakt@michaelschiemer.de
ssl_provider: self-signed
# PHP Configuration
php_version: "8.4"
php_fpm_version: "8.4"
# Security settings (minimal for dev)
security_level: low
firewall_strict_mode: false
fail2ban_enabled: false
# Docker configuration
docker_edition: ce
docker_version: "latest"
docker_compose_version: "2.20.0"
# Monitoring (disabled for dev)
monitoring_enabled: false
health_checks_enabled: false
# Backup configuration (disabled)
backup_enabled: false
backup_retention_days: 0
children:
web_servers:
hosts:
localhost:
ansible_connection: local
ansible_host: 127.0.0.1
ansible_user: "{{ ansible_env.USER }}"
server_role: development
# Service configuration (minimal)
nginx_worker_processes: 1
nginx_worker_connections: 256
nginx_port: 443
# PHP-FPM configuration (minimal)
php_fpm_pm_max_children: 10
php_fpm_pm_start_servers: 2
php_fpm_pm_min_spare_servers: 1
php_fpm_pm_max_spare_servers: 5
# Docker resource limits (minimal)
docker_memory_limit: 2g
docker_cpu_limit: 1.0
vars:
# Web server specific vars
nginx_enabled: true
ssl_certificate_path: /etc/ssl/certs/localhost
log_level: debug
debug_mode: true
xdebug_enabled: true
@@ -1,64 +0,0 @@
---
# Production Inventory for michaelschiemer.de
# Container-based PHP Framework Infrastructure
all:
vars:
# Environment configuration
environment: production
project_name: michaelschiemer
domain_name: michaelschiemer.de
# Container configuration
container_registry: docker.io
image_repository: michaelschiemer/php-framework
# SSL Configuration
ssl_email: kontakt@michaelschiemer.de
ssl_provider: letsencrypt
# Security settings
security_level: high
firewall_strict_mode: true
fail2ban_enabled: true
# Docker configuration
docker_edition: ce
docker_version: "24.0"
# Monitoring
monitoring_enabled: true
health_checks_enabled: true
# Backup configuration - parameterized from CI
backup_enabled: "{{ BACKUP_ENABLED | default(true) | bool }}"
backup_retention_days: "{{ BACKUP_RETENTION_DAYS | default(30) }}"
# CDN configuration
cdn_update: "{{ CDN_UPDATE | default(false) | bool }}"
children:
web_servers:
hosts:
michaelschiemer-prod-web-01:
ansible_host: 94.16.110.151
ansible_user: deploy
ansible_ssh_private_key_file: ~/.ssh/production
server_role: primary
# Server specifications
cpu_cores: 4
memory_gb: 8
disk_gb: 80
# Production resource limits
max_containers: 10
docker_memory_limit: 6g
docker_cpu_limit: 3.5
vars:
# Production environment variables
log_level: warning
deploy_timeout: 300
health_check_retries: 15
rollback_enabled: true
@@ -1,73 +0,0 @@
---
# Staging Inventory for Custom PHP Framework
# Test environment for michaelschiemer.de
all:
vars:
# Environment configuration
environment: staging
domain_name: staging.michaelschiemer.de
app_name: custom-php-framework
# SSL Configuration
ssl_email: kontakt@michaelschiemer.de
ssl_provider: letsencrypt
# PHP Configuration
php_version: "8.4"
php_fpm_version: "8.4"
# Security settings (more relaxed for testing)
security_level: medium
firewall_strict_mode: false
fail2ban_enabled: true
# Docker configuration
docker_edition: ce
docker_version: "latest"
docker_compose_version: "2.20.0"
# Monitoring (basic for staging)
monitoring_enabled: true
health_checks_enabled: true
# Backup configuration (minimal)
backup_enabled: false
backup_retention_days: 7
children:
web_servers:
hosts:
michaelschiemer-staging-web-01:
# Can use same server with different ports/containers
ansible_host: 94.16.110.151
ansible_user: deploy
ansible_ssh_private_key_file: ~/.ssh/id_rsa_deploy
server_role: staging
# Server specifications (shared with prod)
cpu_cores: 2
memory_gb: 4
disk_gb: 40
# Service configuration (reduced for staging)
nginx_worker_processes: 2
nginx_worker_connections: 512
nginx_port: 8080
# PHP-FPM configuration (reduced)
php_fpm_pm_max_children: 20
php_fpm_pm_start_servers: 3
php_fpm_pm_min_spare_servers: 2
php_fpm_pm_max_spare_servers: 10
# Docker resource limits (reduced)
docker_memory_limit: 3g
docker_cpu_limit: 1.5
vars:
# Web server specific vars
nginx_enabled: true
ssl_certificate_path: /etc/letsencrypt/live/{{ domain_name }}
log_level: info
debug_mode: true
@@ -1,14 +0,0 @@
#!/bin/bash
# Quick script to show PHP logs from production server
# Usage: ./logs.sh [lines]
# Default: 50 lines
LINES="${1:-50}"
echo "📋 Showing last $LINES lines of PHP logs from production..."
echo ""
ssh -i ~/.ssh/production deploy@michaelschiemer.de "docker logs php --tail $LINES"
echo ""
echo "✅ Done!"
@@ -1,14 +0,0 @@
#!/bin/bash
# Show Nginx error logs from production server
# Usage: ./nginx-logs.sh [lines]
# Default: 50 lines
LINES="${1:-50}"
echo "📋 Showing last $LINES lines of Nginx error logs from production..."
echo ""
ssh -i ~/.ssh/production deploy@michaelschiemer.de "docker exec web tail -n $LINES /var/log/nginx/error.log"
echo ""
echo "✅ Done!"
@@ -1,239 +0,0 @@
# Git-Based Deployment with Gitea
## Overview
The Git-based deployment playbook (`deploy-git-based.yml`) enables zero-downtime deployments with Gitea as the Git repository server.
## Prerequisites
### 1. Gitea Server Setup
The Gitea server must be reachable from the production server. There are two options:
#### Option A: Publicly Reachable Gitea Server (Recommended for Production)
```bash
# Gitea must be reachable over the internet
git_repo: "git@git.michaelschiemer.de:michael/michaelschiemer.git"
```
**Required**:
- Public IP or domain for Gitea
- Firewall rule for port 2222 (SSH)
- SSL/TLS for the web interface (port 9443/3000)
#### Option B: Gitea on the Production Server
```bash
# Gitea runs on the same server as the application
git_repo: "git@localhost:michael/michaelschiemer.git"
```
**Required**:
- Deploy a Gitea container on the production server
- Docker Compose setup on the production server
- Local SSH configuration
### 2. SSH Key Setup
The deploy user on the production server needs an SSH key:
```bash
# On the production server
ssh-keygen -t ed25519 -C "deployment@michaelschiemer" -f ~/.ssh/gitea_deploy_key -N ""
# Add the public key to Gitea (via web UI or API)
cat ~/.ssh/gitea_deploy_key.pub
```
### 3. SSH Keys in the Secrets Directory
The SSH keys must live in the `deployment/infrastructure/secrets/` directory:
```bash
deployment/infrastructure/secrets/
├── .gitignore            # Protects keys from accidental commits
├── gitea_deploy_key      # Private key
└── gitea_deploy_key.pub  # Public key
```
**IMPORTANT**: The `secrets/` directory is protected via `.gitignore` and must NEVER be committed!
## Deployment Flow
### 1. Copy the SSH Key to the Production Server
The playbook automatically copies the SSH keys from `secrets/` to the production server:
```yaml
- name: Copy Gitea deploy SSH private key
copy:
src: "{{ playbook_dir }}/../secrets/gitea_deploy_key"
dest: "/home/{{ app_user }}/.ssh/gitea_deploy_key"
mode: '0600'
```
### 2. SSH Configuration
The playbook automatically creates the SSH configuration:
```ssh
Host localhost
HostName localhost
Port 2222
User git
IdentityFile ~/.ssh/gitea_deploy_key
StrictHostKeyChecking no
UserKnownHostsFile /dev/null
Host git.michaelschiemer.de
HostName git.michaelschiemer.de
Port 2222
User git
IdentityFile ~/.ssh/gitea_deploy_key
StrictHostKeyChecking no
UserKnownHostsFile /dev/null
```
### 3. Git Clone
The playbook clones the repository into a release directory:
```bash
/var/www/michaelschiemer/
├── releases/
│   ├── 1761524417/          # Timestamp-based releases
│   └── v1.0.0/              # Tag-based releases
├── shared/ # Shared Directories (symlinked)
│ ├── storage/
│ └── .env.production
└── current -> releases/1761524417  # Symlink to the active release
```
### 4. Zero-Downtime Deployment
- The new release is cloned
- Dependencies are installed
- Symlinks are created
- The `current` symlink is switched atomically
- A health check is performed
- On failure: automatic rollback
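The symlink switch at the heart of this process can be sketched as an Ansible task (a simplified illustration; the `release_name` variable is an assumption based on the release layout shown above):

```yaml
# Sketch: point the current symlink at the new release
- name: Switch current symlink to the new release
  file:
    src: "/var/www/michaelschiemer/releases/{{ release_name }}"
    dest: "/var/www/michaelschiemer/current"
    state: link
    force: yes
```

Note that fully atomic switches typically create a temporary symlink and rename it over `current` with `mv -T`; the `file` module shown here replaces the link in place, which is usually sufficient when no request is served mid-switch.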
## Running a Deployment
### Standard Deployment (main branch)
```bash
cd deployment/infrastructure
ansible-playbook -i inventories/production/hosts.yml playbooks/deploy-git-based.yml
```
### Tag-Based Deployment
```bash
ansible-playbook -i inventories/production/hosts.yml playbooks/deploy-git-based.yml \
--extra-vars "release_tag=v1.0.0"
```
### Custom Branch Deployment
```bash
ansible-playbook -i inventories/production/hosts.yml playbooks/deploy-git-based.yml \
--extra-vars "git_branch=develop"
```
## Adjusting the Configuration
### Changing the Git Repository URL
In `deploy-git-based.yml`:
```yaml
vars:
git_repo: "git@git.michaelschiemer.de:michael/michaelschiemer.git"
  # Or for local testing:
# git_repo: "git@localhost:michael/michaelschiemer.git"
```
### Adjusting Shared Directories
```yaml
vars:
shared_dirs:
- storage/logs
- storage/cache
- storage/sessions
- storage/uploads
- public/uploads
shared_files:
- .env.production
```
## Troubleshooting
### Error: "Connection refused" to Gitea
**Problem**: The production server cannot reach Gitea.
**Solution**:
1. Check whether Gitea is publicly reachable: `nc -zv git.michaelschiemer.de 2222`
2. Check the firewall rules on the Gitea server
3. For local testing: use the rsync-based deployment instead
### Error: "Permission denied (publickey)"
**Problem**: The SSH key is not configured correctly.
**Solution**:
1. Check whether the public key was added in Gitea
2. Check the SSH key permissions: `chmod 600 ~/.ssh/gitea_deploy_key`
3. Test the SSH connection manually: `ssh -p 2222 -i ~/.ssh/gitea_deploy_key git@git.michaelschiemer.de`
### Health Check Fails
**Problem**: The deployment health check failed.
**Solution**:
1. An automatic rollback has already been performed
2. Check the logs: `tail -f /var/www/michaelschiemer/deploy.log`
3. Check the application logs: `/var/www/michaelschiemer/shared/storage/logs/`
## Comparison: Git-Based vs. rsync-Based Deployment
### Git-Based Deployment (this playbook)
**Advantages**:
- Zero downtime via symlink switch
- Atomic releases with rollback capability
- Git history on the production server
- Easy rollbacks to previous releases
**Disadvantages**:
- The Gitea server must be reachable
- Additional infrastructure (Gitea)
- SSH key management required
### rsync-Based Deployment
**Advantages**:
- No additional infrastructure
- Works with the local development environment
- Faster for small changes
**Disadvantages**:
- No zero downtime without additional logic
- No Git history on the server
- More complicated rollbacks
## Recommendation
**For production**: Git-based deployment with a publicly reachable Gitea server
**For development/testing**: rsync-based deployment (already implemented and tested)
## Related Files
- `deploy-git-based.yml` - Git-based deployment playbook
- `deploy-rsync-based.yml` - rsync-based deployment playbook (alternative)
- `rollback-git-based.yml` - Rollback playbook for Git deployments
- `secrets/.gitignore` - Protection for the SSH keys
@@ -1,652 +0,0 @@
# Rsync-Based Deployment
**Production-ready zero-downtime deployment** with rsync, release management, and automatic rollback.
## Overview
The rsync-based deployment playbook (`deploy-rsync-based.yml`) provides a robust solution for production deployments without external Git server dependencies.
**Advantages**:
- ✅ Zero downtime via symlink switch
- ✅ Automatic rollback on health check failure
- ✅ Git tag-based release management
- ✅ No Gitea/GitHub dependency
- ✅ Fast for small changes
- ✅ Easy rollback to previous releases
## Deployment Architecture
### Release Structure
```
/home/deploy/michaelschiemer/
├── releases/
│ ├── 1761499893/ # Timestamp-based releases
│ ├── v1.0.0/ # Git tag-based releases
│ └── v1.2.3/
├── shared/                    # Shared between releases
│ ├── storage/
│ │ └── sessions/
│ ├── public/
│ │ └── uploads/
│ └── .env.production # Shared config
├── current -> releases/v1.2.3 # Symlink to the active release
└── deploy.log # Deployment history
```
### Zero-Downtime Process
```
1. Build assets (local)
2. Rsync to new release directory
3. Create symlinks to shared directories
4. Start Docker containers
5. Health check (3 retries)
   ├─ Success → Switch 'current' symlink (atomic)
   └─ Failure → Rollback to previous release
6. Clean up old releases (keep last 5)
```
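The cleanup in step 6 could be implemented roughly like this (an illustrative sketch; the retention count of five mirrors the process above, and the paths are assumptions):

```yaml
# Sketch: keep only the newest five releases
- name: List releases, newest first
  command: ls -1t /home/deploy/michaelschiemer/releases
  register: release_list
  changed_when: false

- name: Remove releases beyond the retention window
  file:
    path: "/home/deploy/michaelschiemer/releases/{{ item }}"
    state: absent
  loop: "{{ release_list.stdout_lines[5:] }}"
```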
## Prerequisites
### 1. SSH Key Setup
SSH keys for the production server must be configured:
```bash
# SSH config in ~/.ssh/config
Host michaelschiemer-prod
HostName 94.16.110.151
User deploy
IdentityFile ~/.ssh/production
StrictHostKeyChecking no
```
### 2. Production Server Requirements
- **User**: `deploy` user with sudo privileges
- **Docker**: Docker and Docker Compose installed
- **Directory**: `/home/deploy/michaelschiemer` with correct permissions
### 3. Local Development Setup
- **Composer**: for `composer install`
- **NPM**: for `npm run build`
- **Git**: for tag-based release management (optional)
- **Ansible**: Ansible ≥2.13 installed
## Deployment Workflows
### Standard Deployment (Timestamp-Based)
Deploys the current state without a Git tag:
```bash
cd deployment/infrastructure
ansible-playbook -i inventories/production/hosts.yml playbooks/deploy-rsync-based.yml
```
**Release Name**: Unix timestamp (e.g. `1761499893`)
### Tagged Release Deployment (Recommended)
Deploys a specific Git tag:
```bash
# Option 1: Specify the tag explicitly
ansible-playbook -i inventories/production/hosts.yml playbooks/deploy-rsync-based.yml \
--extra-vars "release_tag=v1.2.3"
# Option 2: Use the current Git tag (auto-detected)
git tag v1.2.3
git push origin v1.2.3
ansible-playbook -i inventories/production/hosts.yml playbooks/deploy-rsync-based.yml
```
**Release Name**: Git tag (e.g. `v1.2.3`)
### Force Deployment (Override Lock)
If a deployment lock exists:
```bash
ansible-playbook -i inventories/production/hosts.yml playbooks/deploy-rsync-based.yml \
--extra-vars "force_deploy=true"
```
## Release Management
### Git Tag Workflow
**Semantic versioning** is recommended:
```bash
# 1. Create Git tag
git tag -a v1.2.3 -m "Release v1.2.3: Feature XYZ"
git push origin v1.2.3
# 2. Deploy tagged release
ansible-playbook -i inventories/production/hosts.yml playbooks/deploy-rsync-based.yml \
--extra-vars "release_tag=v1.2.3"
# 3. Verify deployment
ssh deploy@94.16.110.151 'ls -la /home/deploy/michaelschiemer/releases/'
```
### Auto-Detection of Git Tags
If `release_tag` is not specified, the playbook automatically tries to use the current Git tag:
```bash
# On a tagged commit
git describe --tags --exact-match # Shows: v1.2.3
# The deployment automatically uses v1.2.3 as the release name
ansible-playbook -i inventories/production/hosts.yml playbooks/deploy-rsync-based.yml
```
**Fallback**: if no Git tag is present, a timestamp is used as the release name
### Listing Releases
```bash
ansible -i inventories/production/hosts.yml web_servers -b -a \
"ls -lt /home/deploy/michaelschiemer/releases | head -10"
```
**Output**:
```
total 20
drwxr-xr-x 10 deploy deploy 4096 Oct 26 18:50 v1.2.3
drwxr-xr-x 10 deploy deploy 4096 Oct 25 14:32 v1.2.2
drwxr-xr-x 10 deploy deploy 4096 Oct 24 10:15 1761499893
drwxr-xr-x 10 deploy deploy 4096 Oct 23 09:00 v1.2.1
lrwxrwxrwx 1 deploy deploy 56 Oct 26 18:50 current -> /home/deploy/michaelschiemer/releases/v1.2.3
```
## Rollback Mechanisms
### Automatic Rollback
On health check failure, the playbook rolls back automatically:
1. **Stop failed release containers**
2. **Switch the `current` symlink** back to `previous_release`
3. **Start previous release containers**
4. **Remove failed release directory**
5. **Log rollback event**
**Trigger**: health check status ≠ 200 (after 3 retries with a 5s delay)
### Manual Rollback
Manual rollback to a previous release:
```bash
# 1. List available releases
ansible -i inventories/production/hosts.yml web_servers -b -a \
"ls -lt /home/deploy/michaelschiemer/releases"
# 2. Identify target release (e.g. v1.2.2)
TARGET_RELEASE="v1.2.2"
# 3. Manual rollback via Ansible
ansible -i inventories/production/hosts.yml web_servers -b -m shell -a "
cd /home/deploy/michaelschiemer && \
docker compose -f current/docker-compose.yml -f current/docker-compose.production.yml down && \
ln -sfn releases/${TARGET_RELEASE} current && \
docker compose -f current/docker-compose.yml -f current/docker-compose.production.yml up -d
"
# 4. Verify rollback
curl -k https://94.16.110.151/health/summary
```
**Or**: create a rollback playbook:
```yaml
# playbooks/rollback-rsync.yml
- name: Manual Rollback to Previous Release
hosts: web_servers
become: true
vars:
app_name: michaelschiemer
app_user: deploy
app_base_path: "/home/{{ app_user }}/{{ app_name }}"
target_release: "{{ rollback_target }}" # --extra-vars "rollback_target=v1.2.2"
tasks:
- name: Stop current release
command: docker compose down
args:
chdir: "{{ app_base_path }}/current"
become_user: "{{ app_user }}"
- name: Switch to target release
file:
src: "{{ app_base_path }}/releases/{{ target_release }}"
dest: "{{ app_base_path }}/current"
state: link
force: yes
- name: Start target release
command: docker compose -f docker-compose.yml -f docker-compose.production.yml up -d
args:
chdir: "{{ app_base_path }}/current"
become_user: "{{ app_user }}"
- name: Health check
uri:
url: "https://{{ ansible_host }}/health/summary"
method: GET
status_code: 200
validate_certs: no
retries: 3
delay: 5
# Usage:
# ansible-playbook -i inventories/production/hosts.yml playbooks/rollback-rsync.yml --extra-vars "rollback_target=v1.2.2"
```
## Health Checks
### Configured Health Endpoints
**Primary Health Check**: `https://{{ ansible_host }}/health/summary`
**Retry Strategy**:
- Retries: 3
- Delay: 5 seconds
- Success: HTTP 200 status code
### Health Check Flow
```yaml
- name: Health check - Summary endpoint (HTTPS)
uri:
url: "https://{{ ansible_host }}/health/summary"
method: GET
return_content: yes
status_code: 200
validate_certs: no
follow_redirects: none
register: health_check
retries: 3
delay: 5
until: health_check.status == 200
ignore_errors: yes
- name: Rollback on health check failure
block:
- name: Stop failed release containers
- name: Switch symlink back to previous release
- name: Start previous release containers
- name: Remove failed release
- name: Log rollback
- name: Fail deployment
when: health_check.status != 200
```
### Custom Health Endpoints
Add additional health checks:
```yaml
# After the primary health check in deploy-rsync-based.yml
- name: Health check - Database connectivity
uri:
url: "https://{{ ansible_host }}/health/database"
method: GET
status_code: 200
validate_certs: no
retries: 2
delay: 3
ignore_errors: yes
register: db_health_check
- name: Health check - Cache service
uri:
url: "https://{{ ansible_host }}/health/cache"
method: GET
status_code: 200
validate_certs: no
retries: 2
delay: 3
ignore_errors: yes
register: cache_health_check
- name: Aggregate health check results
set_fact:
overall_health: "{{ health_check.status == 200 and db_health_check.status == 200 and cache_health_check.status == 200 }}"
- name: Rollback on any health check failure
block:
# ... rollback steps ...
when: not overall_health
```
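The aggregation logic above can be reasoned about in plain shell; a minimal sketch, assuming the three status codes have already been collected (e.g. with `curl -o /dev/null -w '%{http_code}'` — the values below are hypothetical):

```shell
# Hypothetical status codes from the three health endpoints
summary_status=200
db_status=200
cache_status=503

# Overall health requires every endpoint to return 200,
# mirroring the set_fact expression in the playbook
if [ "$summary_status" -eq 200 ] && [ "$db_status" -eq 200 ] && [ "$cache_status" -eq 200 ]; then
  overall_health=true
else
  overall_health=false
fi
echo "overall_health=$overall_health"   # → overall_health=false
```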
## Monitoring & Logging
### Deployment Log
All deployments are logged:
```bash
# Show the deployment log
ssh deploy@94.16.110.151 'tail -50 /home/deploy/michaelschiemer/deploy.log'
```
**Log Format**:
```
[2024-10-26T18:50:30Z] Deployment started - Release: v1.2.3 - User: michael
[2024-10-26T18:50:35Z] Release: v1.2.3 | Git Hash: a1b2c3d | Commit: a1b2c3d4e5f6g7h8i9j0
[2024-10-26T18:50:50Z] Symlink switched: /home/deploy/michaelschiemer/current -> releases/v1.2.3
[2024-10-26T18:50:55Z] Health check: 200
[2024-10-26T18:50:56Z] Cleanup: Kept 5 releases, removed 1
[2024-10-26T18:50:57Z] Deployment completed successfully - Release: v1.2.3
```
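The structured timestamps make this log easy to grep; for example, past rollbacks can be listed with a one-liner (the sample entries below are illustrative — the real file lives at `/home/deploy/michaelschiemer/deploy.log`):

```shell
# Illustrative sample in the log format shown above
log='[2024-10-26T18:50:30Z] Deployment started - Release: v1.2.3 - User: michael
[2024-10-26T18:50:55Z] Health check: 200
[2024-10-25T17:10:12Z] ROLLBACK: Health check failed, reverted to 1761565432'

# List only rollback events
printf '%s\n' "$log" | grep 'ROLLBACK'
```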
### Docker Logs
```bash
# Application logs
ssh deploy@94.16.110.151 'cd /home/deploy/michaelschiemer/current && docker compose logs -f'
# Specific service
ssh deploy@94.16.110.151 'cd /home/deploy/michaelschiemer/current && docker compose logs -f php'
```
### System Monitoring
```bash
# Disk usage
ansible -i inventories/production/hosts.yml web_servers -b -a "df -h /home/deploy/michaelschiemer"
# Release directory sizes
ansible -i inventories/production/hosts.yml web_servers -b -a "du -sh /home/deploy/michaelschiemer/releases/*"
# Container status
ansible -i inventories/production/hosts.yml web_servers -b -a "docker ps"
```
## Configuration
### Shared Directories
Configured in `deploy-rsync-based.yml`:
```yaml
shared_dirs:
- storage/sessions
- public/uploads
shared_files:
- .env.production
```
**Note**: `storage/logs`, `storage/cache`, and `storage/uploads` are managed via Docker volumes.
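The effect of the shared symlinks can be sketched in a throwaway directory (the paths below are placeholders, not the production layout):

```shell
# Build a miniature releases/shared layout and wire the symlinks that the
# playbook creates with its `file: state=link` tasks
base=$(mktemp -d)
mkdir -p "$base/shared/public/uploads" "$base/releases/v1.2.3/public"
printf 'APP_ENV=production\n' > "$base/shared/.env.production"

# Replace release-local paths with symlinks into shared/
ln -sfn "$base/shared/public/uploads" "$base/releases/v1.2.3/public/uploads"
ln -sfn "$base/shared/.env.production" "$base/releases/v1.2.3/.env.production"

# The release now reads its config through the shared file
cat "$base/releases/v1.2.3/.env.production"   # → APP_ENV=production
```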
### Rsync Exclusions
Files and directories that are NOT deployed:
```yaml
rsync_excludes:
- .git/
- .github/
- node_modules/
- .env
- .env.local
- .env.development
- storage/
- public/uploads/
- tests/
- .idea/
- .vscode/
- "*.log"
- .DS_Store
- deployment/
- database.sqlite
- "*.cache"
- .php-cs-fixer.cache
- var/cache/
- var/logs/
```
### Keep Releases
Number of releases to keep:
```yaml
keep_releases: 5 # default: 5 releases
```
Adjust as needed:
```bash
ansible-playbook ... --extra-vars "keep_releases=10"
```
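The cleanup step behind `keep_releases` boils down to "sort newest first, delete everything past N"; a sketch against fake timestamped releases (the names are illustrative):

```shell
keep_releases=2
releases=$(mktemp -d)
for r in 1761560001 1761560002 1761560003 1761560004; do
  mkdir "$releases/$r"
done

# Timestamped names sort lexically, so `sort -r` puts the newest first;
# everything after the first $keep_releases entries is removed
ls -1 "$releases" | sort -r | tail -n +$((keep_releases + 1)) | while read -r old; do
  rm -rf "$releases/${old:?}"
done
ls -1 "$releases"   # → 1761560003 and 1761560004 remain
```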
## Troubleshooting
### Problem: Deployment lock exists
**Error**:
```
FAILED! => msg: Deployment already in progress. Lock file exists: /home/deploy/michaelschiemer/.deploy.lock
```
**Cause**: A previous deployment was interrupted
**Solution**:
```bash
# Option 1: Force deployment
ansible-playbook ... --extra-vars "force_deploy=true"
# Option 2: Remove the lock manually
ansible -i inventories/production/hosts.yml web_servers -b -m file \
-a "path=/home/deploy/michaelschiemer/.deploy.lock state=absent"
```
### Problem: Health check fails
**Error**:
```
FAILED! => Deployment failed - health check returned 503. Rolled back to previous release.
```
**Diagnosis**:
```bash
# 1. Check application logs
ssh deploy@94.16.110.151 'cd /home/deploy/michaelschiemer/current && docker compose logs --tail=100'
# 2. Check container status
ssh deploy@94.16.110.151 'docker ps -a'
# 3. Manual health check
curl -k -v https://94.16.110.151/health/summary
# 4. Check deployment log
ssh deploy@94.16.110.151 'tail -100 /home/deploy/michaelschiemer/deploy.log'
```
**Common causes**:
- `.env.production` missing or misconfigured
- Database migration failed
- Docker containers fail to start
- SSL certificate problems
### Problem: Rsync too slow
**Symptom**: Deployment takes several minutes
**Optimization**:
```yaml
# In deploy-rsync-based.yml - extend the rsync command
--compress # enable compression
--delete-after # delete files after the transfer
--delay-updates # atomic updates
```
**Alternative**: Run rsync over the local network instead of the internet:
```yaml
# If the production server is on the same network
ansible_host: 192.168.1.100 # local IP instead of the public one
```
### Problem: Git tag not detected
**Symptom**: Deployment uses a timestamp instead of the Git tag
**Diagnosis**:
```bash
# Check whether HEAD is on a tagged commit
git describe --tags --exact-match
# Should print: v1.2.3 (without an error)
# Check whether the tag exists
git tag -l
```
**Solution**:
```bash
# 1. Create the tag if it is missing
git tag v1.2.3
git push origin v1.2.3
# 2. Or specify the tag explicitly
ansible-playbook ... --extra-vars "release_tag=v1.2.3"
```
## Best Practices
### 1. Always Tag Releases
```bash
# Always create a Git tag before deploying to production
git tag -a v1.2.3 -m "Release v1.2.3: Feature description"
git push origin v1.2.3
```
**Benefits**:
- Clear release history
- Easy rollback to specific versions
- Semantic versioning tracking
### 2. Test Deployment in Staging First
```bash
# Staging deployment (separate inventory)
ansible-playbook -i inventories/staging/hosts.yml playbooks/deploy-rsync-based.yml \
--extra-vars "release_tag=v1.2.3"
# After successful tests → production
ansible-playbook -i inventories/production/hosts.yml playbooks/deploy-rsync-based.yml \
--extra-vars "release_tag=v1.2.3"
```
### 3. Monitor Deployment Log
```bash
# Real-time deployment monitoring
ssh deploy@94.16.110.151 'tail -f /home/deploy/michaelschiemer/deploy.log'
```
### 4. Backup vor Major Releases
```bash
# Back up the database before a major release
ssh deploy@94.16.110.151 'cd /home/deploy/michaelschiemer/current && \
docker compose exec php php console.php db:backup'
```
### 5. Verify Health Before Release Tag
```bash
# Health check against staging
curl -k https://staging.michaelschiemer.de/health/summary
# On success → create the production tag
git tag v1.2.3
git push origin v1.2.3
```
## Comparison: Rsync vs Git-based
### Rsync-based (Current)
**Pros**:
- ✅ No dependency on a Git server
- ✅ Works with local development
- ✅ Fast for small changes
- ✅ Simple setup
- ✅ Git tag support without an external server
**Cons**:
- ❌ No Git history on the production server
- ❌ Requires local build steps (Composer, NPM)
- ❌ Rsync over the internet can be slow
### Git-based
**Pros**:
- ✅ Git history on the production server
- ✅ Atomic releases via Git commits
- ✅ Builds run directly on the production server
- ✅ No local build required
**Cons**:
- ❌ The Gitea server must be publicly reachable
- ❌ Additional infrastructure (Gitea)
- ❌ More complex SSH key management
## Performance Optimizations
### 1. Pre-built Assets
Assets are built locally → faster deployment:
```yaml
pre_tasks:
- name: Install Composer dependencies locally
- name: Build NPM assets locally
```
### 2. Docker Layer Caching
Docker images are cached on the production server → faster startup.
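Layer caching only pays off when the compose files reference a pinned registry image instead of building on the server; a hedged sketch of such an overlay (registry host and service name are placeholders, not the project's actual configuration):

```yaml
# docker-compose.production.yml (fragment) - pulling a tagged image lets the
# Docker daemon reuse unchanged layers from its local cache on every deploy
services:
  php:
    image: registry.example.com/michaelschiemer/php:${IMAGE_TAG}
    pull_policy: always
```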
### 3. Shared Directories
Shared directories avoid unnecessary copying:
- `storage/sessions`
- `public/uploads`
- `.env.production`
### 4. Cleanup Old Releases
Keeping only 5 releases saves disk space:
```yaml
keep_releases: 5
```
## Related Files
- `deploy-rsync-based.yml` - Rsync-based Deployment Playbook
- `deploy-git-based.yml` - Git-based Deployment Playbook (Alternative)
- `rollback-git-based.yml` - Git-based Rollback Playbook
- `inventories/production/hosts.yml` - Production Server Configuration
## Summary
The rsync-based deployment provides:
- ✅ **Production-ready** zero-downtime deployment
- ✅ **Git tag support** for a clear release history
- ✅ **Automatic rollback** on failures
- ✅ **Simple setup** without external dependencies
- ✅ **Fast and reliable** for development and production
**Recommendation**: Ideal for local development → production workflows without additional Git server infrastructure.

---
# Production Container Deployment Playbook
# Deploys pre-built container images for Custom PHP Framework
- name: Deploy Custom PHP Framework Application
hosts: web_servers
become: true
gather_facts: true
vars:
# Environment variable with proper fallback
deployment_env: "{{ deploy_environment | default('production') }}"
app_path: "/var/www/html"
backup_path: "/var/www/backups"
image_tag: "{{ IMAGE_TAG | default('latest') }}"
domain_name: "{{ DOMAIN_NAME | default('michaelschiemer.de') }}"
backup_enabled: "{{ BACKUP_ENABLED | default(true) | bool }}"
backup_retention_days: "{{ BACKUP_RETENTION_DAYS | default(30) }}"
cdn_update: "{{ CDN_UPDATE | default(false) | bool }}"
# Paths for templates/compose files, relative to the playbook directory
compose_base_src: "{{ playbook_dir }}/../../../docker-compose.yml"
compose_overlay_src: "{{ playbook_dir }}/../../applications/docker-compose.{{ deployment_env }}.yml"
env_template_src: "{{ playbook_dir }}/../../applications/environments/.env.{{ deployment_env }}.template"
# Compose project name: defaults to the basename of app_path (e.g. 'html')
compose_project: "{{ compose_project_name | default(app_path | basename) }}"
pre_tasks:
- name: Verify deployment requirements
assert:
that:
- app_path is defined
- domain_name is defined
- image_tag is defined
- image_tag != 'latest' or deployment_env != 'production'
fail_msg: "Production deployment requires specific image tag (not 'latest')"
tags: always
- name: Create required directories
ansible.builtin.file:
path: "{{ item }}"
state: directory
owner: deploy
group: deploy
mode: '0755'
loop:
- "{{ app_path }}"
- "{{ backup_path }}"
- /var/log/applications
tags: always
- name: Store current image tag for rollback
ansible.builtin.shell: |
if [ -f {{ app_path }}/.env.{{ deployment_env }} ]; then
grep '^IMAGE_TAG=' {{ app_path }}/.env.{{ deployment_env }} | cut -d'=' -f2 > {{ app_path }}/.last_release || echo 'none'
fi
ignore_errors: true
tags: backup
tasks:
- name: Check for existing deployment
ansible.builtin.stat:
path: "{{ app_path }}/docker-compose.yml"
register: existing_deployment
tags: deploy
- name: Render environment file from template
ansible.builtin.template:
src: "{{ env_template_src }}"
dest: "{{ app_path }}/.env.{{ deployment_env }}"
owner: deploy
group: deploy
mode: '0600'
backup: true
vars:
IMAGE_TAG: "{{ image_tag }}"
DOMAIN_NAME: "{{ domain_name }}"
# no_log: true # Disabled for debugging
tags: deploy
- name: Copy Docker Compose files (base + overlay)
ansible.builtin.copy:
src: "{{ item.src }}"
dest: "{{ app_path }}/{{ item.dest }}"
owner: deploy
group: deploy
mode: '0644'
loop:
- { src: "{{ compose_base_src }}", dest: "docker-compose.yml" }
- { src: "{{ compose_overlay_src }}", dest: "docker-compose.{{ deployment_env }}.yml" }
tags: deploy
- name: Stop existing services gracefully if present
community.docker.docker_compose_v2:
project_src: "{{ app_path }}"
files:
- docker-compose.yml
- "docker-compose.{{ deployment_env }}.yml"
env_files:
- ".env.{{ deployment_env }}"
state: stopped
timeout: 60
when: existing_deployment.stat.exists
ignore_errors: true
tags: deploy
- name: Create storage volumes with proper permissions
ansible.builtin.file:
path: "{{ app_path }}/{{ item }}"
state: directory
owner: www-data
group: www-data
mode: '0775'
loop:
- storage
- storage/logs
- storage/cache
- var
- var/logs
- src/Framework/Cache/storage
- src/Framework/Cache/storage/cache
tags: deploy
- name: Deploy application with Docker Compose v2
community.docker.docker_compose_v2:
project_src: "{{ app_path }}"
files:
- docker-compose.yml
- "docker-compose.{{ deployment_env }}.yml"
env_files:
- ".env.{{ deployment_env }}"
pull: "always"
build: "never"
state: present
recreate: "auto"
remove_orphans: true
timeout: 300
tags: deploy
- name: Wait for PHP container to be healthy (label-based)
community.docker.docker_container_info:
filters:
label:
- "com.docker.compose.service=php"
- "com.docker.compose.project={{ compose_project }}"
register: php_info
retries: 20
delay: 10
until: php_info.containers is defined and
(php_info.containers | length) > 0 and
(
(php_info.containers[0].State.Health is defined and php_info.containers[0].State.Health.Status == "healthy")
or
php_info.containers[0].State.Status == "running"
)
tags: deploy
- name: Run database migrations
community.docker.docker_container_exec:
container: "{{ php_info.containers[0].Id }}"
command: php console.php db:migrate --force
chdir: /var/www/html
tags: deploy
- name: Clear application caches
community.docker.docker_container_exec:
container: "{{ php_info.containers[0].Id }}"
command: "php console.php {{ item }}"
chdir: /var/www/html
loop:
- cache:clear
- view:clear
ignore_errors: true
tags: deploy
- name: Wait for application to be ready
ansible.builtin.uri:
url: "https://{{ domain_name }}/health"
method: GET
status_code: 200
timeout: 30
headers:
User-Agent: "Mozilla/5.0 (Ansible Health Check)"
validate_certs: true
register: http_health
retries: 15
delay: 10
until: http_health.status == 200
tags: deploy
- name: Store successful deployment tag
ansible.builtin.copy:
content: "{{ image_tag }}"
dest: "{{ app_path }}/.last_successful_release"
owner: deploy
group: deploy
mode: '0644'
tags: deploy
post_tasks:
- name: Clean up old backups
ansible.builtin.find:
paths: "{{ backup_path }}"
age: "{{ backup_retention_days }}d"
file_type: directory
register: old_backups
when: backup_enabled
tags: cleanup
- name: Remove old backup directories
ansible.builtin.file:
path: "{{ item.path }}"
state: absent
loop: "{{ old_backups.files }}"
when: backup_enabled and old_backups.files is defined
tags: cleanup
- name: CDN update notification
ansible.builtin.debug:
msg: "CDN update would be executed here (run separate CDN playbook)"
when: cdn_update | default(false) | bool
tags: cdn
- name: Deployment success notification
ansible.builtin.debug:
msg:
- "Application deployment completed successfully"
- "Image Tag: {{ image_tag }}"
- "Environment: {{ deployment_env }}"
- "Domain: {{ domain_name }}"
- "CDN Updated: {{ cdn_update }}"
tags: always

---
# Git-based Deployment Playbook with Releases/Symlink Pattern (Gitea)
# Implements production-ready deployment with zero-downtime and rollback support
# Uses Gitea as Git repository server with SSH-based authentication
#
# Prerequisites:
# - SSH deploy key must be placed in deployment/infrastructure/secrets/gitea_deploy_key
# - Deploy key must be added to Gitea repository or user account
#
# Usage:
# ansible-playbook -i inventories/production/hosts.yml playbooks/deploy-git-based.yml
# ansible-playbook -i inventories/production/hosts.yml playbooks/deploy-git-based.yml --extra-vars "git_branch=main"
# ansible-playbook -i inventories/production/hosts.yml playbooks/deploy-git-based.yml --extra-vars "release_tag=v1.0.0"
- name: Deploy Custom PHP Framework (Git-based with Releases)
hosts: web_servers
become: true
vars:
# Application configuration
app_name: michaelschiemer
app_user: deploy
app_group: deploy
# Deployment paths
app_base_path: "/var/www/{{ app_name }}"
releases_path: "{{ app_base_path }}/releases"
shared_path: "{{ app_base_path }}/shared"
current_path: "{{ app_base_path }}/current"
# Git configuration (Gitea)
# Use localhost for local testing, git.michaelschiemer.de for production
git_repo: "git@localhost:michael/michaelschiemer.git"
git_branch: "{{ release_tag | default('main') }}"
git_ssh_key: "/home/{{ app_user }}/.ssh/gitea_deploy_key"
# Release configuration
release_timestamp: "{{ ansible_date_time.epoch }}"
release_name: "{{ release_tag | default(release_timestamp) }}"
release_path: "{{ releases_path }}/{{ release_name }}"
# Deployment settings
keep_releases: 5
composer_install_flags: "--no-dev --optimize-autoloader --no-interaction"
# Shared directories and files
shared_dirs:
- storage/logs
- storage/cache
- storage/sessions
- storage/uploads
- public/uploads
shared_files:
- .env.production
tasks:
# ==========================================
# 1. SSH Key Setup for Gitea Access
# ==========================================
- name: Create .ssh directory for deploy user
file:
path: "/home/{{ app_user }}/.ssh"
state: directory
owner: "{{ app_user }}"
group: "{{ app_group }}"
mode: '0700'
- name: Copy Gitea deploy SSH private key
copy:
src: "{{ playbook_dir }}/../secrets/gitea_deploy_key"
dest: "{{ git_ssh_key }}"
owner: "{{ app_user }}"
group: "{{ app_group }}"
mode: '0600'
- name: Copy Gitea deploy SSH public key
copy:
src: "{{ playbook_dir }}/../secrets/gitea_deploy_key.pub"
dest: "{{ git_ssh_key }}.pub"
owner: "{{ app_user }}"
group: "{{ app_group }}"
mode: '0644'
- name: Configure SSH for Gitea (disable StrictHostKeyChecking)
blockinfile:
path: "/home/{{ app_user }}/.ssh/config"
create: yes
owner: "{{ app_user }}"
group: "{{ app_group }}"
mode: '0600'
marker: "# {mark} ANSIBLE MANAGED BLOCK - Gitea SSH Config"
block: |
Host localhost
HostName localhost
Port 2222
User git
IdentityFile {{ git_ssh_key }}
StrictHostKeyChecking no
UserKnownHostsFile /dev/null
Host git.michaelschiemer.de
HostName git.michaelschiemer.de
Port 2222
User git
IdentityFile {{ git_ssh_key }}
StrictHostKeyChecking no
UserKnownHostsFile /dev/null
# ==========================================
# 2. Directory Structure Setup
# ==========================================
- name: Create base application directory
file:
path: "{{ app_base_path }}"
state: directory
owner: "{{ app_user }}"
group: "{{ app_group }}"
mode: '0755'
- name: Check if deployment lock exists
stat:
path: "{{ app_base_path }}/.deploy.lock"
register: deploy_lock
- name: Fail if deployment is already in progress
fail:
msg: "Deployment already in progress. Lock file exists: {{ app_base_path }}/.deploy.lock"
when: deploy_lock.stat.exists
- name: Create deployment lock
file:
path: "{{ app_base_path }}/.deploy.lock"
state: touch
owner: "{{ app_user }}"
group: "{{ app_group }}"
mode: '0644'
- name: Log deployment start
lineinfile:
path: "{{ app_base_path }}/deploy.log"
line: "[{{ ansible_date_time.iso8601 }}] Deployment started - Release: {{ release_name }} - User: {{ ansible_user_id }}"
create: yes
owner: "{{ app_user }}"
group: "{{ app_group }}"
- name: Create releases directory
file:
path: "{{ releases_path }}"
state: directory
owner: "{{ app_user }}"
group: "{{ app_group }}"
mode: '0755'
- name: Create shared directory
file:
path: "{{ shared_path }}"
state: directory
owner: "{{ app_user }}"
group: "{{ app_group }}"
mode: '0755'
- name: Create shared subdirectories
file:
path: "{{ shared_path }}/{{ item }}"
state: directory
owner: "{{ app_user }}"
group: "{{ app_group }}"
mode: '0755'
loop: "{{ shared_dirs }}"
# ==========================================
# 2. Git Repository Clone
# ==========================================
- name: Clone repository to new release directory
git:
repo: "{{ git_repo }}"
dest: "{{ release_path }}"
version: "{{ git_branch }}"
force: yes
depth: 1
become_user: "{{ app_user }}"
register: git_clone
- name: Get current commit hash
command: git rev-parse HEAD
args:
chdir: "{{ release_path }}"
register: commit_hash
changed_when: false
- name: Log commit hash
lineinfile:
path: "{{ app_base_path }}/deploy.log"
line: "[{{ ansible_date_time.iso8601 }}] Commit: {{ commit_hash.stdout }}"
# ==========================================
# 3. Shared Files/Directories Symlinks
# ==========================================
- name: Remove shared directories from release (they will be symlinked)
file:
path: "{{ release_path }}/{{ item }}"
state: absent
loop: "{{ shared_dirs }}"
- name: Create symlinks for shared directories
file:
src: "{{ shared_path }}/{{ item }}"
dest: "{{ release_path }}/{{ item }}"
state: link
owner: "{{ app_user }}"
group: "{{ app_group }}"
loop: "{{ shared_dirs }}"
- name: Create symlinks for shared files
file:
src: "{{ shared_path }}/{{ item }}"
dest: "{{ release_path }}/{{ item }}"
state: link
owner: "{{ app_user }}"
group: "{{ app_group }}"
loop: "{{ shared_files }}"
when: shared_files | length > 0
# ==========================================
# 4. Dependencies Installation
# ==========================================
- name: Install Composer dependencies
composer:
command: install
arguments: "{{ composer_install_flags }}"
working_dir: "{{ release_path }}"
become_user: "{{ app_user }}"
environment:
COMPOSER_HOME: "/home/{{ app_user }}/.composer"
- name: Check if package.json exists
stat:
path: "{{ release_path }}/package.json"
register: package_json
- name: Install NPM dependencies and build assets
block:
- name: Install NPM dependencies
npm:
path: "{{ release_path }}"
state: present
production: yes
become_user: "{{ app_user }}"
- name: Build production assets
command: npm run build
args:
chdir: "{{ release_path }}"
become_user: "{{ app_user }}"
when: package_json.stat.exists
# ==========================================
# 5. File Permissions
# ==========================================
- name: Set correct ownership for release
file:
path: "{{ release_path }}"
owner: "{{ app_user }}"
group: "{{ app_group }}"
recurse: yes
- name: Make console script executable
file:
path: "{{ release_path }}/console.php"
mode: '0755'
ignore_errors: yes
# ==========================================
# 6. Database Migrations (Optional)
# ==========================================
- name: Run database migrations
command: php console.php db:migrate --no-interaction
args:
chdir: "{{ release_path }}"
become_user: "{{ app_user }}"
when: run_migrations | default(false) | bool
register: migrations_result
- name: Log migration result
lineinfile:
path: "{{ app_base_path }}/deploy.log"
line: "[{{ ansible_date_time.iso8601 }}] Migrations: {{ migrations_result.stdout | default('skipped') }}"
when: run_migrations | default(false) | bool
# ==========================================
# 7. Symlink Switch (Zero-Downtime)
# ==========================================
- name: Get current release (before switch)
stat:
path: "{{ current_path }}"
register: current_release_before
- name: Store previous release path for rollback
set_fact:
previous_release: "{{ current_release_before.stat.lnk_source | default('none') }}"
- name: Switch current symlink to new release (atomic operation)
file:
src: "{{ release_path }}"
dest: "{{ current_path }}"
state: link
owner: "{{ app_user }}"
group: "{{ app_group }}"
force: yes
- name: Log symlink switch
lineinfile:
path: "{{ app_base_path }}/deploy.log"
line: "[{{ ansible_date_time.iso8601 }}] Symlink switched: {{ current_path }} -> {{ release_path }}"
# ==========================================
# 8. Health Checks
# ==========================================
- name: Wait for application to be ready
wait_for:
timeout: 10
delegate_to: localhost
- name: Health check - Summary endpoint
uri:
url: "http://{{ ansible_host }}/health/summary"
method: GET
return_content: yes
status_code: 200
register: health_check
retries: 3
delay: 5
until: health_check.status == 200
ignore_errors: yes
- name: Log health check result
lineinfile:
path: "{{ app_base_path }}/deploy.log"
line: "[{{ ansible_date_time.iso8601 }}] Health check: {{ health_check.status | default('FAILED') }}"
- name: Rollback on health check failure
block:
- name: Switch symlink back to previous release
file:
src: "{{ previous_release }}"
dest: "{{ current_path }}"
state: link
force: yes
when: previous_release != 'none'
- name: Remove failed release
file:
path: "{{ release_path }}"
state: absent
- name: Log rollback
lineinfile:
path: "{{ app_base_path }}/deploy.log"
line: "[{{ ansible_date_time.iso8601 }}] ROLLBACK: Health check failed, reverted to {{ previous_release }}"
- name: Fail deployment
fail:
msg: "Deployment failed - health check returned {{ health_check.status }}. Rolled back to previous release."
when: health_check.status != 200
# ==========================================
# 9. Cleanup Old Releases
# ==========================================
- name: Get list of all releases
find:
paths: "{{ releases_path }}"
file_type: directory
register: all_releases
- name: Sort releases by creation time
set_fact:
sorted_releases: "{{ all_releases.files | sort(attribute='ctime', reverse=true) }}"
- name: Remove old releases (keep last {{ keep_releases }})
file:
path: "{{ item.path }}"
state: absent
loop: "{{ sorted_releases[keep_releases:] }}"
when: sorted_releases | length > keep_releases
- name: Log cleanup
lineinfile:
path: "{{ app_base_path }}/deploy.log"
line: "[{{ ansible_date_time.iso8601 }}] Cleanup: Kept {{ [sorted_releases | length, keep_releases] | min }} releases, removed {{ [sorted_releases | length - keep_releases, 0] | max }}"
post_tasks:
- name: Cleanup and logging
block:
- name: Remove deployment lock
file:
path: "{{ app_base_path }}/.deploy.lock"
state: absent
- name: Log deployment completion
lineinfile:
path: "{{ app_base_path }}/deploy.log"
line: "[{{ ansible_date_time.iso8601 }}] Deployment completed successfully - Release: {{ release_name }}"
- name: Display deployment summary
debug:
msg:
- "=========================================="
- "Deployment Summary"
- "=========================================="
- "Release: {{ release_name }}"
- "Commit: {{ commit_hash.stdout }}"
- "Path: {{ release_path }}"
- "Current: {{ current_path }}"
- "Health Check: {{ health_check.status | default('N/A') }}"
- "Previous Release: {{ previous_release }}"
- "=========================================="
rescue:
- name: Remove deployment lock on failure
file:
path: "{{ app_base_path }}/.deploy.lock"
state: absent
- name: Log deployment failure
lineinfile:
path: "{{ app_base_path }}/deploy.log"
line: "[{{ ansible_date_time.iso8601 }}] DEPLOYMENT FAILED - Release: {{ release_name }}"
- name: Fail with error message
fail:
msg: "Deployment failed. Check {{ app_base_path }}/deploy.log for details."

---
# Rsync-based Deployment Playbook with Releases/Symlink Pattern
# Implements production-ready deployment with zero-downtime and rollback support
# No GitHub dependency - deploys directly from local machine
#
# Usage:
# ansible-playbook -i inventories/production/hosts.yml playbooks/deploy-rsync-based.yml
# ansible-playbook -i inventories/production/hosts.yml playbooks/deploy-rsync-based.yml --extra-vars "release_tag=v1.0.0"
- name: Deploy Custom PHP Framework (Rsync-based with Releases)
hosts: web_servers
become: true
vars:
# Application configuration
app_name: michaelschiemer
app_user: deploy
app_group: deploy
# Deployment paths
app_base_path: "/home/{{ app_user }}/{{ app_name }}"
releases_path: "{{ app_base_path }}/releases"
shared_path: "{{ app_base_path }}/shared"
current_path: "{{ app_base_path }}/current"
# Local source directory (project root on your machine)
local_project_path: "{{ playbook_dir }}/../../.."
# Release configuration
release_timestamp: "{{ ansible_date_time.epoch }}"
# Note: effective_release_tag is set in pre_tasks based on Git tags
release_name: "{{ effective_release_tag | default(release_tag | default(release_timestamp)) }}"
release_path: "{{ releases_path }}/{{ release_name }}"
# Deployment settings
keep_releases: 5
composer_install_flags: "--no-dev --optimize-autoloader --no-interaction"
# Shared directories and files
# Shared directories that need symlinks
# NOTE: storage/logs, storage/cache, storage/uploads are handled by Docker volumes
shared_dirs:
- storage/sessions
- public/uploads
shared_files:
- .env.production
# Rsync exclusions
rsync_excludes:
- .git/
- .github/
- node_modules/
- .env
- .env.local
- .env.development
- storage/
- public/uploads/
- tests/
- .idea/
- .vscode/
- "*.log"
- .DS_Store
- deployment/
- database.sqlite
- "*.cache"
- .php-cs-fixer.cache
- var/cache/
- var/logs/
- "*.php85/"
- src/**/*.php85/
pre_tasks:
# Git Tag Detection and Validation
- name: Get current Git tag (if release_tag not specified)
local_action:
module: command
cmd: git describe --tags --exact-match
chdir: "{{ local_project_path }}"
register: git_current_tag
become: false
ignore_errors: yes
when: release_tag is not defined
- name: Get current Git commit hash
local_action:
module: command
cmd: git rev-parse --short HEAD
chdir: "{{ local_project_path }}"
register: git_commit_hash
become: false
- name: Set release_name from Git tag or timestamp
set_fact:
effective_release_tag: "{{ release_tag | default(git_current_tag.stdout if (git_current_tag is defined and git_current_tag.rc == 0) else release_timestamp) }}"
git_hash: "{{ git_commit_hash.stdout }}"
- name: Display deployment information
debug:
msg:
- "=========================================="
- "Deployment Information"
- "=========================================="
- "Release: {{ effective_release_tag }}"
- "Git Hash: {{ git_hash }}"
- "Source: {{ local_project_path }}"
- "Target: {{ ansible_host }}"
- "=========================================="
- name: Install Composer dependencies locally before deployment
local_action:
module: command
cmd: composer install {{ composer_install_flags }}
chdir: "{{ local_project_path }}"
become: false
- name: Build NPM assets locally before deployment
local_action:
module: command
cmd: npm run build
chdir: "{{ local_project_path }}"
become: false
- name: Check if deployment lock exists
stat:
path: "{{ app_base_path }}/.deploy.lock"
register: deploy_lock
- name: Remove stale deployment lock if force flag is set
file:
path: "{{ app_base_path }}/.deploy.lock"
state: absent
when: deploy_lock.stat.exists and (force_deploy | default(false))
- name: Fail if deployment is already in progress (without force)
fail:
msg: "Deployment already in progress. Lock file exists: {{ app_base_path }}/.deploy.lock. Use --extra-vars 'force_deploy=true' to override."
when: deploy_lock.stat.exists and not (force_deploy | default(false))
- name: Create deployment lock
file:
path: "{{ app_base_path }}/.deploy.lock"
state: touch
owner: "{{ app_user }}"
group: "{{ app_group }}"
mode: '0644'
- name: Log deployment start
lineinfile:
path: "{{ app_base_path }}/deploy.log"
line: "[{{ ansible_date_time.iso8601 }}] Deployment started - Release: {{ release_name }} - User: {{ ansible_user_id }}"
create: yes
owner: "{{ app_user }}"
group: "{{ app_group }}"
tasks:
# ==========================================
# 1. Directory Structure Setup
# ==========================================
- name: Create base application directory
file:
path: "{{ app_base_path }}"
state: directory
owner: "{{ app_user }}"
group: "{{ app_group }}"
mode: '0755'
- name: Create releases directory
file:
path: "{{ releases_path }}"
state: directory
owner: "{{ app_user }}"
group: "{{ app_group }}"
mode: '0755'
- name: Create shared directory
file:
path: "{{ shared_path }}"
state: directory
owner: "{{ app_user }}"
group: "{{ app_group }}"
mode: '0755'
- name: Create shared subdirectories
file:
path: "{{ shared_path }}/{{ item }}"
state: directory
owner: "{{ app_user }}"
group: "{{ app_group }}"
mode: '0755'
loop: "{{ shared_dirs }}"
# ==========================================
# 2. Rsync Application Code to New Release
# ==========================================
- name: Remove old release directory if exists (prevent permission issues)
file:
path: "{{ release_path }}"
state: absent
- name: Create new release directory
file:
path: "{{ release_path }}"
state: directory
owner: "{{ app_user }}"
group: "{{ app_group }}"
mode: '0755'
- name: Temporarily rename .dockerignore to prevent rsync -F from reading it
command: mv {{ local_project_path }}/.dockerignore {{ local_project_path }}/.dockerignore.bak
delegate_to: localhost
become: false
ignore_errors: yes
- name: Sync application code to new release via rsync (raw command to avoid -F flag)
command: >
rsync --delay-updates --compress --delete-after --archive --rsh='ssh -i {{ ansible_ssh_private_key_file }} -o StrictHostKeyChecking=no' --no-g --no-o
{% for exclude in rsync_excludes %}--exclude='{{ exclude }}' {% endfor %}
{{ local_project_path }}/ {{ app_user }}@{{ ansible_host }}:{{ release_path }}/
delegate_to: localhost
become: false
- name: Restore .dockerignore after rsync
command: mv {{ local_project_path }}/.dockerignore.bak {{ local_project_path }}/.dockerignore
delegate_to: localhost
become: false
ignore_errors: yes
- name: Set correct ownership for release
file:
path: "{{ release_path }}"
owner: "{{ app_user }}"
group: "{{ app_group }}"
recurse: yes
- name: Get local git commit hash (if available)
command: git rev-parse HEAD
args:
chdir: "{{ local_project_path }}"
register: commit_hash
delegate_to: localhost
become: false
changed_when: false
failed_when: false
- name: Log release and commit information
lineinfile:
path: "{{ app_base_path }}/deploy.log"
line: "[{{ ansible_date_time.iso8601 }}] Release: {{ effective_release_tag }} | Git Hash: {{ git_hash | default('N/A') }} | Commit: {{ commit_hash.stdout | default('N/A') }}"
when: commit_hash.rc == 0
# ==========================================
# 3. Shared Files/Directories Symlinks
# ==========================================
- name: Remove shared directories from release (they will be symlinked)
file:
path: "{{ release_path }}/{{ item }}"
state: absent
loop: "{{ shared_dirs }}"
- name: Create parent directories for symlinks
file:
path: "{{ release_path }}/{{ item | dirname }}"
state: directory
owner: "{{ app_user }}"
group: "{{ app_group }}"
mode: '0755'
loop: "{{ shared_dirs }}"
# Skip if dirname is current directory ('.')
when: (item | dirname) != '.'
- name: Create symlinks for shared directories
file:
src: "{{ shared_path }}/{{ item }}"
dest: "{{ release_path }}/{{ item }}"
state: link
owner: "{{ app_user }}"
group: "{{ app_group }}"
force: yes
loop: "{{ shared_dirs }}"
- name: Remove .env.production from release (will be symlinked)
file:
path: "{{ release_path }}/.env.production"
state: absent
- name: Create symlink for .env.production
file:
src: "{{ shared_path }}/.env.production"
dest: "{{ release_path }}/.env.production"
state: link
owner: "{{ app_user }}"
group: "{{ app_group }}"
force: yes
- name: Create .env symlink with relative path to shared .env.production for Docker container access
file:
src: "../../shared/.env.production"
dest: "{{ release_path }}/.env"
state: link
owner: "{{ app_user }}"
group: "{{ app_group }}"
force: yes
# ==========================================
# 4. Dependencies Installation
# ==========================================
# Composer dependencies and NPM assets are already built locally and rsync'd
# No need to run composer install or npm build on the server
# ==========================================
# 5. File Permissions
# ==========================================
- name: Make console script executable
file:
path: "{{ release_path }}/console.php"
mode: '0755'
ignore_errors: yes
# ==========================================
# 6. Database Migrations (Optional)
# ==========================================
- name: Run database migrations
command: php console.php db:migrate --no-interaction
args:
chdir: "{{ release_path }}"
become_user: "{{ app_user }}"
when: run_migrations | default(false) | bool
register: migrations_result
- name: Log migration result
lineinfile:
path: "{{ app_base_path }}/deploy.log"
line: "[{{ ansible_date_time.iso8601 }}] Migrations: {{ migrations_result.stdout | default('skipped') }}"
when: run_migrations | default(false) | bool
# ==========================================
# 7. Prepare for Deployment
# ==========================================
- name: Get current release (before switch)
stat:
path: "{{ current_path }}"
register: current_release_before
- name: Stop existing Docker containers (if any)
command: docker compose -f docker-compose.yml -f docker-compose.production.yml down
args:
chdir: "{{ current_path }}"
become_user: "{{ app_user }}"
when: current_release_before.stat.exists
ignore_errors: yes
- name: Remove any remaining containers (force cleanup all)
shell: |
docker stop certbot db redis php web queue-worker 2>/dev/null || true
docker rm certbot db redis php web queue-worker 2>/dev/null || true
become_user: "{{ app_user }}"
ignore_errors: yes
# ==========================================
# 8. Symlink Switch (Zero-Downtime)
# ==========================================
- name: Store previous release path for rollback
set_fact:
previous_release: "{{ current_release_before.stat.lnk_source | default('none') }}"
- name: Switch current symlink to new release (atomic operation)
file:
src: "{{ release_path }}"
dest: "{{ current_path }}"
state: link
owner: "{{ app_user }}"
group: "{{ app_group }}"
force: yes
- name: Log symlink switch
lineinfile:
path: "{{ app_base_path }}/deploy.log"
line: "[{{ ansible_date_time.iso8601 }}] Symlink switched: {{ current_path }} -> {{ release_path }}"
# ==========================================
# 8.5. SSL Certificate Setup
# ==========================================
- name: Create SSL directory in release
file:
path: "{{ release_path }}/ssl"
state: directory
owner: "{{ app_user }}"
group: "{{ app_group }}"
mode: '0755'
- name: Copy SSL certificates from certbot to release (if they exist)
shell: |
if docker ps | grep -q certbot; then
docker cp certbot:/etc/letsencrypt/archive/michaelschiemer.de/fullchain1.pem {{ release_path }}/ssl/fullchain.pem 2>/dev/null || true
docker cp certbot:/etc/letsencrypt/archive/michaelschiemer.de/privkey1.pem {{ release_path }}/ssl/privkey.pem 2>/dev/null || true
chown {{ app_user }}:{{ app_group }} {{ release_path }}/ssl/*.pem 2>/dev/null || true
fi
args:
chdir: "{{ current_path }}"
ignore_errors: yes
# ==========================================
# 9. Start Docker Containers
# ==========================================
- name: Start Docker containers with new release
command: docker compose -f docker-compose.yml -f docker-compose.production.yml up -d --build
args:
chdir: "{{ current_path }}"
become_user: "{{ app_user }}"
- name: Wait for containers to be ready
pause:
seconds: 15
# ==========================================
# 10. Health Checks
# ==========================================
- name: Wait for application to be ready
pause:
seconds: 10
- name: Health check - Nginx ping endpoint (HTTPS)
uri:
url: "https://{{ ansible_host }}/ping"
method: GET
return_content: yes
status_code: 200
validate_certs: no
follow_redirects: none
register: health_check
retries: 3
delay: 5
until: health_check.status == 200
ignore_errors: yes
- name: Log health check result
lineinfile:
path: "{{ app_base_path }}/deploy.log"
line: "[{{ ansible_date_time.iso8601 }}] Health check: {{ health_check.status | default('FAILED') }}"
- name: Rollback on health check failure
block:
- name: Stop failed release containers
command: docker compose -f docker-compose.yml -f docker-compose.production.yml down
args:
chdir: "{{ current_path }}"
become_user: "{{ app_user }}"
- name: Switch symlink back to previous release
file:
src: "{{ previous_release }}"
dest: "{{ current_path }}"
state: link
force: yes
when: previous_release != 'none'
- name: Start previous release containers
command: docker compose -f docker-compose.yml -f docker-compose.production.yml up -d
args:
chdir: "{{ current_path }}"
become_user: "{{ app_user }}"
when: previous_release != 'none'
- name: Remove failed release
file:
path: "{{ release_path }}"
state: absent
- name: Log rollback
lineinfile:
path: "{{ app_base_path }}/deploy.log"
line: "[{{ ansible_date_time.iso8601 }}] ROLLBACK: Health check failed, reverted to {{ previous_release }}"
- name: Fail deployment
fail:
msg: "Deployment failed - health check returned {{ health_check.status | default('no response') }}. Rolled back to previous release."
when: health_check.status | default(-1) != 200
# ==========================================
# 11. Cleanup Old Releases
# ==========================================
- name: Get list of all releases
find:
paths: "{{ releases_path }}"
file_type: directory
register: all_releases
- name: Sort releases by creation time
set_fact:
sorted_releases: "{{ all_releases.files | sort(attribute='ctime', reverse=true) }}"
- name: Remove old releases (keep last {{ keep_releases }})
file:
path: "{{ item.path }}"
state: absent
loop: "{{ sorted_releases[keep_releases:] }}"
when: sorted_releases | length > keep_releases
- name: Log cleanup
lineinfile:
path: "{{ app_base_path }}/deploy.log"
line: "[{{ ansible_date_time.iso8601 }}] Cleanup: Kept {{ [sorted_releases | length, keep_releases] | min }} releases, removed {{ [sorted_releases | length - keep_releases, 0] | max }}"
post_tasks:
- name: Cleanup and logging
block:
- name: Remove deployment lock
file:
path: "{{ app_base_path }}/.deploy.lock"
state: absent
- name: Log deployment completion
lineinfile:
path: "{{ app_base_path }}/deploy.log"
line: "[{{ ansible_date_time.iso8601 }}] Deployment completed successfully - Release: {{ release_name }}"
- name: Display deployment summary
debug:
msg:
- "=========================================="
- "Deployment Summary"
- "=========================================="
- "Release: {{ release_name }}"
- "Commit: {{ commit_hash.stdout | default('N/A') }}"
- "Path: {{ release_path }}"
- "Current: {{ current_path }}"
- "Health Check: {{ health_check.status | default('N/A') }}"
- "Previous Release: {{ previous_release }}"
- "=========================================="
rescue:
- name: Remove deployment lock on failure
file:
path: "{{ app_base_path }}/.deploy.lock"
state: absent
- name: Log deployment failure
lineinfile:
path: "{{ app_base_path }}/deploy.log"
line: "[{{ ansible_date_time.iso8601 }}] DEPLOYMENT FAILED - Release: {{ release_name }}"
- name: Fail with error message
fail:
msg: "Deployment failed. Check {{ app_base_path }}/deploy.log for details."
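The symlink switch in step 8 is zero-downtime only because replacing a symlink is a single rename(2); a minimal shell sketch of the same pattern, using hypothetical release paths:

```shell
# Sketch of the atomic "current" switch: build the new link under a
# temporary name, then rename it over the old one in one syscall.
base=$(mktemp -d)
mkdir -p "$base/releases/1761566515" "$base/releases/1761565432"
ln -s "$base/releases/1761565432" "$base/current"

# ln -sfn creates the replacement link; mv -T renames it atomically,
# so no request ever observes a missing "current" symlink.
ln -sfn "$base/releases/1761566515" "$base/current.tmp"
mv -T "$base/current.tmp" "$base/current"

readlink "$base/current"
```

This is the same trick Ansible's `file: state=link force=yes` relies on for the "Switch current symlink" task.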

---
# Initial server setup playbook for fresh Netcup VPS
# Configures security, creates deploy user, and prepares server
# Run once on fresh server installation
- name: Initial Server Setup for Custom PHP Framework
hosts: web_servers
become: true
gather_facts: true
vars:
deploy_user: "{{ deploy_user_name | default('deploy') }}"
deploy_user_shell: /bin/bash
pre_tasks:
- name: Verify this is a fresh server setup
assert:
that:
- fresh_server_setup is defined and fresh_server_setup == true
- create_deploy_user is defined and create_deploy_user == true
fail_msg: "This playbook is only for fresh server setup. Set fresh_server_setup=true to continue."
tags: always
- name: Update apt cache
apt:
update_cache: true
cache_valid_time: 3600
tags: system
tasks:
# System Updates and Basic Packages
- name: Upgrade all packages
apt:
upgrade: full
autoremove: true
autoclean: true
tags: system
- name: Install essential packages
apt:
name:
- curl
- wget
- git
- unzip
- zip
- vim
- htop
- tree
- rsync
- ca-certificates
- gnupg
- lsb-release
- software-properties-common
- apt-transport-https
- ufw
- fail2ban
state: present
tags: system
# User Management
- name: Create deploy user
user:
name: "{{ deploy_user }}"
comment: "Deployment user for Custom PHP Framework"
shell: "{{ deploy_user_shell }}"
home: "/home/{{ deploy_user }}"
create_home: true
groups: "{{ deploy_user_groups | default(['sudo']) }}"
append: true
tags: users
- name: Set up authorized_keys for deploy user
authorized_key:
user: "{{ deploy_user }}"
state: present
key: "{{ lookup('file', ansible_ssh_private_key_file + '.pub') }}"
comment: "Deploy key for {{ deploy_user }}@{{ inventory_hostname }}"
tags: users
- name: Allow deploy user sudo without password
lineinfile:
dest: /etc/sudoers.d/{{ deploy_user }}
line: "{{ deploy_user }} ALL=(ALL) NOPASSWD:ALL"
state: present
mode: '0440'
create: true
validate: 'visudo -cf %s'
tags: users
# SSH Security Hardening
- name: Configure SSH security
lineinfile:
dest: /etc/ssh/sshd_config
regexp: "{{ item.regexp }}"
line: "{{ item.line }}"
backup: true
loop:
- { regexp: '^#?PasswordAuthentication', line: 'PasswordAuthentication no' }
- { regexp: '^#?PubkeyAuthentication', line: 'PubkeyAuthentication yes' }
- { regexp: '^#?PermitRootLogin', line: 'PermitRootLogin prohibit-password' }
- { regexp: '^#?PermitEmptyPasswords', line: 'PermitEmptyPasswords no' }
- { regexp: '^#?MaxAuthTries', line: 'MaxAuthTries 3' }
- { regexp: '^#?ClientAliveInterval', line: 'ClientAliveInterval 300' }
- { regexp: '^#?ClientAliveCountMax', line: 'ClientAliveCountMax 2' }
notify: restart sshd
tags: security
- name: Restrict SSH to specific users
lineinfile:
dest: /etc/ssh/sshd_config
line: "AllowUsers root {{ deploy_user }}"
state: present
notify: restart sshd
tags: security
# Firewall Configuration
- name: Configure UFW default policies
ufw:
policy: "{{ item.policy }}"
direction: "{{ item.direction }}"
loop:
- { policy: 'deny', direction: 'incoming' }
- { policy: 'allow', direction: 'outgoing' }
tags: firewall
- name: Allow SSH through firewall
ufw:
rule: allow
name: OpenSSH
tags: firewall
- name: Allow HTTP and HTTPS through firewall
ufw:
rule: allow
port: "{{ item }}"
proto: tcp
loop:
- 80
- 443
tags: firewall
- name: Enable UFW
ufw:
state: enabled
tags: firewall
# Fail2ban Configuration
- name: Configure fail2ban for SSH
copy:
dest: /etc/fail2ban/jail.local
content: |
[DEFAULT]
bantime = 3600
findtime = 600
maxretry = 3
[sshd]
enabled = true
port = ssh
filter = sshd
logpath = /var/log/auth.log
maxretry = 3
bantime = 3600
backup: true
notify: restart fail2ban
tags: security
# System Optimization
- name: Configure swappiness
sysctl:
name: vm.swappiness
value: '10'
state: present
tags: performance
- name: Configure filesystem parameters
sysctl:
name: "{{ item.name }}"
value: "{{ item.value }}"
state: present
loop:
- { name: 'fs.file-max', value: '2097152' }
- { name: 'net.core.somaxconn', value: '65535' }
- { name: 'net.ipv4.tcp_max_syn_backlog', value: '65535' }
tags: performance
# Time Synchronization
- name: Install and configure NTP
apt:
name: ntp
state: present
tags: system
- name: Ensure NTP is running and enabled
systemd:
name: ntp
state: started
enabled: true
tags: system
# Directory Structure
- name: Create application directories
file:
path: "{{ item }}"
state: directory
owner: "{{ deploy_user }}"
group: "{{ deploy_user }}"
mode: '0755'
loop:
- /var/www
- /var/www/html
- /var/www/backups
- /var/www/logs
- /var/log/custom-php-framework
tags: directories
# Log Rotation
- name: Configure log rotation for application
copy:
dest: /etc/logrotate.d/custom-php-framework
content: |
/var/log/custom-php-framework/*.log {
daily
missingok
rotate 30
compress
notifempty
create 644 www-data www-data
postrotate
/bin/systemctl reload-or-restart docker || true
endscript
}
tags: logs
handlers:
- name: restart sshd
systemd:
name: sshd
state: restarted
- name: restart fail2ban
systemd:
name: fail2ban
state: restarted
post_tasks:
- name: Display setup completion info
debug:
msg:
- "Initial server setup completed successfully!"
- "Deploy user '{{ deploy_user }}' created with sudo privileges"
- "SSH key authentication configured"
- "Firewall enabled (SSH, HTTP, HTTPS allowed)"
- "Fail2ban configured for SSH protection"
- "Next: Update inventory to use deploy user and run infrastructure setup"
tags: always

---
# Git-based Rollback Playbook
# Rolls back to the previous release by switching the symlink
#
# Usage:
# ansible-playbook -i inventories/production/hosts.yml playbooks/rollback-git-based.yml
# ansible-playbook -i inventories/production/hosts.yml playbooks/rollback-git-based.yml --extra-vars "rollback_to=20241025123456"
- name: Rollback Custom PHP Framework (Git-based)
hosts: web_servers
become: true
vars:
app_name: michaelschiemer
app_user: deploy
app_group: deploy
app_base_path: "/var/www/{{ app_name }}"
releases_path: "{{ app_base_path }}/releases"
current_path: "{{ app_base_path }}/current"
pre_tasks:
- name: Check if deployment lock exists
stat:
path: "{{ app_base_path }}/.deploy.lock"
register: deploy_lock
- name: Fail if deployment is in progress
fail:
msg: "Cannot rollback - deployment in progress"
when: deploy_lock.stat.exists
- name: Create rollback lock
file:
path: "{{ app_base_path }}/.rollback.lock"
state: touch
owner: "{{ app_user }}"
group: "{{ app_group }}"
tasks:
- name: Get current release
stat:
path: "{{ current_path }}"
register: current_release
- name: Fail if no current release exists
fail:
msg: "No current release found at {{ current_path }}"
when: not current_release.stat.exists
- name: Get list of all releases
find:
paths: "{{ releases_path }}"
file_type: directory
register: all_releases
- name: Sort releases by creation time (newest first)
set_fact:
sorted_releases: "{{ all_releases.files | sort(attribute='ctime', reverse=true) }}"
- name: Fail if no previous release is available
fail:
msg: "No previous release to roll back to"
when: rollback_to is not defined and (sorted_releases | length) < 2
- name: Determine target release for rollback
set_fact:
target_release: "{{ rollback_to if rollback_to is defined else sorted_releases[1].path }}"
- name: Verify target release exists
stat:
path: "{{ target_release }}"
register: target_release_stat
- name: Fail if target release doesn't exist
fail:
msg: "Target release not found: {{ target_release }}"
when: not target_release_stat.stat.exists
- name: Display rollback information
debug:
msg:
- "Current release: {{ current_release.stat.lnk_source }}"
- "Rolling back to: {{ target_release }}"
- name: Switch symlink to previous release
file:
src: "{{ target_release }}"
dest: "{{ current_path }}"
state: link
owner: "{{ app_user }}"
group: "{{ app_group }}"
force: yes
- name: Wait for application to be ready
wait_for:
timeout: 5
delegate_to: localhost
- name: Health check after rollback
uri:
url: "http://{{ ansible_host }}/health/summary"
method: GET
return_content: yes
status_code: 200
register: health_check
retries: 3
delay: 5
until: health_check.status == 200
ignore_errors: yes
- name: Log rollback
lineinfile:
path: "{{ app_base_path }}/deploy.log"
line: "[{{ ansible_date_time.iso8601 }}] ROLLBACK: {{ current_release.stat.lnk_source }} -> {{ target_release }} - Health: {{ health_check.status | default('FAILED') }}"
create: yes
- name: Display rollback result
debug:
msg:
- "=========================================="
- "Rollback completed"
- "Previous: {{ current_release.stat.lnk_source }}"
- "Current: {{ target_release }}"
- "Health check: {{ health_check.status | default('FAILED') }}"
- "=========================================="
post_tasks:
- name: Remove rollback lock
file:
path: "{{ app_base_path }}/.rollback.lock"
state: absent
rescue:
- name: Remove rollback lock on failure
file:
path: "{{ app_base_path }}/.rollback.lock"
state: absent
- name: Log rollback failure
lineinfile:
path: "{{ app_base_path }}/deploy.log"
line: "[{{ ansible_date_time.iso8601 }}] ROLLBACK FAILED"
create: yes
- name: Fail with error message
fail:
msg: "Rollback failed"
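The default rollback target above is `sorted_releases[1]`, i.e. the second-newest release directory. Because release names are Unix timestamps, lexical and chronological order coincide; a hedged shell sketch of the same selection with made-up timestamps:

```shell
# Sketch of picking the rollback target: sort timestamped release
# directories newest-first and take the second entry.
releases=$(mktemp -d)
mkdir -p "$releases/1761560000" "$releases/1761565432" "$releases/1761566515"

# Timestamp names sort chronologically, so sort -r puts the current
# release first and the previous release second.
target=$(ls -1 "$releases" | sort -r | sed -n '2p')
echo "$target"
```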

---
# Application Rollback Playbook
# Rolls back to a specific image tag: resets IMAGE_TAG and recreates services
- name: Rollback Custom PHP Framework Application
hosts: web_servers
become: true
gather_facts: false
vars:
app_path: "/var/www/html"
domain_name: "{{ DOMAIN_NAME | default('michaelschiemer.de') }}"
rollback_tag: "{{ ROLLBACK_TAG | default('') }}"
compose_project: "{{ compose_project_name | default(app_path | basename) }}"
pre_tasks:
- name: Validate ROLLBACK_TAG is provided
ansible.builtin.fail:
msg: "Set ROLLBACK_TAG to a valid image tag."
when: rollback_tag | length == 0
tasks:
- name: Ensure environment file exists
ansible.builtin.stat:
path: "{{ app_path }}/.env.{{ environment }}"
register: env_file
- name: Fail if environment file is missing
ansible.builtin.fail:
msg: "Environment file is missing: {{ app_path }}/.env.{{ environment }}"
when: not env_file.stat.exists
- name: Write IMAGE_TAG to env file
ansible.builtin.lineinfile:
path: "{{ app_path }}/.env.{{ environment }}"
regexp: '^IMAGE_TAG='
line: "IMAGE_TAG={{ rollback_tag }}"
create: no
backrefs: false
mode: "0600"
- name: Recreate services with rollback tag
community.docker.docker_compose_v2:
project_src: "{{ app_path }}"
files:
- docker-compose.yml
- "docker-compose.{{ environment }}.yml"
env_files:
- ".env.{{ environment }}"
pull: false
build: false
state: present
recreate: smart
remove_orphans: true
timeout: 300
- name: Wait for PHP container to be healthy (label-based)
community.docker.docker_container_info:
filters:
label:
- "com.docker.compose.service=php"
- "com.docker.compose.project={{ compose_project }}"
register: php_info
retries: 20
delay: 10
until: php_info.containers is defined and
(php_info.containers | length) > 0 and
(
(php_info.containers[0].State.Health is defined and php_info.containers[0].State.Health.Status == "healthy")
or
php_info.containers[0].State.Status == "running"
)
- name: Verify application HTTP health
ansible.builtin.uri:
url: "https://{{ domain_name }}/health"
method: GET
status_code: 200
timeout: 30
validate_certs: true
register: http_health
retries: 15
delay: 10
until: http_health.status == 200
post_tasks:
- name: Rollback completed
ansible.builtin.debug:
msg:
- "Rollback successful"
- "New active image tag: {{ rollback_tag }}"
- name: Rollback Custom PHP Framework Application
hosts: web_servers
become: true
gather_facts: true
vars:
app_path: "/var/www/html"
rollback_tag: "{{ ROLLBACK_TAG | mandatory }}"
domain_name: "{{ DOMAIN_NAME | default('michaelschiemer.de') }}"
environment: "{{ ENV | default('production') }}"
pre_tasks:
- name: Verify rollback requirements
assert:
that:
- rollback_tag is defined
- rollback_tag != ''
- rollback_tag != 'latest'
fail_msg: "Rollback requires specific ROLLBACK_TAG (not 'latest')"
tags: always
- name: Check if target tag exists locally
community.docker.docker_image_info:
name: "{{ project_name | default('michaelschiemer') }}:{{ rollback_tag }}"
register: rollback_image_info
ignore_errors: true
tags: always
- name: Pull rollback image if not available locally
community.docker.docker_image:
name: "{{ project_name | default('michaelschiemer') }}:{{ rollback_tag }}"
source: pull
force_source: true
when: rollback_image_info.images | length == 0
tags: always
- name: Store current deployment for emergency recovery
shell: |
if [ -f {{ app_path }}/.env.{{ environment }} ]; then
grep '^IMAGE_TAG=' {{ app_path }}/.env.{{ environment }} | cut -d'=' -f2 > {{ app_path }}/.emergency_recovery_tag || echo 'none'
fi
tags: backup
tasks:
- name: Update environment with rollback tag
template:
src: "{{ environment }}.env.template"
dest: "{{ app_path }}/.env.{{ environment }}"
owner: deploy
group: deploy
mode: '0600'
backup: true
vars:
IMAGE_TAG: "{{ rollback_tag }}"
DOMAIN_NAME: "{{ domain_name }}"
no_log: true
tags: rollback
- name: Stop current services
community.docker.docker_compose_v2:
project_src: "{{ app_path }}"
files:
- docker-compose.yml
- "docker-compose.{{ environment }}.yml"
env_files:
- ".env.{{ environment }}"
state: stopped
timeout: 120
tags: rollback
- name: Deploy rollback version
community.docker.docker_compose_v2:
project_src: "{{ app_path }}"
files:
- docker-compose.yml
- "docker-compose.{{ environment }}.yml"
env_files:
- ".env.{{ environment }}"
pull: "never" # Use local image
build: "never"
state: present
recreate: "always" # Force recreate for rollback
timeout: 300
tags: rollback
- name: Wait for containers to be healthy after rollback
community.docker.docker_container_info:
name: "{{ item }}"
register: container_info
retries: 15
delay: 10
until: (container_info.container.State.Health.Status | default('')) == "healthy" or container_info.container.State.Status == "running"
loop:
- "{{ ansible_hostname }}_php_1"
- "{{ ansible_hostname }}_web_1"
- "{{ ansible_hostname }}_db_1"
- "{{ ansible_hostname }}_redis_1"
ignore_errors: true
tags: rollback
- name: Verify application health after rollback
uri:
url: "https://{{ domain_name }}/health"
method: GET
status_code: 200
timeout: 30
headers:
User-Agent: "Mozilla/5.0 (Ansible Rollback Check)"
validate_certs: true
retries: 10
delay: 15
tags: rollback
- name: Update successful rollback tag
copy:
content: "{{ rollback_tag }}"
dest: "{{ app_path }}/.last_successful_release"
owner: deploy
group: deploy
mode: '0644'
tags: rollback
post_tasks:
- name: Rollback success notification
debug:
msg:
- "Application rollback completed successfully"
- "Rolled back to: {{ rollback_tag }}"
- "Environment: {{ environment }}"
- "Domain: {{ domain_name }}"
- "Emergency recovery tag stored for further rollback if needed"
tags: always
- name: Log rollback event
lineinfile:
path: "{{ app_path }}/rollback.log"
line: "{{ ansible_date_time.iso8601 }} - Rollback to {{ rollback_tag }} from {{ environment }} completed successfully"
create: true
owner: deploy
group: deploy
mode: '0644'
tags: always

---
# Docker Setup Playbook
# Ensures Docker and Docker Compose are installed and configured
#
# Usage:
# ansible-playbook -i inventories/production/hosts.yml playbooks/setup-docker.yml
- name: Setup Docker for Production
hosts: web_servers
become: true
vars:
app_user: deploy
docker_compose_version: "2.24.0"
tasks:
# ==========================================
# 1. Verify Docker Installation
# ==========================================
- name: Check if Docker is installed
command: docker --version
register: docker_check
changed_when: false
failed_when: false
- name: Display Docker version
debug:
msg: "Docker is already installed: {{ docker_check.stdout }}"
when: docker_check.rc == 0
- name: Install Docker if not present
block:
- name: Update apt cache
apt:
update_cache: yes
- name: Install prerequisites
apt:
name:
- apt-transport-https
- ca-certificates
- curl
- gnupg
- lsb-release
state: present
- name: Add Docker GPG key
apt_key:
url: https://download.docker.com/linux/ubuntu/gpg
state: present
- name: Add Docker repository
apt_repository:
repo: "deb [arch=amd64] https://download.docker.com/linux/ubuntu {{ ansible_distribution_release }} stable"
state: present
- name: Install Docker
apt:
name:
- docker-ce
- docker-ce-cli
- containerd.io
state: present
update_cache: yes
when: docker_check.rc != 0
# ==========================================
# 2. Configure Docker
# ==========================================
- name: Add deploy user to docker group
user:
name: "{{ app_user }}"
groups: docker
append: yes
- name: Ensure Docker service is enabled and started
systemd:
name: docker
enabled: yes
state: started
# ==========================================
# 3. Install Docker Compose Plugin
# ==========================================
- name: Check if Docker Compose plugin is installed
command: docker compose version
register: compose_check
changed_when: false
failed_when: false
- name: Display Docker Compose version
debug:
msg: "Docker Compose is already installed: {{ compose_check.stdout }}"
when: compose_check.rc == 0
- name: Install Docker Compose plugin if not present
apt:
name: docker-compose-plugin
state: present
update_cache: yes
when: compose_check.rc != 0
# ==========================================
# 4. Configure Docker Daemon
# ==========================================
- name: Create Docker daemon configuration
copy:
dest: /etc/docker/daemon.json
content: |
{
"log-driver": "json-file",
"log-opts": {
"max-size": "10m",
"max-file": "3"
},
"live-restore": true
}
owner: root
group: root
mode: '0644'
notify: Restart Docker
# ==========================================
# 5. Firewall Configuration
# ==========================================
- name: Allow HTTP traffic
ufw:
rule: allow
port: '80'
proto: tcp
- name: Allow HTTPS traffic
ufw:
rule: allow
port: '443'
proto: tcp
# ==========================================
# 6. Verification
# ==========================================
- name: Get Docker info
command: docker info
register: docker_info
changed_when: false
- name: Get Docker Compose version
command: docker compose version
register: compose_version
changed_when: false
- name: Display setup summary
debug:
msg:
- "=========================================="
- "Docker Setup Complete"
- "=========================================="
- "Docker Version: {{ docker_check.stdout }}"
- "Docker Compose: {{ compose_version.stdout }}"
- "User '{{ app_user }}' added to docker group"
- "Firewall: HTTP (80) and HTTPS (443) allowed"
- "=========================================="
- ""
- "Next Steps:"
- "1. Log out and back in for docker group to take effect"
- "2. Run deployment playbook to start containers"
handlers:
- name: Restart Docker
systemd:
name: docker
state: restarted
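Since an invalid `/etc/docker/daemon.json` prevents the Docker daemon from restarting, it is worth validating the file before the "Restart Docker" handler fires; a minimal sketch (jq would work equally well, `python3` assumed available):

```shell
# Pre-flight check: confirm the daemon.json content written in step 4
# parses as JSON before restarting the Docker daemon.
conf=$(mktemp)
cat > "$conf" <<'EOF'
{
  "log-driver": "json-file",
  "log-opts": { "max-size": "10m", "max-file": "3" },
  "live-restore": true
}
EOF
python3 -c "import json,sys; json.load(open(sys.argv[1])); print('valid')" "$conf"
```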

---
# Optional CDN Update Playbook
# Only runs when CDN_UPDATE=true is passed
- name: Update CDN Configuration (Optional)
hosts: web_servers
become: true
gather_facts: true
vars:
domain_name: "{{ DOMAIN_NAME | default('michaelschiemer.de') }}"
cdn_enabled: "{{ CDN_UPDATE | default(false) | bool }}"
nginx_conf_path: "/etc/nginx/sites-available/{{ domain_name }}"
pre_tasks:
- name: Check if CDN update is enabled
debug:
msg: "CDN update is {{ 'enabled' if cdn_enabled else 'disabled' }}"
tags: always
- name: Skip CDN tasks if not enabled
meta: end_play
when: not cdn_enabled
tags: always
tasks:
- name: Check if Nginx configuration exists
stat:
path: "{{ nginx_conf_path }}"
register: nginx_config_check
tags: cdn
- name: Fail if Nginx config not found
fail:
msg: "Nginx configuration not found at {{ nginx_conf_path }}"
when: not nginx_config_check.stat.exists
tags: cdn
- name: Backup current Nginx configuration
copy:
src: "{{ nginx_conf_path }}"
dest: "{{ nginx_conf_path }}.backup.{{ ansible_date_time.epoch }}"
remote_src: true
owner: root
group: root
mode: '0644'
tags: cdn
- name: Update Nginx configuration for CDN
lineinfile:
path: "{{ nginx_conf_path }}"
regexp: '^\s*add_header\s+X-CDN-Cache'
line: ' add_header X-CDN-Cache "ENABLED" always;'
insertafter: '^\s*add_header\s+X-Frame-Options'
backup: true
notify: reload nginx
tags: cdn
- name: Add CDN cache headers
blockinfile:
path: "{{ nginx_conf_path }}"
marker: "# {mark} CDN CACHE HEADERS"
insertafter: "location ~ \\.(?:css|js|woff2?|svg|gif|ico|jpe?g|png)\\$ {"
block: |
expires 1y;
add_header Cache-Control "public, immutable";
add_header X-CDN-Served "true";
backup: true
notify: reload nginx
tags: cdn
- name: Validate Nginx configuration
command: nginx -t
register: nginx_test
failed_when: nginx_test.rc != 0
tags: cdn
- name: CDN configuration success
debug:
msg:
- "CDN configuration updated successfully"
- "Domain: {{ domain_name }}"
- "Nginx config: {{ nginx_conf_path }}"
tags: cdn
handlers:
- name: reload nginx
systemd:
name: nginx
state: reloaded
tags: cdn
post_tasks:
- name: Verify CDN headers are working
uri:
url: "https://{{ domain_name }}/favicon.ico"
method: HEAD
headers:
User-Agent: "Mozilla/5.0 (Ansible CDN Check)"
return_content: false
status_code: [200, 404] # 404 is ok for favicon test
register: cdn_test
tags: cdn
- name: CDN verification results
debug:
msg:
- "CDN Test Results:"
- "Status: {{ cdn_test.status }}"
- "Cache-Control: {{ cdn_test.cache_control | default('Not set') }}"
- "X-CDN-Served: {{ cdn_test.x_cdn_served | default('Not set') }}"
when: cdn_test is defined
tags: cdn

#!/bin/bash
# Restart specific Docker container on production server
# Usage: ./restart.sh [container_name]
# Example: ./restart.sh php
# Without argument: restarts all containers
CONTAINER="${1:-all}"
echo "🔄 Restarting container(s) on production server..."
echo ""
if [ "$CONTAINER" = "all" ]; then
echo "Restarting ALL containers..."
ssh -i ~/.ssh/production deploy@michaelschiemer.de "cd /home/deploy/michaelschiemer/current && docker compose -f docker-compose.yml -f docker-compose.production.yml restart"
else
echo "Restarting container: $CONTAINER"
ssh -i ~/.ssh/production deploy@michaelschiemer.de "docker restart $CONTAINER"
fi
echo ""
echo "✅ Done!"

---
# Base Security Role Default Variables
# SSH Configuration
ssh_port: 22
ssh_permit_root_login: false
ssh_password_authentication: false
ssh_pubkey_authentication: true
ssh_challenge_response_authentication: false
ssh_gss_api_authentication: false
ssh_x11_forwarding: false
ssh_max_auth_tries: 3
ssh_client_alive_interval: 300
ssh_client_alive_count_max: 2
ssh_max_sessions: 2
ssh_tcp_keep_alive: true
ssh_compression: false
ssh_use_dns: false
ssh_permit_tunnel: false
ssh_permit_user_environment: false
ssh_banner: /etc/ssh/ssh_banner
# Allowed SSH users and groups
ssh_allowed_users:
- "{{ ansible_user }}"
- deploy
ssh_allowed_groups:
- sudo
- adm
# SSH Key Management
ssh_authorized_keys_exclusive: true
ssh_host_key_algorithms:
- ssh-ed25519
- ecdsa-sha2-nistp521
- ecdsa-sha2-nistp384
- ecdsa-sha2-nistp256
- rsa-sha2-512
- rsa-sha2-256
# UFW Firewall Configuration
ufw_enabled: true
ufw_default_incoming: deny
ufw_default_outgoing: allow
ufw_default_forward: deny
ufw_logging: "on"
ufw_reset: false
# Default firewall rules
ufw_rules:
- rule: allow
port: "{{ ssh_port }}"
proto: tcp
comment: "SSH"
- rule: allow
port: "80"
proto: tcp
comment: "HTTP"
- rule: allow
port: "443"
proto: tcp
comment: "HTTPS"
# Fail2ban Configuration
# Note: do not self-reference the variable here — that causes a recursive
# templating loop. Role defaults are already overridable from inventory.
fail2ban_enabled: true
fail2ban_loglevel: INFO
fail2ban_socket: /var/run/fail2ban/fail2ban.sock
fail2ban_pidfile: /var/run/fail2ban/fail2ban.pid
# Default Fail2ban jails
fail2ban_jails:
- name: sshd
enabled: true
port: "{{ ssh_port }}"
filter: sshd
logpath: /var/log/auth.log
maxretry: 3
findtime: 600
bantime: 1800
backend: systemd
- name: nginx-http-auth
enabled: true
port: http,https
filter: nginx-http-auth
logpath: /var/log/nginx/error.log
maxretry: 3
findtime: 600
bantime: 1800
- name: nginx-limit-req
enabled: true
port: http,https
filter: nginx-limit-req
logpath: /var/log/nginx/error.log
maxretry: 5
findtime: 600
bantime: 1800
# System Security Settings
security_kernel_parameters:
# Network security
net.ipv4.tcp_syncookies: 1
net.ipv4.ip_forward: 0
net.ipv4.conf.all.send_redirects: 0
net.ipv4.conf.default.send_redirects: 0
net.ipv4.conf.all.accept_redirects: 0
net.ipv4.conf.default.accept_redirects: 0
net.ipv4.conf.all.accept_source_route: 0
net.ipv4.conf.default.accept_source_route: 0
net.ipv4.conf.all.log_martians: 1
net.ipv4.conf.default.log_martians: 1
net.ipv4.icmp_echo_ignore_broadcasts: 1
net.ipv4.icmp_ignore_bogus_error_responses: 1
net.ipv4.conf.all.rp_filter: 1
net.ipv4.conf.default.rp_filter: 1
# IPv6 security
net.ipv6.conf.all.accept_redirects: 0
net.ipv6.conf.default.accept_redirects: 0
net.ipv6.conf.all.accept_ra: 0
net.ipv6.conf.default.accept_ra: 0
# Kernel security
kernel.randomize_va_space: 2
kernel.kptr_restrict: 2
kernel.dmesg_restrict: 1
kernel.printk: "3 3 3 3"
kernel.unprivileged_bpf_disabled: 1
net.core.bpf_jit_harden: 2
# Package updates and security
security_packages:
- fail2ban
- ufw
- unattended-upgrades
- apt-listchanges
- needrestart
- rkhunter
- chkrootkit
- lynis
# Automatic security updates
unattended_upgrades_enabled: true
unattended_upgrades_automatic_reboot: false
unattended_upgrades_automatic_reboot_time: "06:00"
unattended_upgrades_origins_patterns:
- origin=Ubuntu,archive=${distro_codename}-security
- origin=Ubuntu,archive=${distro_codename}-updates
# System hardening
disable_unused_services:
- rpcbind
- nfs-common
- portmap
- xinetd
- telnet
- rsh-server
- rsh-redone-server
# User and permission settings
security_umask: "027"
security_login_timeout: 300

View File

@@ -1,67 +0,0 @@
---
# Base Security Role Handlers
- name: restart ssh
service:
name: ssh
state: restarted
listen: restart ssh
- name: reload ssh
service:
name: ssh
state: reloaded
listen: reload ssh
- name: restart fail2ban
service:
name: fail2ban
state: restarted
listen: restart fail2ban
- name: reload fail2ban
service:
name: fail2ban
state: reloaded
listen: reload fail2ban
- name: restart auditd
service:
name: auditd
state: restarted
listen: restart auditd
- name: reload systemd
systemd:
daemon_reload: true
listen: reload systemd
- name: restart ufw
service:
name: ufw
state: restarted
listen: restart ufw
- name: reload ufw
command: ufw --force reload
listen: reload ufw
- name: restart unattended-upgrades
service:
name: unattended-upgrades
state: restarted
listen: restart unattended-upgrades
- name: update aide database
command: aideinit
listen: update aide database
- name: restart rsyslog
service:
name: rsyslog
state: restarted
listen: restart rsyslog
- name: update rkhunter
command: rkhunter --propupd
listen: update rkhunter

View File

@@ -1,31 +0,0 @@
---
galaxy_info:
role_name: base-security
author: Custom PHP Framework Team
description: Base security hardening for servers
company: michaelschiemer.de
license: MIT
min_ansible_version: 2.12
platforms:
- name: Ubuntu
versions:
- "20.04"
- "22.04"
- "24.04"
- name: Debian
versions:
- "11"
- "12"
galaxy_tags:
- security
- ssh
- firewall
- fail2ban
- hardening
dependencies: []  # No external dependencies - keep it self-contained
collections:
- community.general
- ansible.posix

View File

@@ -1,143 +0,0 @@
---
# Fail2ban Configuration
- name: Install fail2ban
package:
name: fail2ban
state: present
tags:
- fail2ban
- packages
- name: Create fail2ban configuration directory
file:
path: /etc/fail2ban/jail.d
state: directory
owner: root
group: root
mode: '0755'
tags:
- fail2ban
- directories
- name: Configure fail2ban main settings
template:
src: fail2ban.local.j2
dest: /etc/fail2ban/fail2ban.local
owner: root
group: root
mode: '0644'
backup: true
notify: restart fail2ban
tags:
- fail2ban
- config
- name: Configure fail2ban default jail settings
template:
src: jail.local.j2
dest: /etc/fail2ban/jail.local
owner: root
group: root
mode: '0644'
backup: true
notify: restart fail2ban
tags:
- fail2ban
- config
- jail
- name: Create custom fail2ban jails
template:
src: custom-jails.local.j2
dest: /etc/fail2ban/jail.d/custom-jails.local
owner: root
group: root
mode: '0644'
backup: true
notify: restart fail2ban
tags:
- fail2ban
- jails
- custom
- name: Create custom fail2ban filters
template:
src: "{{ item }}.conf.j2"
dest: "/etc/fail2ban/filter.d/{{ item }}.conf"
owner: root
group: root
mode: '0644'
loop:
- nginx-limit-req
- nginx-http-auth
- php-framework
notify: restart fail2ban
tags:
- fail2ban
- filters
- name: Create fail2ban action for PHP Framework
template:
src: php-framework-action.conf.j2
dest: /etc/fail2ban/action.d/php-framework-notify.conf
owner: root
group: root
mode: '0644'
notify: restart fail2ban
tags:
- fail2ban
- actions
- name: Ensure fail2ban service is enabled and running
service:
name: fail2ban
state: started
enabled: true
tags:
- fail2ban
- service
- name: Check fail2ban status
command: fail2ban-client status
register: fail2ban_status
changed_when: false
tags:
- fail2ban
- status
- name: Display fail2ban jail status
command: fail2ban-client status {{ item.name }}
register: jail_status
changed_when: false
loop: "{{ fail2ban_jails }}"
when: item.enabled | bool
tags:
- fail2ban
- status
- jails
- name: Create fail2ban log rotation
template:
src: fail2ban-logrotate.j2
dest: /etc/logrotate.d/fail2ban
owner: root
group: root
mode: '0644'
tags:
- fail2ban
- logrotate
- name: Configure fail2ban systemd service override
template:
src: fail2ban-override.conf.j2
dest: /etc/systemd/system/fail2ban.service.d/override.conf
owner: root
group: root
mode: '0644'
notify:
- reload systemd
- restart fail2ban
tags:
- fail2ban
- systemd

View File

@@ -1,142 +0,0 @@
---
# UFW Firewall Configuration
- name: Reset UFW to defaults
ufw:
state: reset
when: ufw_reset | bool
tags:
- firewall
- reset
- name: Set UFW default policies
ufw:
policy: "{{ item.policy }}"
direction: "{{ item.direction }}"
loop:
- { policy: "{{ ufw_default_incoming }}", direction: incoming }
- { policy: "{{ ufw_default_outgoing }}", direction: outgoing }
- { policy: "{{ ufw_default_forward }}", direction: routed }
tags:
- firewall
- policy
- name: Configure UFW logging
ufw:
logging: "{{ ufw_logging }}"
tags:
- firewall
- logging
- name: Allow SSH before enabling firewall
ufw:
rule: allow
port: "{{ ssh_port }}"
proto: tcp
comment: "SSH Access - Priority"
tags:
- firewall
- ssh
- name: Configure UFW rules
ufw:
rule: "{{ item.rule }}"
port: "{{ item.port | default(omit) }}"
proto: "{{ item.proto | default(omit) }}"
src: "{{ item.src | default(omit) }}"
dest: "{{ item.dest | default(omit) }}"
interface: "{{ item.interface | default(omit) }}"
direction: "{{ item.direction | default(omit) }}"
comment: "{{ item.comment | default(omit) }}"
loop: "{{ ufw_rules }}"
tags:
- firewall
- rules
- name: Add environment-specific firewall rules
ufw:
rule: "{{ item.rule }}"
port: "{{ item.port | default(omit) }}"
proto: "{{ item.proto | default(omit) }}"
src: "{{ item.src | default(omit) }}"
comment: "{{ item.comment | default(omit) }}"
loop: "{{ environment_specific_rules | default([]) }}"
tags:
- firewall
- rules
- environment
- name: Configure production-specific strict rules
ufw:
rule: "{{ item.rule }}"
port: "{{ item.port | default(omit) }}"
proto: "{{ item.proto | default(omit) }}"
src: "{{ item.src | default(omit) }}"
comment: "{{ item.comment | default(omit) }}"
loop:
- rule: deny
port: "3306"
proto: tcp
comment: "Block external MySQL access"
- rule: deny
port: "6379"
proto: tcp
comment: "Block external Redis access"
- rule: deny
port: "9090"
proto: tcp
comment: "Block external Prometheus access"
- rule: limit
port: "{{ ssh_port }}"
proto: tcp
comment: "Rate limit SSH connections"
when: environment == 'production' and firewall_strict_mode | default(false) | bool
tags:
- firewall
- production
- strict
- name: Allow Docker container communication
ufw:
rule: allow
interface: docker0
direction: in
comment: "Docker container communication"
ignore_errors: true # Docker may not be installed yet
tags:
- firewall
- docker
# Note: UFW already permits established/related connections through its
# default rules in /etc/ufw/before.rules, so no explicit task is needed.
# An "allow from any to any" rule here would silently nullify the default
# deny-incoming policy.
- name: Enable UFW firewall
ufw:
state: enabled
tags:
- firewall
- enable
- name: Check UFW status
command: ufw status verbose
register: ufw_status
changed_when: false
tags:
- firewall
- status
- name: Display UFW status
debug:
var: ufw_status.stdout_lines
tags:
- firewall
- status

View File

@@ -1,69 +0,0 @@
---
# Base Security Role - Main Tasks
- name: Include OS-specific variables
include_vars: "{{ ansible_os_family }}.yml"
tags:
- security
- config
- name: Update package cache
package:
update_cache: true
cache_valid_time: 3600
tags:
- security
- packages
- name: Install security packages
package:
name: "{{ security_packages }}"
state: present
tags:
- security
- packages
- name: Configure system security settings
include_tasks: system-hardening.yml
tags:
- security
- hardening
- name: Configure SSH security
include_tasks: ssh-hardening.yml
tags:
- security
- ssh
- name: Configure UFW firewall
include_tasks: firewall.yml
when: ufw_enabled | bool
tags:
- security
- firewall
- name: Configure Fail2ban
include_tasks: fail2ban.yml
when: fail2ban_enabled | bool
tags:
- security
- fail2ban
- name: Configure automatic security updates
include_tasks: security-updates.yml
when: unattended_upgrades_enabled | bool
tags:
- security
- updates
- name: Disable unused services
include_tasks: service-hardening.yml
tags:
- security
- services
- name: Apply security audit recommendations
include_tasks: security-audit.yml
tags:
- security
- audit

View File

@@ -1,185 +0,0 @@
---
# Security Audit and Compliance Checks
- name: Install security audit tools
package:
name: "{{ item }}"
state: present
loop:
- lynis
- rkhunter
- chkrootkit
- debsums
- aide
tags:
- security
- audit
- tools
- name: Initialize AIDE database
command: aideinit
args:
creates: /var/lib/aide/aide.db.new
tags:
- security
- aide
- integrity
- name: Move AIDE database to production location
command: mv /var/lib/aide/aide.db.new /var/lib/aide/aide.db
args:
creates: /var/lib/aide/aide.db
tags:
- security
- aide
- integrity
- name: Configure AIDE for file integrity monitoring
template:
src: aide.conf.j2
dest: /etc/aide/aide.conf
owner: root
group: root
mode: '0600'
backup: true
tags:
- security
- aide
- config
- name: Schedule AIDE integrity checks
cron:
name: "AIDE integrity check"
minute: "0"
hour: "3"
job: "/usr/bin/aide --check 2>&1 | mail -s 'AIDE Integrity Check - {{ inventory_hostname }}' {{ ssl_email }}"
user: root
tags:
- security
- aide
- cron
- name: Configure rkhunter
template:
src: rkhunter.conf.j2
dest: /etc/rkhunter.conf
owner: root
group: root
mode: '0644'
backup: true
tags:
- security
- rkhunter
- config
- name: Update rkhunter database
command: rkhunter --update
changed_when: false
tags:
- security
- rkhunter
- update
- name: Configure rkhunter properties
command: rkhunter --propupd
changed_when: false
tags:
- security
- rkhunter
- properties
- name: Schedule rkhunter scans
cron:
name: "RKhunter rootkit scan"
minute: "30"
hour: "3"
job: "/usr/bin/rkhunter --cronjob --report-warnings-only 2>&1 | mail -s 'RKhunter Scan - {{ inventory_hostname }}' {{ ssl_email }}"
user: root
tags:
- security
- rkhunter
- cron
- name: Configure Lynis for system auditing
template:
src: lynis.conf.j2
dest: /etc/lynis/default.prf
owner: root
group: root
mode: '0644'
tags:
- security
- lynis
- config
- name: Run initial security audit with Lynis
command: lynis audit system --quick --quiet
register: lynis_audit
changed_when: false
tags:
- security
- lynis
- audit
- name: Schedule weekly Lynis security audits
cron:
name: "Lynis security audit"
minute: "0"
hour: "4"
weekday: "0"
job: "/usr/sbin/lynis audit system --cronjob | mail -s 'Lynis Security Audit - {{ inventory_hostname }}' {{ ssl_email }}"
user: root
tags:
- security
- lynis
- cron
- name: Create security monitoring script
template:
src: security-monitor.sh.j2
dest: /usr/local/bin/security-monitor.sh
owner: root
group: root
mode: '0755'
tags:
- security
- monitoring
- scripts
- name: Schedule security monitoring
cron:
name: "Security monitoring"
minute: "*/15"
job: "/usr/local/bin/security-monitor.sh"
user: root
tags:
- security
- monitoring
- cron
- name: Create security incident response script
template:
src: security-incident.sh.j2
dest: /usr/local/bin/security-incident.sh
owner: root
group: root
mode: '0755'
tags:
- security
- incident
- response
- name: Verify system security configuration
command: "{{ item.command }}"
register: security_checks
changed_when: false
failed_when: security_checks.rc != 0 and item.required | default(true)
loop:
- { command: "sshd -t", name: "SSH configuration" }
- { command: "ufw status", name: "UFW firewall status", required: false }
- { command: "fail2ban-client status", name: "Fail2ban status", required: false }
- { command: "systemctl is-active auditd", name: "Audit daemon", required: false }
tags:
- security
- verification
- validation

View File

@@ -1,144 +0,0 @@
---
# Automatic Security Updates Configuration
- name: Install unattended-upgrades package
package:
name: unattended-upgrades
state: present
tags:
- security
- updates
- packages
- name: Configure unattended-upgrades
template:
src: 50unattended-upgrades.j2
dest: /etc/apt/apt.conf.d/50unattended-upgrades
owner: root
group: root
mode: '0644'
backup: true
tags:
- security
- updates
- config
- name: Enable automatic updates
template:
src: 20auto-upgrades.j2
dest: /etc/apt/apt.conf.d/20auto-upgrades
owner: root
group: root
mode: '0644'
tags:
- security
- updates
- config
- name: Configure automatic reboot for kernel updates
lineinfile:
path: /etc/apt/apt.conf.d/50unattended-upgrades
regexp: '^Unattended-Upgrade::Automatic-Reboot\s+'
line: 'Unattended-Upgrade::Automatic-Reboot "{{ unattended_upgrades_automatic_reboot | lower }}";'
create: true
tags:
- security
- updates
- reboot
- name: Configure reboot time
lineinfile:
path: /etc/apt/apt.conf.d/50unattended-upgrades
regexp: '^Unattended-Upgrade::Automatic-Reboot-Time\s+'
line: 'Unattended-Upgrade::Automatic-Reboot-Time "{{ unattended_upgrades_automatic_reboot_time }}";'
when: unattended_upgrades_automatic_reboot | bool
tags:
- security
- updates
- reboot
- name: Configure email notifications for updates
lineinfile:
path: /etc/apt/apt.conf.d/50unattended-upgrades
regexp: '^Unattended-Upgrade::Mail\s+'
line: 'Unattended-Upgrade::Mail "{{ ssl_email }}";'
tags:
- security
- updates
- notifications
- name: Install apt-listchanges for change notifications
package:
name: apt-listchanges
state: present
tags:
- security
- updates
- packages
- name: Configure apt-listchanges
template:
src: listchanges.conf.j2
dest: /etc/apt/listchanges.conf
owner: root
group: root
mode: '0644'
tags:
- security
- updates
- notifications
- name: Install needrestart for service restart detection
package:
name: needrestart
state: present
tags:
- security
- updates
- packages
- name: Configure needrestart
template:
src: needrestart.conf.j2
dest: /etc/needrestart/needrestart.conf
owner: root
group: root
mode: '0644'
tags:
- security
- updates
- services
- name: Create update notification script
template:
src: update-notification.sh.j2
dest: /usr/local/bin/update-notification.sh
owner: root
group: root
mode: '0755'
tags:
- security
- updates
- scripts
- name: Schedule regular security updates check
cron:
name: "Security updates check"
minute: "0"
hour: "2"
job: "/usr/bin/unattended-upgrade --dry-run && /usr/local/bin/update-notification.sh"
user: root
tags:
- security
- updates
- cron
- name: Verify unattended-upgrades service
service:
name: unattended-upgrades
state: started
enabled: true
tags:
- security
- updates
- service

View File

@@ -1,149 +0,0 @@
---
# Service Hardening and Unused Service Removal
- name: Stop and disable unused services
service:
name: "{{ item }}"
state: stopped
enabled: false
loop: "{{ disable_unused_services }}"
ignore_errors: true
tags:
- security
- services
- cleanup
- name: Remove unused service packages
package:
name: "{{ item }}"
state: absent
loop: "{{ disable_unused_services }}"
ignore_errors: true
tags:
- security
- services
- packages
- name: Mask dangerous services
systemd:
name: "{{ item }}"
masked: true
loop:
- rpcbind.service
- rpcbind.socket
- nfs-server.service
- nfs-lock.service
- nfs-idmap.service
ignore_errors: true
tags:
- security
- services
- systemd
- name: Configure service security settings
template:
src: service-security.conf.j2
dest: /etc/systemd/system/{{ item }}.service.d/security.conf
owner: root
group: root
mode: '0644'
loop:
- nginx
- php8.4-fpm
notify: reload systemd
tags:
- security
- services
- systemd
- name: Create systemd security override directory
file:
path: "/etc/systemd/system/{{ item }}.service.d"
state: directory
owner: root
group: root
mode: '0755'
loop:
- nginx
- php8.4-fpm
- docker
- fail2ban  # required: the restart-policy lineinfile writes into fail2ban.service.d
tags:
- security
- services
- directories
- name: Harden Docker service (if installed)
template:
src: docker-security.conf.j2
dest: /etc/systemd/system/docker.service.d/security.conf
owner: root
group: root
mode: '0644'
notify: reload systemd
ignore_errors: true
tags:
- security
- services
- docker
- name: Configure service restart policies
lineinfile:
path: /etc/systemd/system/{{ item.service }}.service.d/restart.conf
regexp: '^Restart='
line: 'Restart={{ item.policy }}'
create: true
loop:
- { service: "nginx", policy: "always" }
- { service: "php8.4-fpm", policy: "always" }
- { service: "fail2ban", policy: "always" }
notify: reload systemd
tags:
- security
- services
- reliability
- name: Set service timeouts for security
lineinfile:
path: /etc/systemd/system/{{ item.service }}.service.d/timeout.conf
regexp: '^TimeoutStopSec='
line: 'TimeoutStopSec={{ item.timeout }}'
create: true
loop:
- { service: "nginx", timeout: "30s" }
- { service: "php8.4-fpm", timeout: "30s" }
- { service: "docker", timeout: "60s" }
notify: reload systemd
tags:
- security
- services
- timeouts
- name: Enable core security services
service:
name: "{{ item }}"
state: started
enabled: true
loop:
- ufw
- fail2ban
- auditd
- unattended-upgrades
tags:
- security
- services
- enable
- name: Verify critical service status
command: systemctl is-active {{ item }}
register: service_status
changed_when: false
failed_when: service_status.rc != 0
loop:
- ssh
- ufw
- fail2ban
- auditd
tags:
- security
- services
- verification

View File

@@ -1,119 +0,0 @@
---
# SSH Hardening Configuration
- name: Create SSH banner
copy:
content: |
**************************************************************************
* WARNING: AUTHORIZED ACCESS ONLY *
**************************************************************************
* This system is for authorized users only. All activities are logged *
* and monitored. Unauthorized access is prohibited and may result in *
* civil and/or criminal penalties. *
* *
* Custom PHP Framework - {{ domain_name }} *
* Environment: {{ environment | upper }} *
**************************************************************************
dest: "{{ ssh_banner }}"
owner: root
group: root
mode: '0644'
notify: restart ssh
tags:
- ssh
- banner
- name: Generate strong SSH host keys
command: ssh-keygen -t {{ item }} -f /etc/ssh/ssh_host_{{ item }}_key -N ""
args:
creates: /etc/ssh/ssh_host_{{ item }}_key
loop:
- ed25519
- ecdsa
- rsa
notify: restart ssh
tags:
- ssh
- keys
- name: Set correct permissions on SSH host keys
file:
path: /etc/ssh/ssh_host_{{ item }}_key
owner: root
group: root
mode: '0600'
loop:
- ed25519
- ecdsa
- rsa
tags:
- ssh
- keys
- permissions
- name: Configure SSH daemon
template:
src: sshd_config.j2
dest: /etc/ssh/sshd_config
owner: root
group: root
mode: '0644'
backup: true
notify: restart ssh
tags:
- ssh
- config
- name: Create SSH client configuration
template:
src: ssh_config.j2
dest: /etc/ssh/ssh_config
owner: root
group: root
mode: '0644'
backup: true
tags:
- ssh
- config
- name: Ensure SSH service is enabled and running
service:
name: ssh
state: started
enabled: true
tags:
- ssh
- service
- name: Configure SSH authorized keys for deploy user
authorized_key:
user: "{{ ansible_user }}"
state: present
key: "{{ lookup('file', '~/.ssh/id_rsa_deploy.pub') }}"
exclusive: "{{ ssh_authorized_keys_exclusive }}"
when: ansible_user != 'root'
tags:
- ssh
- keys
- users
- name: Remove default SSH keys for security
file:
path: "{{ item }}"
state: absent
loop:
- /etc/ssh/ssh_host_dsa_key
- /etc/ssh/ssh_host_dsa_key.pub
tags:
- ssh
- keys
- cleanup
- name: Verify SSH configuration syntax
command: sshd -t
register: ssh_config_test
changed_when: false
failed_when: ssh_config_test.rc != 0
tags:
- ssh
- validation

View File

@@ -1,167 +0,0 @@
---
# System Security Hardening
- name: Apply kernel security parameters
sysctl:
name: "{{ item.key }}"
value: "{{ item.value }}"
state: present
sysctl_set: true
reload: true
loop: "{{ security_kernel_parameters | dict2items }}"
tags:
- security
- kernel
- sysctl
- name: Create security limits configuration
template:
src: security-limits.conf.j2
dest: /etc/security/limits.d/99-security.conf
owner: root
group: root
mode: '0644'
tags:
- security
- limits
- name: Configure login.defs for security
lineinfile:
path: /etc/login.defs
regexp: "^{{ item.key }}"
line: "{{ item.key }} {{ item.value }}"
backup: true
loop:
- { key: "UMASK", value: "{{ security_umask }}" }
- { key: "PASS_MAX_DAYS", value: "90" }
- { key: "PASS_MIN_DAYS", value: "1" }
- { key: "PASS_WARN_AGE", value: "7" }
- { key: "LOGIN_TIMEOUT", value: "{{ security_login_timeout }}" }
- { key: "ENCRYPT_METHOD", value: "SHA512" }
tags:
- security
- login
- password
- name: Secure shared memory
mount:
path: /dev/shm
src: tmpfs
fstype: tmpfs
opts: "defaults,noexec,nosuid,nodev,size=512M"
state: mounted
tags:
- security
- memory
- filesystem
- name: Configure audit system
package:
name: auditd
state: present
tags:
- security
- audit
- name: Create audit rules for security monitoring
template:
src: audit-rules.rules.j2
dest: /etc/audit/rules.d/99-security.rules
owner: root
group: root
mode: '0600'
backup: true
notify: restart auditd
tags:
- security
- audit
- rules
- name: Ensure auditd service is enabled and running
service:
name: auditd
state: started
enabled: true
tags:
- security
- audit
- service
- name: Remove unnecessary packages
package:
name: "{{ item }}"
state: absent
loop:
- telnet
- rsh-client
- rsh-redone-client
- talk
- ntalk
- xinetd
- inetutils-inetd
ignore_errors: true
tags:
- security
- cleanup
- packages
- name: Set correct permissions on critical files
file:
path: "{{ item.path }}"
owner: "{{ item.owner | default('root') }}"
group: "{{ item.group | default('root') }}"
mode: "{{ item.mode }}"
loop:
- { path: "/etc/passwd", mode: "0644" }
- { path: "/etc/shadow", mode: "0640", group: "shadow" }
- { path: "/etc/group", mode: "0644" }
- { path: "/etc/gshadow", mode: "0640", group: "shadow" }
- { path: "/boot", mode: "0700" }
- { path: "/etc/ssh", mode: "0755" }
- { path: "/etc/crontab", mode: "0600" }
- { path: "/etc/cron.hourly", mode: "0700" }
- { path: "/etc/cron.daily", mode: "0700" }
- { path: "/etc/cron.weekly", mode: "0700" }
- { path: "/etc/cron.monthly", mode: "0700" }
- { path: "/etc/cron.d", mode: "0700" }
tags:
- security
- permissions
- files
- name: Configure process accounting
package:
name: acct
state: present
tags:
- security
- accounting
- name: Enable process accounting
service:
name: acct
state: started
enabled: true
tags:
- security
- accounting
- service
- name: Configure system banner
copy:
content: |
Custom PHP Framework Production Server
{{ domain_name }} - {{ environment | upper }}
Unauthorized access is prohibited.
All activities are monitored and logged.
System administered by: {{ ssl_email }}
dest: /etc/motd
owner: root
group: root
mode: '0644'
tags:
- security
- banner
- motd

View File

@@ -1,63 +0,0 @@
# Custom Fail2ban Jails for Custom PHP Framework
# Generated by Ansible - Do not edit manually
{% for jail in fail2ban_jails %}
[{{ jail.name }}]
enabled = {{ jail.enabled | ternary('true', 'false') }}
{% if jail.port is defined %}
port = {{ jail.port }}
{% endif %}
{% if jail.filter is defined %}
filter = {{ jail.filter }}
{% endif %}
{% if jail.logpath is defined %}
logpath = {{ jail.logpath }}
{% endif %}
{% if jail.maxretry is defined %}
maxretry = {{ jail.maxretry }}
{% endif %}
{% if jail.findtime is defined %}
findtime = {{ jail.findtime }}
{% endif %}
{% if jail.bantime is defined %}
bantime = {{ jail.bantime }}
{% endif %}
{% if jail.backend is defined %}
backend = {{ jail.backend }}
{% endif %}
action = %(action_mwl)s
{% endfor %}
# PHP Framework specific jail
[php-framework]
enabled = true
port = http,https
filter = php-framework
logpath = /var/log/nginx/access.log
          /var/log/nginx/error.log
maxretry = 5
findtime = 600
bantime = 3600
action = %(action_mwl)s
         php-framework-notify
# Docker container protection
[docker-php]
enabled = {{ 'true' if environment == 'production' else 'false' }}
port = http,https
filter = docker-php
logpath = /var/log/docker/*.log
maxretry = 3
findtime = 300
bantime = 1800
# Custom application errors
[app-errors]
enabled = true
port = http,https
filter = nginx-limit-req
logpath = /var/log/nginx/error.log
maxretry = 10
findtime = 600
bantime = 600

View File

@@ -1,20 +0,0 @@
# Fail2ban Main Configuration for Custom PHP Framework
# Generated by Ansible - Do not edit manually
[Definition]
loglevel = {{ fail2ban_loglevel }}
socket = {{ fail2ban_socket }}
pidfile = {{ fail2ban_pidfile }}
# Database configuration
dbfile = /var/lib/fail2ban/fail2ban.sqlite3
dbmaxmatches = 10
# Backend
backend = systemd
# Email Configuration
[mta]
sender = fail2ban-{{ inventory_hostname }}@{{ domain_name }}
destemail = {{ ssl_email }}
action = %(action_mwl)s

View File

@@ -1,73 +0,0 @@
# SSH Configuration for Custom PHP Framework - {{ environment | upper }}
# Generated by Ansible - Do not edit manually
# Basic Configuration
Port {{ ssh_port }}
Protocol 2
AddressFamily inet
# Authentication
PermitRootLogin {{ ssh_permit_root_login | ternary('yes', 'no') }}
PasswordAuthentication {{ ssh_password_authentication | ternary('yes', 'no') }}
PubkeyAuthentication {{ ssh_pubkey_authentication | ternary('yes', 'no') }}
AuthorizedKeysFile .ssh/authorized_keys
ChallengeResponseAuthentication {{ ssh_challenge_response_authentication | ternary('yes', 'no') }}
GSSAPIAuthentication {{ ssh_gss_api_authentication | ternary('yes', 'no') }}
UsePAM yes
# Security Settings
MaxAuthTries {{ ssh_max_auth_tries }}
ClientAliveInterval {{ ssh_client_alive_interval }}
ClientAliveCountMax {{ ssh_client_alive_count_max }}
MaxSessions {{ ssh_max_sessions }}
TCPKeepAlive {{ ssh_tcp_keep_alive | ternary('yes', 'no') }}
Compression {{ ssh_compression | ternary('yes', 'no') }}
UseDNS {{ ssh_use_dns | ternary('yes', 'no') }}
# Tunnel and Forwarding
X11Forwarding {{ ssh_x11_forwarding | ternary('yes', 'no') }}
PermitTunnel {{ ssh_permit_tunnel | ternary('yes', 'no') }}
PermitUserEnvironment {{ ssh_permit_user_environment | ternary('yes', 'no') }}
AllowTcpForwarding no
AllowStreamLocalForwarding no
GatewayPorts no
# Host Key Configuration
# Use explicit key types: splitting algorithm names such as "ssh-ed25519"
# on "-" would produce a bogus "ssh_host_ssh_key" path and duplicate
# entries for the ecdsa/rsa variants. These match the keys generated
# by the ssh-hardening tasks.
{% for key_type in ['ed25519', 'ecdsa', 'rsa'] %}
HostKey /etc/ssh/ssh_host_{{ key_type }}_key
{% endfor %}
# Allowed Users and Groups
{% if ssh_allowed_users %}
AllowUsers {{ ssh_allowed_users | join(' ') }}
{% endif %}
{% if ssh_allowed_groups %}
AllowGroups {{ ssh_allowed_groups | join(' ') }}
{% endif %}
# Banner
Banner {{ ssh_banner }}
# Logging
SyslogFacility AUTH
LogLevel INFO
# Kex Algorithms (secure)
KexAlgorithms curve25519-sha256@libssh.org,ecdh-sha2-nistp521,ecdh-sha2-nistp384,ecdh-sha2-nistp256,diffie-hellman-group16-sha512,diffie-hellman-group18-sha512
# Ciphers (secure)
Ciphers chacha20-poly1305@openssh.com,aes256-gcm@openssh.com,aes128-gcm@openssh.com,aes256-ctr,aes192-ctr,aes128-ctr
# MAC Algorithms (secure)
MACs hmac-sha2-256-etm@openssh.com,hmac-sha2-512-etm@openssh.com,hmac-sha2-256,hmac-sha2-512
# Host Key Algorithms
PubkeyAcceptedKeyTypes {{ ssh_host_key_algorithms | join(',') }}
# Additional Security
PermitEmptyPasswords no
StrictModes yes
IgnoreRhosts yes
HostbasedAuthentication no
PrintMotd no
PrintLastLog yes

View File

@@ -1,21 +0,0 @@
---
# OS-specific variables for Debian/Ubuntu
security_packages:
- ufw
- fail2ban
- unattended-upgrades
- apt-listchanges
- logwatch
- rkhunter
- chkrootkit
# Services
security_services:
- ufw
- fail2ban
- unattended-upgrades
# Package management
package_manager: apt
update_cache_command: "apt-get update"
upgrade_command: "apt-get upgrade -y"

View File

@@ -1,151 +0,0 @@
---
# Docker Runtime Role Default Variables
# Docker Installation
docker_edition: ce
docker_version: "latest"
docker_channel: stable
docker_compose_version: "2.20.0"
# Repository Configuration
docker_apt_arch: amd64
docker_apt_repository: "deb [arch={{ docker_apt_arch }}] https://download.docker.com/linux/{{ ansible_distribution | lower }} {{ ansible_distribution_release }} {{ docker_channel }}"
docker_apt_gpg_key: "https://download.docker.com/linux/{{ ansible_distribution | lower }}/gpg"
# Docker Daemon Configuration
docker_daemon_config:
  # Security settings
  userland-proxy: false
  live-restore: true
  icc: false
  userns-remap: default
  no-new-privileges: true
  seccomp-profile: /etc/docker/seccomp-default.json
  # Logging
  log-driver: json-file
  log-opts:
    max-size: 50m
    max-file: "5"
  # Storage
  storage-driver: overlay2
  # Network security
  bridge: none
  ip-forward: false
  ip-masq: false
  iptables: false
  ipv6: false
  # Resource limits
  default-ulimits:
    nproc:
      hard: 65536
      soft: 65536
    nofile:
      hard: 65536
      soft: 65536
  # Registry security
  insecure-registries: []
  registry-mirrors: []
  # Experimental features
  experimental: false

# Docker Service Configuration
docker_service_state: started
docker_service_enabled: true
docker_restart_handler_state: restarted

# User Management
docker_users: []
docker_group: docker

# PHP 8.4 Specific Configuration
php_version: "8.4"
php_docker_image: "php:8.4-fpm-alpine"
# Extensions built with docker-php-ext-install. curl and json are already
# compiled into the official php images, and redis/memcached ship via PECL,
# so none of those belong in this list.
php_extensions:
  - mysqli
  - pdo_mysql
  # PostgreSQL drivers (the production database listens on port 5432)
  - pgsql
  - pdo_pgsql
  - opcache
  - intl
  - gd
  - zip
  - bcmath
  - soap
  - xml
# PECL extensions, installed with `pecl install` + docker-php-ext-enable
php_pecl_extensions:
  - redis
  - memcached

# Docker Compose Configuration
docker_compose_projects: []
docker_compose_path: /opt/docker-compose

# Security Profiles
docker_security_profiles:
  - name: default-seccomp
    path: /etc/docker/seccomp-default.json
  - name: framework-apparmor
    path: /etc/apparmor.d/docker-framework

# Network Configuration
docker_networks:
  - name: framework-network
    driver: bridge
    ipam:
      config:
        - subnet: 172.20.0.0/16
          gateway: 172.20.0.1
    options:
      com.docker.network.bridge.enable_icc: "false"
      com.docker.network.bridge.enable_ip_masquerade: "false"

# Volume Configuration
docker_volumes:
  - name: framework-app-data
    driver: local
  - name: framework-db-data
    driver: local
  - name: framework-logs
    driver: local

# Health Check Configuration
docker_health_check_interval: 30s
docker_health_check_timeout: 10s
docker_health_check_retries: 3
docker_health_check_start_period: 60s

# Backup Configuration
docker_backup_enabled: "{{ backup_enabled | default(false) }}"
docker_backup_schedule: "0 2 * * *"  # Daily at 2 AM
docker_backup_retention: 7

# Monitoring Configuration
docker_monitoring_enabled: "{{ monitoring_enabled | default(true) }}"
docker_metrics_enabled: true
docker_metrics_address: "0.0.0.0:9323"

# Resource Limits (per environment)
docker_resource_limits:
  production:
    memory: "{{ docker_memory_limit | default('4g') }}"
    cpus: "{{ docker_cpu_limit | default('2.0') }}"
    pids: 1024
  staging:
    memory: "{{ docker_memory_limit | default('2g') }}"
    cpus: "{{ docker_cpu_limit | default('1.0') }}"
    pids: 512
  development:
    memory: "{{ docker_memory_limit | default('1g') }}"
    cpus: "{{ docker_cpu_limit | default('0.5') }}"
    pids: 256

# Container Security Options
docker_security_opts:
  - no-new-privileges:true
  # Apply the hardened profile installed by this role; `seccomp:unconfined`
  # would disable syscall filtering and contradict the daemon settings above.
  - seccomp=/etc/docker/seccomp-default.json
  - apparmor=docker-framework
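A flat list of extension names like the one above has to be routed to two different installers: `docker-php-ext-install` handles extensions bundled with PHP's source, while redis and memcached come from PECL, and curl/json are already compiled into the official `php:8.4-fpm-alpine` image. A sketch of that partitioning (the membership sets are assumptions based on the official image, not taken from the role):

```shell
#!/bin/sh
# Partition a flat extension list into docker-php-ext-install targets
# and PECL packages; drop extensions already built into the base image.
EXTENSIONS="mysqli pdo_mysql opcache redis memcached intl gd zip bcmath soap xml curl json"
PECL_EXTS="redis memcached"   # installed with `pecl install`
BUILTIN_EXTS="curl json"      # already present in php:8.4-fpm-alpine

core=""; pecl=""
for ext in $EXTENSIONS; do
  # membership test via pattern match on a space-padded list
  case " $PECL_EXTS " in *" $ext "*) pecl="$pecl $ext"; continue ;; esac
  case " $BUILTIN_EXTS " in *" $ext "*) continue ;; esac
  core="$core $ext"
done

echo "docker-php-ext-install:$core"
echo "pecl install:$pecl"
```

Feeding the unpartitioned list straight to `docker-php-ext-install` is exactly what makes the image build fail on the first PECL name it meets.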

View File

@@ -1,52 +0,0 @@
---
# Docker Runtime Role Handlers

- name: restart docker
  service:
    name: docker
    state: restarted
  listen: restart docker

- name: reload docker
  service:
    name: docker
    state: reloaded
  listen: reload docker

- name: reload systemd
  systemd:
    daemon_reload: true
  listen: reload systemd

- name: restart containerd
  service:
    name: containerd
    state: restarted
  listen: restart containerd

- name: reload apparmor
  service:
    name: apparmor
    state: reloaded
  listen: reload apparmor
  when: ansible_os_family == 'Debian'

- name: restart docker-compose
  command: docker-compose restart
  args:
    chdir: "{{ item }}"
  loop: "{{ docker_compose_projects | map(attribute='path') | list }}"
  when: docker_compose_projects is defined and docker_compose_projects | length > 0
  listen: restart docker-compose

- name: prune docker system
  command: docker system prune -af --volumes
  listen: prune docker system

- name: update docker images
  command: docker image prune -af
  listen: update docker images

- name: rebuild php image
  command: /usr/local/bin/build-php-image.sh
  listen: rebuild php image

View File

@@ -1,30 +0,0 @@
---
galaxy_info:
  role_name: docker-runtime
  author: Custom PHP Framework Team
  description: Secure Docker runtime environment with PHP 8.4 optimization
  company: michaelschiemer.de
  license: MIT
  min_ansible_version: 2.12
  platforms:
    - name: Ubuntu
      versions:
        - "20.04"
        - "22.04"
        - "24.04"
    - name: Debian
      versions:
        - "11"
        - "12"
  galaxy_tags:
    - docker
    - containers
    - security
    - php
    - runtime
dependencies: []
collections:
  - community.docker
  - ansible.posix

View File

@@ -1,113 +0,0 @@
---
# Docker Daemon Configuration

- name: Create Docker configuration directory
  file:
    path: /etc/docker
    state: directory
    owner: root
    group: root
    mode: '0755'
  tags:
    - docker
    - config

- name: Configure Docker daemon
  template:
    src: daemon.json.j2
    dest: /etc/docker/daemon.json
    owner: root
    group: root
    mode: '0644'
    backup: true
  notify: restart docker
  tags:
    - docker
    - config

- name: Create Docker systemd service directory
  file:
    path: /etc/systemd/system/docker.service.d
    state: directory
    owner: root
    group: root
    mode: '0755'
  tags:
    - docker
    - systemd

- name: Configure Docker systemd service overrides
  template:
    src: docker-service-override.conf.j2
    dest: /etc/systemd/system/docker.service.d/override.conf
    owner: root
    group: root
    mode: '0644'
  notify:
    - reload systemd
    - restart docker
  tags:
    - docker
    - systemd

- name: Create Docker socket drop-in directory
  file:
    path: /etc/systemd/system/docker.socket.d
    state: directory
    owner: root
    group: root
    mode: '0755'
  tags:
    - docker
    - systemd

- name: Create Docker socket service override
  template:
    src: docker-socket-override.conf.j2
    dest: /etc/systemd/system/docker.socket.d/override.conf
    owner: root
    group: root
    mode: '0644'
  notify:
    - reload systemd
    - restart docker
  tags:
    - docker
    - systemd

- name: Configure Docker log rotation
  template:
    src: docker-logrotate.j2
    dest: /etc/logrotate.d/docker
    owner: root
    group: root
    mode: '0644'
  tags:
    - docker
    - logging

- name: Create Docker logs directory
  file:
    path: /var/log/docker
    state: directory
    owner: root
    group: root
    mode: '0755'
  tags:
    - docker
    - logging

- name: Set up Docker environment
  template:
    src: docker-environment.j2
    dest: /etc/default/docker
    owner: root
    group: root
    mode: '0644'
  notify: restart docker
  tags:
    - docker
    - environment

- name: Configure Docker resource limits
  template:
    src: docker-limits.conf.j2
    dest: /etc/systemd/system/docker.service.d/limits.conf
    owner: root
    group: root
    mode: '0644'
  notify:
    - reload systemd
    - restart docker
  tags:
    - docker
    - limits
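A malformed `/etc/docker/daemon.json` stops the daemon from starting at all, so it is worth syntax-checking the rendered file before the `restart docker` handler fires. A minimal sketch, assuming `python3` is on the host (on Docker 23+, `dockerd --validate --config-file <path>` is the stricter check):

```shell
#!/bin/sh
# Render a hardened daemon.json fragment and syntax-check it before it
# is allowed anywhere near /etc/docker/daemon.json.
set -eu
tmp="$(mktemp)"
cat > "$tmp" <<'EOF'
{
  "userland-proxy": false,
  "live-restore": true,
  "icc": false,
  "no-new-privileges": true,
  "log-driver": "json-file",
  "log-opts": { "max-size": "50m", "max-file": "5" },
  "storage-driver": "overlay2",
  "iptables": false,
  "ipv6": false,
  "experimental": false
}
EOF
if python3 -m json.tool "$tmp" > /dev/null 2>&1; then
  valid=yes
else
  valid=no
fi
echo "daemon.json syntax: $valid"
rm -f "$tmp"
```

In a playbook the same gate can be expressed with `template: ... validate:` so the broken file never lands on disk.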

View File

@@ -1,96 +0,0 @@
---
# Docker Engine Installation

- name: Remove old Docker versions
  package:
    name:
      - docker
      - docker-engine
      - docker.io
      - containerd
      - runc
    state: absent
  tags:
    - docker
    - cleanup

- name: Add Docker GPG key
  apt_key:
    url: "{{ docker_apt_gpg_key }}"
    state: present
  tags:
    - docker
    - repository

- name: Add Docker repository
  apt_repository:
    repo: "{{ docker_apt_repository }}"
    state: present
    update_cache: true
  tags:
    - docker
    - repository

- name: Install Docker Engine
  package:
    name:
      - docker-{{ docker_edition }}
      - docker-{{ docker_edition }}-cli
      - containerd.io
      - docker-buildx-plugin
      - docker-compose-plugin
    state: present
    update_cache: true
  notify: restart docker
  tags:
    - docker
    - packages

- name: Ensure Docker group exists
  group:
    name: "{{ docker_group }}"
    state: present
  tags:
    - docker
    - users

- name: Add users to Docker group
  user:
    name: "{{ item }}"
    groups: "{{ docker_group }}"
    append: true
  loop: "{{ docker_users }}"
  when: docker_users | length > 0
  tags:
    - docker
    - users

- name: Add deploy user to Docker group
  user:
    name: "{{ ansible_user }}"
    groups: "{{ docker_group }}"
    append: true
  when: ansible_user != 'root'
  tags:
    - docker
    - users

- name: Start and enable Docker service
  service:
    name: docker
    state: "{{ docker_service_state }}"
    enabled: "{{ docker_service_enabled }}"
  tags:
    - docker
    - service

- name: Wait for Docker daemon to be ready
  command: docker version
  register: docker_ready
  retries: 5
  delay: 10
  until: docker_ready.rc == 0
  changed_when: false
  tags:
    - docker
    - verification
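The `retries`/`delay`/`until` trio on the last task is Ansible's spelling of a plain polling loop. The same contract in shell (command and limits are illustrative):

```shell
#!/bin/sh
# Poll a command until it succeeds, up to `retries` attempts with `delay`
# seconds between them -- the same contract as Ansible's retries/until.
wait_for() {
  cmd="$1"; retries="${2:-5}"; delay="${3:-10}"
  i=1
  while [ "$i" -le "$retries" ]; do
    if sh -c "$cmd" > /dev/null 2>&1; then
      return 0
    fi
    i=$((i + 1))
    sleep "$delay"
  done
  return 1
}

# e.g. wait_for "docker version" 5 10 on a freshly started daemon
wait_for "true" 3 0 && echo "ready"
```

The point of the retry is that `systemctl start docker` returns before the API socket answers, so a bare `docker version` immediately after can still fail on slow hosts.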

View File

@@ -1,77 +0,0 @@
---
# Docker Runtime Role - Main Tasks

- name: Include OS-specific variables
  include_vars: "{{ ansible_os_family }}.yml"
  tags:
    - docker
    - config

- name: Install Docker prerequisites
  include_tasks: prerequisites.yml
  tags:
    - docker
    - prerequisites

- name: Install Docker Engine
  include_tasks: install-docker.yml
  tags:
    - docker
    - install

- name: Configure Docker daemon
  include_tasks: configure-daemon.yml
  tags:
    - docker
    - config

- name: Setup Docker security
  include_tasks: security-setup.yml
  tags:
    - docker
    - security

- name: Install Docker Compose
  include_tasks: install-compose.yml
  tags:
    - docker
    - compose

- name: Setup Docker networks
  include_tasks: setup-networks.yml
  tags:
    - docker
    - network

- name: Setup Docker volumes
  include_tasks: setup-volumes.yml
  tags:
    - docker
    - volumes

- name: Configure PHP 8.4 optimization
  include_tasks: php-optimization.yml
  tags:
    - docker
    - php
    - optimization

- name: Setup monitoring and health checks
  include_tasks: monitoring.yml
  when: docker_monitoring_enabled | bool
  tags:
    - docker
    - monitoring

- name: Configure backup system
  include_tasks: backup-setup.yml
  when: docker_backup_enabled | bool
  tags:
    - docker
    - backup

- name: Verify Docker installation
  include_tasks: verification.yml
  tags:
    - docker
    - verification

View File

@@ -1,177 +0,0 @@
---
# PHP 8.4 Docker Optimization

- name: Create PHP configuration directory
  file:
    path: /etc/docker/php
    state: directory
    owner: root
    group: root
    mode: '0755'
  tags:
    - docker
    - php
    - config

- name: Create PHP 8.4 optimized Dockerfile template
  template:
    src: php84-dockerfile.j2
    dest: /etc/docker/php/Dockerfile.php84
    owner: root
    group: root
    mode: '0644'
  tags:
    - docker
    - php
    - dockerfile

- name: Create PHP-FPM configuration for containers
  template:
    src: php-fpm-docker.conf.j2
    dest: /etc/docker/php/php-fpm.conf
    owner: root
    group: root
    mode: '0644'
  tags:
    - docker
    - php
    - fpm

- name: Create PHP configuration for containers
  template:
    src: php-docker.ini.j2
    dest: /etc/docker/php/php.ini
    owner: root
    group: root
    mode: '0644'
  tags:
    - docker
    - php
    - config

- name: Create OPcache configuration
  template:
    src: opcache-docker.ini.j2
    dest: /etc/docker/php/opcache.ini
    owner: root
    group: root
    mode: '0644'
  tags:
    - docker
    - php
    - opcache

- name: Create Redis configuration for PHP
  template:
    src: redis-php.ini.j2
    dest: /etc/docker/php/redis.ini
    owner: root
    group: root
    mode: '0644'
  tags:
    - docker
    - php
    - redis

- name: Create PHP health check script
  template:
    src: php-health-check.sh.j2
    dest: /etc/docker/php/health-check.sh
    owner: root
    group: root
    mode: '0755'
  tags:
    - docker
    - php
    - health

- name: Pull PHP 8.4 base image
  docker_image:
    name: "{{ php_docker_image }}"
    source: pull
    state: present
  tags:
    - docker
    - php
    - image

- name: Create custom PHP 8.4 image build script
  template:
    src: build-php-image.sh.j2
    dest: /usr/local/bin/build-php-image.sh
    owner: root
    group: root
    mode: '0755'
  tags:
    - docker
    - php
    - build

- name: Create PHP container resource limits
  template:
    src: php-container-limits.json.j2
    dest: /etc/docker/php/container-limits.json
    owner: root
    group: root
    mode: '0644'
  tags:
    - docker
    - php
    - limits

- name: Configure PHP error logging for containers
  template:
    src: php-error-log.conf.j2
    dest: /etc/docker/php/error-log.conf
    owner: root
    group: root
    mode: '0644'
  tags:
    - docker
    - php
    - logging

- name: Create PHP performance tuning script
  template:
    src: php-performance-tune.sh.j2
    dest: /usr/local/bin/php-performance-tune.sh
    owner: root
    group: root
    mode: '0755'
  tags:
    - docker
    - php
    - performance

- name: Set up PHP session handling for containers
  template:
    src: php-session.ini.j2
    dest: /etc/docker/php/session.ini
    owner: root
    group: root
    mode: '0644'
  tags:
    - docker
    - php
    - session

- name: Create PHP security configuration
  template:
    src: php-security.ini.j2
    dest: /etc/docker/php/security.ini
    owner: root
    group: root
    mode: '0644'
  tags:
    - docker
    - php
    - security

- name: Build optimized PHP 8.4 image
  command: /usr/local/bin/build-php-image.sh
  args:
    creates: /var/lib/docker/image-builds/php84-custom.built
  tags:
    - docker
    - php
    - build
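The `creates:` argument on the final task is what keeps the expensive image build from re-running on every play: the command is skipped as soon as the marker file exists. The same sentinel pattern in plain shell (paths are illustrative, the real build runs under `/var/lib/docker/image-builds/`):

```shell
#!/bin/sh
# Sentinel-file idempotency: run the expensive step once, then skip it,
# mirroring Ansible's `command: ... args: creates: <marker>`.
set -eu
marker="${TMPDIR:-/tmp}/php84-custom.built"
rm -f "$marker"

build_image() {
  if [ -e "$marker" ]; then
    echo "skipped"
    return 0
  fi
  # ... the expensive `docker build` would happen here ...
  touch "$marker"
  echo "built"
}

first="$(build_image)"
second="$(build_image)"
echo "$first/$second"
```

The flip side is that the marker must be deleted (or a `rebuild php image` handler notified) whenever the Dockerfile template changes, or the stale image lives on.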

View File

@@ -1,175 +0,0 @@
---
# Docker Security Configuration

- name: Create Docker security profiles directory
  file:
    path: /etc/docker/security
    state: directory
    owner: root
    group: root
    mode: '0755'
  tags:
    - docker
    - security

- name: Install seccomp security profile
  template:
    src: seccomp-default.json.j2
    dest: /etc/docker/seccomp-default.json
    owner: root
    group: root
    mode: '0644'
  tags:
    - docker
    - security
    - seccomp

- name: Install AppArmor profile for Docker
  template:
    src: docker-framework-apparmor.j2
    dest: /etc/apparmor.d/docker-framework
    owner: root
    group: root
    mode: '0644'
  notify: reload apparmor
  when: ansible_os_family == 'Debian'
  tags:
    - docker
    - security
    - apparmor

- name: Load AppArmor profile
  command: apparmor_parser -r -W /etc/apparmor.d/docker-framework
  when: ansible_os_family == 'Debian'
  changed_when: false
  tags:
    - docker
    - security
    - apparmor

- name: Configure user namespace mapping
  template:
    src: subuid.j2
    dest: /etc/subuid
    owner: root
    group: root
    mode: '0644'
    backup: true
  tags:
    - docker
    - security
    - userns

- name: Configure group namespace mapping
  template:
    src: subgid.j2
    dest: /etc/subgid
    owner: root
    group: root
    mode: '0644'
    backup: true
  tags:
    - docker
    - security
    - userns

- name: Create Docker TLS certificates directory
  file:
    path: /etc/docker/certs
    state: directory
    owner: root
    group: docker
    mode: '0750'
  tags:
    - docker
    - security
    - tls

- name: Generate Docker TLS certificates
  command: >
    openssl req -new -x509 -days 365 -nodes
    -out /etc/docker/certs/server-cert.pem
    -keyout /etc/docker/certs/server-key.pem
    -subj "/CN={{ inventory_hostname }}"
  args:
    creates: /etc/docker/certs/server-cert.pem
  tags:
    - docker
    - security
    - tls

- name: Set correct permissions on Docker TLS certificates
  file:
    path: "{{ item.path }}"
    owner: root
    group: docker
    mode: "{{ item.mode }}"
  loop:
    - { path: "/etc/docker/certs/server-cert.pem", mode: "0644" }
    - { path: "/etc/docker/certs/server-key.pem", mode: "0640" }
  tags:
    - docker
    - security
    - tls
    - permissions

- name: Configure Docker Content Trust
  lineinfile:
    path: /etc/environment
    line: "DOCKER_CONTENT_TRUST=1"
    create: true
  when: environment == 'production'
  tags:
    - docker
    - security
    - trust

- name: Install Docker security scanning tools
  package:
    name:
      - runc
      - docker-bench-security
    state: present
  ignore_errors: true
  tags:
    - docker
    - security
    - tools

- name: Create Docker security audit script
  template:
    src: docker-security-audit.sh.j2
    dest: /usr/local/bin/docker-security-audit.sh
    owner: root
    group: root
    mode: '0755'
  tags:
    - docker
    - security
    - audit

- name: Schedule Docker security audits
  cron:
    name: "Docker security audit"
    minute: "0"
    hour: "5"
    weekday: "1"
    job: "/usr/local/bin/docker-security-audit.sh | mail -s 'Docker Security Audit - {{ inventory_hostname }}' {{ ssl_email }}"
    user: root
  when: environment == 'production'
  tags:
    - docker
    - security
    - audit
    - cron

- name: Configure Docker socket security
  file:
    path: /var/run/docker.sock
    owner: root
    group: docker
    mode: '0660'
  tags:
    - docker
    - security
    - socket
View File

@@ -1,61 +0,0 @@
{
  "# Custom PHP Framework Docker Daemon Configuration": "{{ environment | upper }}",
  "# Security Settings": "Hardened configuration for production use",
  "userland-proxy": {{ docker_daemon_config['userland-proxy'] | tojson }},
  "live-restore": {{ docker_daemon_config['live-restore'] | tojson }},
  "icc": {{ docker_daemon_config['icc'] | tojson }},
  "userns-remap": "{{ docker_daemon_config['userns-remap'] }}",
  "no-new-privileges": {{ docker_daemon_config['no-new-privileges'] | tojson }},
{% if docker_daemon_config['seccomp-profile'] is defined %}
  "seccomp-profile": "{{ docker_daemon_config['seccomp-profile'] }}",
{% endif %}
  "# Logging Configuration": "Structured logging with rotation",
  "log-driver": "{{ docker_daemon_config['log-driver'] }}",
  "log-opts": {{ docker_daemon_config['log-opts'] | tojson }},
  "# Storage Configuration": "Optimized for performance",
  "storage-driver": "{{ docker_daemon_config['storage-driver'] }}",
{% if docker_daemon_config['storage-opts'] is defined %}
  "storage-opts": {{ docker_daemon_config['storage-opts'] | tojson }},
{% endif %}
  "# Network Security": "Disabled for security",
{% if docker_daemon_config['bridge'] is defined and docker_daemon_config['bridge'] %}
  "bridge": "{{ docker_daemon_config['bridge'] }}",
{% endif %}
  "ip-forward": {{ docker_daemon_config['ip-forward'] | tojson }},
  "ip-masq": {{ docker_daemon_config['ip-masq'] | tojson }},
  "iptables": {{ docker_daemon_config['iptables'] | tojson }},
  "ipv6": {{ docker_daemon_config['ipv6'] | tojson }},
  "# Resource Limits": "Default container limits",
  "default-ulimits": {{ docker_daemon_config['default-ulimits'] | tojson }},
  "# Registry Configuration": "Secure registry access",
{% if docker_daemon_config['insecure-registries'] | length > 0 %}
  "insecure-registries": {{ docker_daemon_config['insecure-registries'] | tojson }},
{% endif %}
{% if docker_daemon_config['registry-mirrors'] | length > 0 %}
  "registry-mirrors": {{ docker_daemon_config['registry-mirrors'] | tojson }},
{% endif %}
  "# Monitoring and Metrics": "Enable for production monitoring",
{% if docker_metrics_enabled %}
  "metrics-addr": "{{ docker_metrics_address }}",
{% endif %}
  "# Runtime Configuration": "Optimized for PHP 8.4 workloads",
  "default-runtime": "runc",
  "runtimes": {
    "runc": {
      "path": "/usr/bin/runc"
    }
  },
  "# Debug and Development": "Environment specific settings",
  "debug": {{ (environment == 'development') | tojson }},
  {# metrics on older Docker releases require experimental mode; emit the
     key exactly once so a later duplicate cannot silently override it #}
  "experimental": {{ ((docker_metrics_enabled | default(false)) or docker_daemon_config['experimental']) | tojson }}
}
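The fake-comment-key style above leans on a parser detail worth knowing: duplicate keys are not a JSON syntax error for most parsers, the last occurrence simply wins. That makes a twice-emitted `"experimental"` a silent trap rather than a loud one. A quick demonstration (assumes `python3` on PATH; Docker's own Go parser behaves the same way for this document):

```shell
#!/bin/sh
# Show that a duplicated JSON key is accepted and the last value wins.
set -eu
doc='{"experimental": true, "metrics-addr": "0.0.0.0:9323", "experimental": false}'
winner="$(printf '%s' "$doc" | python3 -c 'import json, sys; print(json.load(sys.stdin)["experimental"])')"
echo "experimental resolves to: $winner"
```

So a template that emits `"experimental": true` in one branch and an unconditional `"experimental": false` at the end ships a daemon with experimental mode off, no matter what the metrics block asked for.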

View File

@@ -1,101 +0,0 @@
# Custom PHP 8.4 Dockerfile for {{ domain_name }}
# Optimized for Custom PHP Framework
# Environment: {{ environment | upper }}
FROM php:8.4-fpm-alpine
# Build arguments
ARG PHP_VERSION="{{ php_version }}"
ARG BUILD_DATE="{{ ansible_date_time.iso8601 }}"
ARG VCS_REF="{{ ansible_hostname }}"
# Labels for container metadata
LABEL maintainer="{{ ssl_email }}" \
org.label-schema.build-date="${BUILD_DATE}" \
org.label-schema.vcs-ref="${VCS_REF}" \
org.label-schema.schema-version="1.0" \
org.label-schema.name="custom-php-framework" \
org.label-schema.description="Custom PHP Framework with PHP 8.4" \
org.label-schema.version="${PHP_VERSION}"
# Install system dependencies
RUN apk add --no-cache \
# Build dependencies
$PHPIZE_DEPS \
autoconf \
gcc \
g++ \
make \
# Runtime dependencies
curl-dev \
freetype-dev \
icu-dev \
jpeg-dev \
libpng-dev \
libxml2-dev \
libzip-dev \
oniguruma-dev \
openssl-dev \
postgresql-dev \
sqlite-dev \
# System tools
git \
unzip \
wget
# Install PHP extensions (skip names docker-php-ext-install cannot build:
# curl/json are already compiled in, redis/memcached come from PECL, and
# opcache is installed explicitly below)
{% for extension in php_extensions if extension not in ['curl', 'json', 'redis', 'memcached', 'opcache'] %}
RUN docker-php-ext-install {{ extension }}
{% endfor %}
# Install and configure OPcache
RUN docker-php-ext-install opcache
# Install Redis extension via PECL (memcached would additionally require
# libmemcached-dev in the apk dependencies above)
RUN pecl install redis && docker-php-ext-enable redis
# Install Xdebug for development
{% if environment == 'development' %}
RUN pecl install xdebug && docker-php-ext-enable xdebug
{% endif %}
# Configure PHP
COPY php.ini /usr/local/etc/php/conf.d/99-custom.ini
COPY opcache.ini /usr/local/etc/php/conf.d/10-opcache.ini
COPY redis.ini /usr/local/etc/php/conf.d/20-redis.ini
COPY security.ini /usr/local/etc/php/conf.d/30-security.ini
COPY session.ini /usr/local/etc/php/conf.d/40-session.ini
# Configure PHP-FPM
COPY php-fpm.conf /usr/local/etc/php-fpm.d/www.conf
# Install Composer
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer \
&& composer --version
# Create application user
RUN addgroup -g 1000 -S www && \
adduser -u 1000 -S www -G www
# Set up application directory
WORKDIR /var/www/html
# Set proper permissions
RUN chown -R www:www /var/www/html
# Security: Run as non-root user
USER www
# Health check
COPY health-check.sh /usr/local/bin/health-check.sh
HEALTHCHECK --interval={{ docker_health_check_interval }} \
--timeout={{ docker_health_check_timeout }} \
--start-period={{ docker_health_check_start_period }} \
--retries={{ docker_health_check_retries }} \
CMD /usr/local/bin/health-check.sh
# Expose PHP-FPM port
EXPOSE 9000
# Default command
CMD ["php-fpm"]

View File

@@ -1,148 +0,0 @@
---
# Monitoring Role Default Variables

# General Configuration
# Plain values: a role default that references its own name (e.g.
# `monitoring_enabled: "{{ monitoring_enabled | default(true) }}"`) makes
# Ansible raise "recursive loop detected"; override these from inventory.
monitoring_enabled: true
health_checks_enabled: true
monitoring_user: monitoring
monitoring_group: monitoring
monitoring_home: /opt/monitoring

# Node Exporter Configuration
node_exporter_enabled: true
node_exporter_version: "1.6.1"
node_exporter_port: 9100
node_exporter_bind_address: "127.0.0.1"
node_exporter_user: node_exporter
node_exporter_group: node_exporter

# Prometheus Configuration (basic)
prometheus_enabled: false  # Can be enabled for advanced monitoring
prometheus_version: "2.45.0"
prometheus_port: 9090
prometheus_bind_address: "127.0.0.1"
prometheus_retention_time: "15d"
prometheus_retention_size: "10GB"

# Health Check Configuration (seconds)
health_check_interval: 30
health_check_timeout: 10
health_check_retries: 3

# Service Health Checks (interval/timeout in seconds)
service_checks:
  - name: nginx
    command: "systemctl is-active nginx"
    interval: 30
    timeout: 5
    retries: 2
  - name: docker
    command: "docker version"
    interval: 60
    timeout: 10
    retries: 3
  - name: php-fpm
    command: "docker exec php php-fpm -t"
    interval: 60
    timeout: 15
    retries: 2
  - name: mysql
    command: "docker exec mysql mysqladmin ping -h localhost"
    interval: 60
    timeout: 10
    retries: 3

# Application Health Checks
app_health_checks:
  - name: framework-health
    url: "https://{{ domain_name }}/health"
    method: GET
    expected_status: 200
    timeout: 10
    interval: 30
  - name: api-health
    url: "https://{{ domain_name }}/api/health"
    method: GET
    expected_status: 200
    timeout: 5
    interval: 60

# System Monitoring Thresholds
monitoring_thresholds:
  cpu_usage_warning: 70
  cpu_usage_critical: 90
  memory_usage_warning: 80
  memory_usage_critical: 95
  disk_usage_warning: 80
  disk_usage_critical: 90
  load_average_warning: 2.0
  load_average_critical: 4.0

# Log Monitoring
log_monitoring_enabled: true
log_files_to_monitor:
  - path: /var/log/nginx/error.log
    patterns:
      - "error"
      - "warn"
      - "crit"
    alert_threshold: 10  # alerts per minute
  - path: /var/log/nginx/access.log
    patterns:
      - "5[0-9][0-9]"  # 5xx errors
      - "4[0-9][0-9]"  # 4xx errors
    alert_threshold: 20
  - path: /var/log/auth.log
    patterns:
      - "Failed password"
      - "authentication failure"
    alert_threshold: 5

# Alerting Configuration
alerting_enabled: true
alert_email: "{{ ssl_email }}"
alert_methods:
  - email
  - log

# Backup Monitoring
backup_monitoring_enabled: "{{ backup_enabled | default(false) }}"
backup_check_command: "/usr/local/bin/check-backups.sh"
backup_alert_threshold: 24  # hours

# Performance Monitoring
performance_monitoring_enabled: true
performance_check_interval: 300  # 5 minutes
performance_metrics:
  - response_time
  - throughput
  - error_rate
  - resource_usage

# Container Monitoring
docker_monitoring_enabled: true
docker_stats_interval: 60
# !unsafe keeps Ansible from parsing the Go-template braces as Jinja2
docker_health_check_command: !unsafe "docker ps --format 'table {{.Names}}\\t{{.Status}}\\t{{.Ports}}'"

# Custom Framework Monitoring
framework_monitoring:
  console_health_check: "php console.php framework:health-check"
  mcp_server_check: "php console.php mcp:server --test"
  queue_monitoring: "php console.php queue:status"
  cache_monitoring: "php console.php cache:status"

# Monitoring Scripts Location
monitoring_scripts_dir: "{{ monitoring_home }}/scripts"
monitoring_logs_dir: "/var/log/monitoring"
monitoring_config_dir: "{{ monitoring_home }}/config"

# Cleanup Configuration
log_retention_days: 30
metrics_retention_days: 7
cleanup_schedule: "0 2 * * *"  # Daily at 2 AM
View File

@@ -1,45 +0,0 @@
---
# Monitoring Role Handlers

- name: reload systemd
  systemd:
    daemon_reload: true
  listen: reload systemd

- name: restart monitoring
  systemd:
    name: "{{ item }}"
    state: restarted
  loop:
    - health-check.service
  listen: restart monitoring
  ignore_errors: true

- name: restart node-exporter
  systemd:
    name: node_exporter
    state: restarted
  listen: restart node-exporter
  when: node_exporter_enabled | bool

- name: start monitoring services
  systemd:
    name: "{{ item }}"
    state: started
    enabled: true
  loop:
    - health-check.timer
  listen: start monitoring services
  ignore_errors: true

- name: reload monitoring config
  command: "{{ monitoring_scripts_dir }}/monitoring-utils.sh reload"
  listen: reload monitoring config
  become_user: "{{ monitoring_user }}"
  ignore_errors: true

- name: test alerts
  command: "{{ monitoring_scripts_dir }}/send-alert.sh TEST 'Test Alert' 'This is a test alert from Ansible deployment'"
  listen: test alerts
  become_user: "{{ monitoring_user }}"
  ignore_errors: true

View File

@@ -1,31 +0,0 @@
---
galaxy_info:
role_name: monitoring
author: Custom PHP Framework Team
description: System monitoring and health checks for PHP applications
company: michaelschiemer.de
license: MIT
min_ansible_version: 2.12
platforms:
- name: Ubuntu
versions:
- "20.04"
- "22.04"
- "24.04"
- name: Debian
versions:
- "11"
- "12"
galaxy_tags:
- monitoring
- health-checks
- metrics
- alerting
- prometheus
- node-exporter
dependencies: []
collections:
- community.general
- ansible.posix

View File

@@ -1,112 +0,0 @@
---
# Health Checks Configuration
- name: Create health check scripts
template:
src: health-check.sh.j2
dest: "{{ monitoring_scripts_dir }}/health-check-{{ item.name }}.sh"
owner: "{{ monitoring_user }}"
group: "{{ monitoring_group }}"
mode: '0755'
loop: "{{ service_checks }}"
tags:
- monitoring
- health-checks
- scripts
- name: Create application health check script
template:
src: app-health-check.sh.j2
dest: "{{ monitoring_scripts_dir }}/app-health-check.sh"
owner: "{{ monitoring_user }}"
group: "{{ monitoring_group }}"
mode: '0755'
tags:
- monitoring
- health-checks
- application
- name: Create framework-specific health checks
template:
src: framework-health-check.sh.j2
dest: "{{ monitoring_scripts_dir }}/framework-health-check.sh"
owner: "{{ monitoring_user }}"
group: "{{ monitoring_group }}"
mode: '0755'
tags:
- monitoring
- health-checks
- framework
- name: Create comprehensive health check runner
template:
src: run-health-checks.sh.j2
dest: "{{ monitoring_scripts_dir }}/run-health-checks.sh"
owner: "{{ monitoring_user }}"
group: "{{ monitoring_group }}"
mode: '0755'
tags:
- monitoring
- health-checks
- runner
- name: Create health check systemd service
template:
src: health-check.service.j2
dest: /etc/systemd/system/health-check.service
owner: root
group: root
mode: '0644'
notify: reload systemd
tags:
- monitoring
- health-checks
- systemd
- name: Create health check systemd timer
template:
src: health-check.timer.j2
dest: /etc/systemd/system/health-check.timer
owner: root
group: root
mode: '0644'
notify: reload systemd
tags:
- monitoring
- health-checks
- systemd
- name: Enable and start health check timer
systemd:
name: health-check.timer
enabled: true
state: started
daemon_reload: true
tags:
- monitoring
- health-checks
- systemd
- name: Create health check status endpoint
template:
src: health-status.php.j2
dest: /var/www/html/health
owner: "{{ nginx_user | default('www-data') }}"
group: "{{ nginx_group | default('www-data') }}"
mode: '0644'
tags:
- monitoring
- health-checks
- web
- name: Schedule individual health checks
cron:
name: "Health check - {{ item.name }}"
minute: "*/{{ item.interval }}"
job: "{{ monitoring_scripts_dir }}/health-check-{{ item.name }}.sh"
user: "{{ monitoring_user }}"
loop: "{{ service_checks }}"
tags:
- monitoring
- health-checks
- cron
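Cron's finest resolution is one minute, and the minute field only accepts steps from 1 to 59, so second-based intervals like the 30s/60s values in `service_checks` must be converted before landing in a `minute:` field (`*/60` and `*/0` are both rejected). A sketch of the conversion:

```shell
#!/bin/sh
# Convert an interval in seconds to a valid cron minute step:
# floor-divide by 60 and clamp into cron's accepted 1..59 range.
to_cron_step() {
  secs="$1"
  step=$((secs / 60))
  if [ "$step" -lt 1 ]; then step=1; fi
  if [ "$step" -gt 59 ]; then step=59; fi
  echo "*/$step"
}

to_cron_step 30    # -> */1
to_cron_step 300   # -> */5
```

Anything that genuinely needs sub-minute scheduling belongs in a systemd timer (which the role already installs) rather than cron.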

View File

@@ -1,67 +0,0 @@
---
# Monitoring Role - Main Tasks

- name: Include OS-specific variables
  include_vars: "{{ ansible_os_family }}.yml"
  tags:
    - monitoring
    - config

- name: Setup monitoring infrastructure
  include_tasks: setup-monitoring.yml
  tags:
    - monitoring
    - setup

- name: Install and configure Node Exporter
  include_tasks: node-exporter.yml
  when: node_exporter_enabled | bool
  tags:
    - monitoring
    - node-exporter

- name: Setup health checks
  include_tasks: health-checks.yml
  when: health_checks_enabled | bool
  tags:
    - monitoring
    - health-checks

- name: Configure system monitoring
  include_tasks: system-monitoring.yml
  tags:
    - monitoring
    - system

- name: Setup application monitoring
  include_tasks: app-monitoring.yml
  tags:
    - monitoring
    - application

- name: Configure Docker monitoring
  include_tasks: docker-monitoring.yml
  when: docker_monitoring_enabled | bool
  tags:
    - monitoring
    - docker

- name: Setup log monitoring
  include_tasks: log-monitoring.yml
  when: log_monitoring_enabled | bool
  tags:
    - monitoring
    - logs

- name: Configure alerting
  include_tasks: alerting.yml
  when: alerting_enabled | bool
  tags:
    - monitoring
    - alerting

- name: Setup monitoring cleanup
  include_tasks: cleanup.yml
  tags:
    - monitoring
    - cleanup

View File

@@ -1,79 +0,0 @@
---
# Monitoring Infrastructure Setup
- name: Create monitoring user
user:
name: "{{ monitoring_user }}"
group: "{{ monitoring_group }}"
system: true
shell: /bin/bash
home: "{{ monitoring_home }}"
create_home: true
tags:
- monitoring
- users
- name: Create monitoring group
group:
name: "{{ monitoring_group }}"
system: true
tags:
- monitoring
- users
- name: Create monitoring directories
file:
path: "{{ item }}"
state: directory
owner: "{{ monitoring_user }}"
group: "{{ monitoring_group }}"
mode: '0755'
loop:
- "{{ monitoring_home }}"
- "{{ monitoring_scripts_dir }}"
- "{{ monitoring_logs_dir }}"
- "{{ monitoring_config_dir }}"
- /etc/systemd/system
tags:
- monitoring
- directories
- name: Install monitoring dependencies
package:
name:
- curl
- wget
- jq
- bc
- mailutils
- logrotate
state: present
tags:
- monitoring
- packages
- name: Create monitoring configuration file
template:
src: monitoring.conf.j2
dest: "{{ monitoring_config_dir }}/monitoring.conf"
owner: "{{ monitoring_user }}"
group: "{{ monitoring_group }}"
mode: '0644'
tags:
- monitoring
- config
- name: Create monitoring utility scripts
template:
src: "{{ item }}.sh.j2"
dest: "{{ monitoring_scripts_dir }}/{{ item }}.sh"
owner: "{{ monitoring_user }}"
group: "{{ monitoring_group }}"
mode: '0755'
loop:
- monitoring-utils
- send-alert
- check-thresholds
tags:
- monitoring
- scripts

View File

@@ -1,108 +0,0 @@
---
# System Resource Monitoring
- name: Create system monitoring script
template:
src: system-monitor.sh.j2
dest: "{{ monitoring_scripts_dir }}/system-monitor.sh"
owner: "{{ monitoring_user }}"
group: "{{ monitoring_group }}"
mode: '0755'
tags:
- monitoring
- system
- scripts
- name: Create resource usage checker
template:
src: check-resources.sh.j2
dest: "{{ monitoring_scripts_dir }}/check-resources.sh"
owner: "{{ monitoring_user }}"
group: "{{ monitoring_group }}"
mode: '0755'
tags:
- monitoring
- system
- resources
- name: Create disk usage monitoring script
template:
src: check-disk-usage.sh.j2
dest: "{{ monitoring_scripts_dir }}/check-disk-usage.sh"
owner: "{{ monitoring_user }}"
group: "{{ monitoring_group }}"
mode: '0755'
tags:
- monitoring
- system
- disk
- name: Create memory monitoring script
template:
src: check-memory.sh.j2
dest: "{{ monitoring_scripts_dir }}/check-memory.sh"
owner: "{{ monitoring_user }}"
group: "{{ monitoring_group }}"
mode: '0755'
tags:
- monitoring
- system
- memory
- name: Create CPU monitoring script
template:
src: check-cpu.sh.j2
dest: "{{ monitoring_scripts_dir }}/check-cpu.sh"
owner: "{{ monitoring_user }}"
group: "{{ monitoring_group }}"
mode: '0755'
tags:
- monitoring
- system
- cpu
- name: Create load average monitoring script
template:
src: check-load.sh.j2
dest: "{{ monitoring_scripts_dir }}/check-load.sh"
owner: "{{ monitoring_user }}"
group: "{{ monitoring_group }}"
mode: '0755'
tags:
- monitoring
- system
- load
- name: Schedule system resource monitoring
cron:
name: "System resource monitoring"
minute: "*/5"
job: "{{ monitoring_scripts_dir }}/system-monitor.sh"
user: "{{ monitoring_user }}"
tags:
- monitoring
- system
- cron
- name: Schedule resource usage alerts
cron:
name: "Resource usage alerts"
minute: "*/10"
job: "{{ monitoring_scripts_dir }}/check-resources.sh"
user: "{{ monitoring_user }}"
tags:
- monitoring
- system
- alerts
- name: Create system monitoring log rotation
template:
src: system-monitoring-logrotate.j2
dest: /etc/logrotate.d/system-monitoring
owner: root
group: root
mode: '0644'
tags:
- monitoring
- system
- logrotate

View File

@@ -1,95 +0,0 @@
#!/bin/bash
# System Resource Monitoring Script
# Custom PHP Framework - {{ environment | upper }}
# Generated by Ansible
set -euo pipefail
# Configuration
LOG_DIR="{{ monitoring_logs_dir }}"
LOG_FILE="${LOG_DIR}/system-monitor.log"
ALERT_SCRIPT="{{ monitoring_scripts_dir }}/send-alert.sh"
CONFIG_FILE="{{ monitoring_config_dir }}/monitoring.conf"
# Load configuration
source "${CONFIG_FILE}"
# Create log directory if it doesn't exist
mkdir -p "${LOG_DIR}"
# Function to log with timestamp
log() {
echo "[$(date '+%Y-%m-%d %H:%M:%S')] $*" >> "${LOG_FILE}"
}
# Function to check CPU usage
check_cpu() {
local cpu_usage
cpu_usage=$(top -bn1 | grep "Cpu(s)" | sed "s/.*, *\([0-9.]*\)%* id.*/\1/" | awk '{print 100 - $1}')
cpu_usage=${cpu_usage%.*} # Remove decimal part
log "CPU Usage: ${cpu_usage}%"
if (( cpu_usage > {{ monitoring_thresholds.cpu_usage_critical }} )); then
"${ALERT_SCRIPT}" "CRITICAL" "CPU Usage Critical" "CPU usage is ${cpu_usage}% (Critical threshold: {{ monitoring_thresholds.cpu_usage_critical }}%)"
elif (( cpu_usage > {{ monitoring_thresholds.cpu_usage_warning }} )); then
"${ALERT_SCRIPT}" "WARNING" "CPU Usage High" "CPU usage is ${cpu_usage}% (Warning threshold: {{ monitoring_thresholds.cpu_usage_warning }}%)"
fi
}
# Function to check memory usage
check_memory() {
local mem_usage
mem_usage=$(free | grep Mem | awk '{printf "%.0f", $3/$2 * 100.0}')
log "Memory Usage: ${mem_usage}%"
if (( mem_usage > {{ monitoring_thresholds.memory_usage_critical }} )); then
"${ALERT_SCRIPT}" "CRITICAL" "Memory Usage Critical" "Memory usage is ${mem_usage}% (Critical threshold: {{ monitoring_thresholds.memory_usage_critical }}%)"
elif (( mem_usage > {{ monitoring_thresholds.memory_usage_warning }} )); then
"${ALERT_SCRIPT}" "WARNING" "Memory Usage High" "Memory usage is ${mem_usage}% (Warning threshold: {{ monitoring_thresholds.memory_usage_warning }}%)"
fi
}
# Function to check disk usage
check_disk() {
local disk_usage
disk_usage=$(df / | tail -1 | awk '{print $5}' | sed 's/%//')
log "Disk Usage: ${disk_usage}%"
if (( disk_usage > {{ monitoring_thresholds.disk_usage_critical }} )); then
"${ALERT_SCRIPT}" "CRITICAL" "Disk Usage Critical" "Disk usage is ${disk_usage}% (Critical threshold: {{ monitoring_thresholds.disk_usage_critical }}%)"
elif (( disk_usage > {{ monitoring_thresholds.disk_usage_warning }} )); then
"${ALERT_SCRIPT}" "WARNING" "Disk Usage High" "Disk usage is ${disk_usage}% (Warning threshold: {{ monitoring_thresholds.disk_usage_warning }}%)"
fi
}
# Function to check load average
check_load() {
local load_avg
load_avg=$(uptime | awk -F'load average:' '{ print $2 }' | cut -d, -f1 | tr -d ' ')
log "Load Average: ${load_avg}"
if (( $(echo "${load_avg} > {{ monitoring_thresholds.load_average_critical }}" | bc -l) )); then
"${ALERT_SCRIPT}" "CRITICAL" "Load Average Critical" "Load average is ${load_avg} (Critical threshold: {{ monitoring_thresholds.load_average_critical }})"
elif (( $(echo "${load_avg} > {{ monitoring_thresholds.load_average_warning }}" | bc -l) )); then
"${ALERT_SCRIPT}" "WARNING" "Load Average High" "Load average is ${load_avg} (Warning threshold: {{ monitoring_thresholds.load_average_warning }})"
fi
}
# Main monitoring function
main() {
log "Starting system monitoring check"
check_cpu
check_memory
check_disk
check_load
log "System monitoring check completed"
}
# Run main function
main "$@"
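
The memory check above turns `free` output into a rounded percentage via awk. The same computation can be exercised in isolation with fixed sample values (the numbers below are illustrative, not taken from a real host):

```shell
# Standalone sketch of check_memory's percentage math, using awk on fixed
# sample values instead of parsing live `free` output
used_kb=12000000
total_kb=16000000
mem_usage=$(awk -v u="$used_kb" -v t="$total_kb" 'BEGIN { printf "%.0f", u / t * 100 }')
echo "Memory Usage: ${mem_usage}%"
# prints: Memory Usage: 75%
```

Keeping the arithmetic in awk avoids bash's integer-only `$(( ))` while still producing a whole number the threshold comparison can consume.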


@@ -1,184 +0,0 @@
---
# Nginx Proxy Role Default Variables
# Nginx Installation
nginx_version: "latest"
nginx_package: nginx
nginx_service: nginx
nginx_user: www-data
nginx_group: www-data
# SSL Configuration
ssl_provider: "{{ ssl_provider | default('letsencrypt') }}"
ssl_email: "{{ ssl_email }}"
ssl_certificate_path: "{{ ssl_certificate_path | default('/etc/letsencrypt/live/' + domain_name) }}"
ssl_protocols:
- TLSv1.2
- TLSv1.3
ssl_ciphers: "ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384"
ssl_prefer_server_ciphers: true
ssl_session_cache: "shared:SSL:10m"
ssl_session_timeout: "1d"
ssl_session_tickets: false
ssl_stapling: true
ssl_stapling_verify: true
# HSTS Configuration
hsts_enabled: true
hsts_max_age: 63072000 # 2 years
hsts_include_subdomains: true
hsts_preload: true
# Security Headers
security_headers:
X-Frame-Options: "SAMEORIGIN"
X-Content-Type-Options: "nosniff"
X-XSS-Protection: "1; mode=block"
Referrer-Policy: "strict-origin-when-cross-origin"
Permissions-Policy: "geolocation=(), microphone=(), camera=()"
Content-Security-Policy: "default-src 'self'; script-src 'self' 'unsafe-inline'; style-src 'self' 'unsafe-inline'; img-src 'self' data: https:; font-src 'self' data:; connect-src 'self'"
# Rate Limiting
rate_limiting_enabled: true
rate_limit_zone: "api"
rate_limit_requests: "10r/s"
rate_limit_burst: 20
rate_limit_nodelay: true
# Upstream Configuration
upstream_servers:
- name: php-backend
servers:
- address: "127.0.0.1:9000"
weight: 1
max_fails: 3
fail_timeout: 30s
keepalive: 32
keepalive_requests: 100
keepalive_timeout: 60s
# Virtual Hosts
nginx_vhosts:
- server_name: "{{ domain_name }}"
listen: "443 ssl http2"
root: "/var/www/html/public"
index: "index.php index.html"
ssl_certificate: "{{ ssl_certificate_path }}/fullchain.pem"
ssl_certificate_key: "{{ ssl_certificate_path }}/privkey.pem"
access_log: "/var/log/nginx/{{ domain_name }}-access.log main"
error_log: "/var/log/nginx/{{ domain_name }}-error.log"
extra_parameters: |
# PHP-FPM Configuration
location ~ \.php$ {
try_files $uri =404;
fastcgi_split_path_info ^(.+\.php)(/.+)$;
fastcgi_pass php-backend;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
include fastcgi_params;
fastcgi_param HTTPS on;
fastcgi_param HTTP_SCHEME https;
}
# API Rate Limiting
location /api/ {
limit_req zone={{ rate_limit_zone }} burst={{ rate_limit_burst }}{{ ' nodelay' if rate_limit_nodelay else '' }};
try_files $uri $uri/ /index.php$is_args$args;
}
# Static Assets
location ~* \.(js|css|png|jpg|jpeg|gif|ico|svg|woff|woff2|ttf|eot)$ {
expires 1y;
add_header Cache-Control "public, immutable";
access_log off;
}
# Security
location ~ /\.ht {
deny all;
}
location ~ /\. {
deny all;
}
# HTTP to HTTPS redirect
nginx_redirect_vhost:
server_name: "{{ domain_name }}"
listen: "80"
return: "301 https://$server_name$request_uri"
# Global Nginx Configuration
nginx_worker_processes: "{{ nginx_worker_processes | default('auto') }}"
nginx_worker_connections: "{{ nginx_worker_connections | default(1024) }}"
nginx_multi_accept: true
nginx_sendfile: true
nginx_tcp_nopush: true
nginx_tcp_nodelay: true
nginx_keepalive_timeout: 65
nginx_keepalive_requests: 100
nginx_server_tokens: false
nginx_client_max_body_size: "100M"
nginx_client_body_timeout: 60
nginx_client_header_timeout: 60
nginx_send_timeout: 60
# Logging Configuration
nginx_access_log_format: |
'$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for" '
'$request_time $upstream_response_time'
nginx_error_log_level: "{{ log_level | default('warn') }}"
# Gzip Configuration
nginx_gzip: true
nginx_gzip_vary: true
nginx_gzip_proxied: any
nginx_gzip_comp_level: 6
nginx_gzip_types:
- text/plain
- text/css
- text/xml
- text/javascript
- application/javascript
- application/json
- application/xml+rss
- application/atom+xml
- image/svg+xml
# Cache Configuration
nginx_cache_enabled: true
nginx_cache_path: "/var/cache/nginx"
nginx_cache_levels: "1:2"
nginx_cache_keys_zone: "framework_cache:10m"
nginx_cache_max_size: "1g"
nginx_cache_inactive: "60m"
nginx_cache_use_temp_path: false
# Real IP Configuration
nginx_real_ip_header: "X-Forwarded-For"
nginx_set_real_ip_from:
- "127.0.0.1"
- "10.0.0.0/8"
- "172.16.0.0/12"
- "192.168.0.0/16"
# Let's Encrypt Configuration
letsencrypt_enabled: "{{ ssl_provider == 'letsencrypt' }}"
letsencrypt_email: "{{ ssl_email }}"
letsencrypt_domains:
- "{{ domain_name }}"
letsencrypt_webroot_path: "/var/www/letsencrypt"
letsencrypt_renewal_cron: true
letsencrypt_renewal_user: root
letsencrypt_renewal_minute: "30"
letsencrypt_renewal_hour: "2"
# Monitoring and Status
nginx_status_enabled: "{{ monitoring_enabled | default(true) }}"
nginx_status_location: "/nginx_status"
nginx_status_allowed_ips:
- "127.0.0.1"
- "::1"
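
Rendered through a template, the rate-limiting variables above would produce nginx directives along these lines (a sketch of the expected output, not the actual `rate-limiting.conf.j2` shipped in this repo):

```nginx
# Hypothetical rendering of rate_limit_zone/rate_limit_requests/rate_limit_burst
limit_req_zone $binary_remote_addr zone=api:10m rate=10r/s;

location /api/ {
    limit_req zone=api burst=20 nodelay;
}
```

The zone declaration must live at `http` level, while the `limit_req` reference belongs inside the API location block.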


@@ -1,53 +0,0 @@
---
# Nginx Proxy Role Handlers
- name: restart nginx
service:
name: "{{ nginx_service }}"
state: restarted
listen: restart nginx
- name: reload nginx
service:
name: "{{ nginx_service }}"
state: reloaded
listen: reload nginx
- name: start nginx
service:
name: "{{ nginx_service }}"
state: started
enabled: true
listen: start nginx
- name: stop nginx
service:
name: "{{ nginx_service }}"
state: stopped
listen: stop nginx
- name: validate nginx config
command: nginx -t
register: nginx_config_test
changed_when: false
failed_when: nginx_config_test.rc != 0
listen: validate nginx config
- name: reload systemd
systemd:
daemon_reload: true
listen: reload systemd
- name: renew letsencrypt certificates
command: certbot renew --quiet
listen: renew letsencrypt certificates
when: letsencrypt_enabled | bool
- name: update nginx status
uri:
url: "http://localhost/{{ nginx_status_location }}"
method: GET
status_code: 200
listen: update nginx status
when: nginx_status_enabled | bool
ignore_errors: true
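
Because each handler declares a `listen` topic, tasks notify the topic name rather than the handler's own name; a hypothetical consuming task looks like:

```yaml
# Sketch: any task template change fans out to every handler listening
# on "reload nginx" (file names here are examples only)
- name: Deploy vhost template
  template:
    src: vhost.conf.j2
    dest: /etc/nginx/sites-available/example
  notify: reload nginx
```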


@@ -1,31 +0,0 @@
---
galaxy_info:
role_name: nginx-proxy
author: Custom PHP Framework Team
description: Nginx reverse proxy with SSL termination and security headers
company: michaelschiemer.de
license: MIT
min_ansible_version: 2.12
platforms:
- name: Ubuntu
versions:
- "20.04"
- "22.04"
- "24.04"
- name: Debian
versions:
- "11"
- "12"
galaxy_tags:
- nginx
- proxy
- ssl
- security
- web
- letsencrypt
dependencies: []
collections:
- community.crypto
- ansible.posix


@@ -1,144 +0,0 @@
---
# Nginx Main Configuration
- name: Backup original nginx.conf
copy:
src: /etc/nginx/nginx.conf
dest: /etc/nginx/nginx.conf.backup
remote_src: true
owner: root
group: root
mode: '0644'
ignore_errors: true
tags:
- nginx
- config
- backup
- name: Configure main nginx.conf
template:
src: nginx.conf.j2
dest: /etc/nginx/nginx.conf
owner: root
group: root
mode: '0644'
backup: true
notify: reload nginx
tags:
- nginx
- config
- name: Configure upstream servers
template:
src: upstream.conf.j2
dest: /etc/nginx/conf.d/upstream.conf
owner: root
group: root
mode: '0644'
notify: reload nginx
tags:
- nginx
- upstream
- name: Configure security headers
template:
src: security-headers.conf.j2
dest: /etc/nginx/conf.d/security-headers.conf
owner: root
group: root
mode: '0644'
notify: reload nginx
tags:
- nginx
- security
- name: Configure SSL settings
template:
src: ssl-settings.conf.j2
dest: /etc/nginx/conf.d/ssl-settings.conf
owner: root
group: root
mode: '0644'
notify: reload nginx
tags:
- nginx
- ssl
- name: Configure gzip compression
template:
src: gzip.conf.j2
dest: /etc/nginx/conf.d/gzip.conf
owner: root
group: root
mode: '0644'
notify: reload nginx
tags:
- nginx
- compression
- name: Configure caching
template:
src: cache.conf.j2
dest: /etc/nginx/conf.d/cache.conf
owner: root
group: root
mode: '0644'
when: nginx_cache_enabled | bool
notify: reload nginx
tags:
- nginx
- cache
- name: Configure real IP detection
template:
src: real-ip.conf.j2
dest: /etc/nginx/conf.d/real-ip.conf
owner: root
group: root
mode: '0644'
notify: reload nginx
tags:
- nginx
- real-ip
- name: Remove default site
file:
path: "{{ item }}"
state: absent
loop:
- /etc/nginx/sites-enabled/default
- /var/www/html/index.nginx-debian.html
notify: reload nginx
tags:
- nginx
- cleanup
- name: Create custom error pages
template:
src: "{{ item }}.html.j2"
dest: "/var/www/html/{{ item }}.html"
owner: "{{ nginx_user }}"
group: "{{ nginx_group }}"
mode: '0644'
loop:
- 403
- 404
- 500
- 502
- 503
- 504
tags:
- nginx
- error-pages
- name: Configure custom error pages
template:
src: error-pages.conf.j2
dest: /etc/nginx/conf.d/error-pages.conf
owner: root
group: root
mode: '0644'
notify: reload nginx
tags:
- nginx
- error-pages


@@ -1,86 +0,0 @@
---
# Nginx Installation
- name: Update package cache
  apt:
    update_cache: true
    cache_valid_time: 3600
tags:
- nginx
- packages
- name: Install Nginx and dependencies
package:
name:
- "{{ nginx_package }}"
- openssl
- ca-certificates
state: present
tags:
- nginx
- packages
- name: Install Let's Encrypt client (Certbot)
package:
name:
- certbot
- python3-certbot-nginx
state: present
when: letsencrypt_enabled | bool
tags:
- nginx
- ssl
- letsencrypt
- name: Create Nginx directories
file:
path: "{{ item }}"
state: directory
owner: root
group: root
mode: '0755'
loop:
- /etc/nginx/sites-available
- /etc/nginx/sites-enabled
- /etc/nginx/conf.d
- /var/log/nginx
- "{{ nginx_cache_path }}"
- /var/www/html
tags:
- nginx
- directories
- name: Create Let's Encrypt webroot directory
file:
path: "{{ letsencrypt_webroot_path }}"
state: directory
owner: "{{ nginx_user }}"
group: "{{ nginx_group }}"
mode: '0755'
when: letsencrypt_enabled | bool
tags:
- nginx
- ssl
- directories
- name: Set proper permissions on log directory
file:
path: /var/log/nginx
state: directory
owner: "{{ nginx_user }}"
group: "{{ nginx_group }}"
mode: '0755'
tags:
- nginx
- permissions
- name: Ensure Nginx user exists
user:
name: "{{ nginx_user }}"
system: true
shell: /bin/false
home: /var/cache/nginx
create_home: false
tags:
- nginx
- users


@@ -1,13 +0,0 @@
---
# Log Rotation Configuration for Nginx
- name: Configure nginx log rotation
template:
src: nginx-logrotate.j2
dest: /etc/logrotate.d/nginx
owner: root
group: root
mode: '0644'
tags:
- nginx
- logging


@@ -1,65 +0,0 @@
---
# Nginx Proxy Role - Main Tasks
- name: Include OS-specific variables
include_vars: "{{ ansible_os_family }}.yml"
tags:
- nginx
- config
- name: Install Nginx and prerequisites
include_tasks: install-nginx.yml
tags:
- nginx
- install
- name: Configure Nginx
include_tasks: configure-nginx.yml
tags:
- nginx
- config
- name: Setup SSL certificates
include_tasks: ssl-setup.yml
tags:
- nginx
- ssl
- name: Configure security headers and hardening
include_tasks: security-config.yml
tags:
- nginx
- security
- name: Setup virtual hosts
include_tasks: vhosts-config.yml
tags:
- nginx
- vhosts
- name: Configure rate limiting
include_tasks: rate-limiting.yml
when: rate_limiting_enabled | bool
tags:
- nginx
- security
- rate-limit
- name: Setup monitoring and status
include_tasks: monitoring.yml
when: nginx_status_enabled | bool
tags:
- nginx
- monitoring
- name: Configure log rotation
include_tasks: log-rotation.yml
tags:
- nginx
- logging
- name: Validate configuration and start services
include_tasks: validation.yml
tags:
- nginx
- validation


@@ -1,14 +0,0 @@
---
# Monitoring Configuration for Nginx
- name: Configure Nginx status endpoint
template:
src: status.conf.j2
dest: "{{ nginx_conf_d_path }}/status.conf"
owner: root
group: root
mode: '0644'
notify: restart nginx
tags:
- nginx
- monitoring


@@ -1,15 +0,0 @@
---
# Rate Limiting Configuration for Nginx
- name: Create rate limiting configuration
template:
src: rate-limiting.conf.j2
dest: "{{ nginx_conf_d_path }}/rate-limiting.conf"
owner: root
group: root
mode: '0644'
notify: restart nginx
tags:
- nginx
- security
- rate-limit


@@ -1,38 +0,0 @@
---
# Security Configuration for Nginx
- name: Create security headers configuration
template:
src: security-headers.conf.j2
dest: "{{ nginx_conf_d_path }}/security-headers.conf"
owner: root
group: root
mode: '0644'
notify: restart nginx
tags:
- nginx
- security
- headers
- name: Configure SSL settings
template:
src: ssl-settings.conf.j2
dest: "{{ nginx_conf_d_path }}/ssl-settings.conf"
owner: root
group: root
mode: '0644'
when: ssl_provider is defined
notify: restart nginx
tags:
- nginx
- ssl
- security
- name: Remove default Nginx site
file:
path: "{{ nginx_sites_enabled_path }}/default"
state: absent
notify: restart nginx
tags:
- nginx
- security


@@ -1,162 +0,0 @@
---
# SSL Certificate Setup
- name: Create SSL directories
file:
path: "{{ item }}"
state: directory
owner: root
group: root
mode: '0755'
loop:
- /etc/ssl/private
- /etc/ssl/certs
- "{{ ssl_certificate_path | dirname }}"
tags:
- nginx
- ssl
- directories
- name: Generate DH parameters for SSL
openssl_dhparam:
path: /etc/ssl/certs/dhparam.pem
size: 2048
owner: root
group: root
mode: '0644'
tags:
- nginx
- ssl
- dhparam
- name: Generate self-signed certificate for initial setup
block:
- name: Generate private key
openssl_privatekey:
path: /etc/ssl/private/{{ domain_name }}.key
size: 2048
type: RSA
owner: root
group: root
mode: '0600'
- name: Generate self-signed certificate
openssl_certificate:
path: /etc/ssl/certs/{{ domain_name }}.crt
privatekey_path: /etc/ssl/private/{{ domain_name }}.key
provider: selfsigned
common_name: "{{ domain_name }}"
subject_alt_name:
- "DNS:{{ domain_name }}"
- "DNS:www.{{ domain_name }}"
owner: root
group: root
mode: '0644'
when: ssl_provider == 'self-signed' or environment == 'development'
tags:
- nginx
- ssl
- self-signed
- name: Setup Let's Encrypt certificates
block:
- name: Check if certificates already exist
stat:
path: "{{ ssl_certificate_path }}/fullchain.pem"
register: letsencrypt_cert
- name: Create temporary Nginx config for Let's Encrypt
template:
src: nginx-letsencrypt-temp.conf.j2
dest: /etc/nginx/sites-available/letsencrypt-temp
owner: root
group: root
mode: '0644'
when: not letsencrypt_cert.stat.exists
- name: Enable temporary Nginx config
file:
src: /etc/nginx/sites-available/letsencrypt-temp
dest: /etc/nginx/sites-enabled/letsencrypt-temp
state: link
when: not letsencrypt_cert.stat.exists
notify: reload nginx
- name: Start Nginx for Let's Encrypt validation
service:
name: "{{ nginx_service }}"
state: started
enabled: true
when: not letsencrypt_cert.stat.exists
- name: Obtain Let's Encrypt certificate
command: >
certbot certonly
--webroot
--webroot-path {{ letsencrypt_webroot_path }}
--email {{ letsencrypt_email }}
--agree-tos
--non-interactive
--expand
{% for domain in letsencrypt_domains %}
-d {{ domain }}
{% endfor %}
when: not letsencrypt_cert.stat.exists
tags:
- ssl
- letsencrypt
- certificate
- name: Remove temporary Nginx config
file:
path: /etc/nginx/sites-enabled/letsencrypt-temp
state: absent
when: not letsencrypt_cert.stat.exists
notify: reload nginx
- name: Setup automatic certificate renewal
cron:
name: "Renew Let's Encrypt certificates"
minute: "{{ letsencrypt_renewal_minute }}"
hour: "{{ letsencrypt_renewal_hour }}"
job: "certbot renew --quiet && systemctl reload nginx"
user: "{{ letsencrypt_renewal_user }}"
when: letsencrypt_renewal_cron | bool
when: letsencrypt_enabled | bool and environment != 'development'
tags:
- nginx
- ssl
- letsencrypt
- name: Set up SSL certificate paths
set_fact:
ssl_cert_file: >-
{%- if letsencrypt_enabled and environment != 'development' -%}
{{ ssl_certificate_path }}/fullchain.pem
{%- else -%}
/etc/ssl/certs/{{ domain_name }}.crt
{%- endif -%}
ssl_key_file: >-
{%- if letsencrypt_enabled and environment != 'development' -%}
{{ ssl_certificate_path }}/privkey.pem
{%- else -%}
/etc/ssl/private/{{ domain_name }}.key
{%- endif -%}
tags:
- nginx
- ssl
- config
- name: Verify SSL certificate files exist
stat:
path: "{{ item }}"
register: ssl_files_check
loop:
- "{{ ssl_cert_file }}"
- "{{ ssl_key_file }}"
failed_when: not ssl_files_check.stat.exists
tags:
- nginx
- ssl
- verification


@@ -1,37 +0,0 @@
---
# Validation and Service Management for Nginx
- name: Test nginx configuration
command: nginx -t
register: nginx_test
changed_when: false
tags:
- nginx
- validation
- name: Display nginx test results
debug:
msg: "Nginx configuration test: {{ 'PASSED' if nginx_test.rc == 0 else 'FAILED' }}"
tags:
- nginx
- validation
- name: Start and enable nginx service
systemd:
name: nginx
state: started
enabled: true
daemon_reload: true
tags:
- nginx
- service
- name: Wait for nginx to be ready
wait_for:
port: 80
host: "{{ ansible_default_ipv4.address | default('127.0.0.1') }}"
delay: 2
timeout: 30
tags:
- nginx
- validation


@@ -1,50 +0,0 @@
---
# Virtual Hosts Configuration for Nginx
- name: Create virtual host configuration
template:
src: vhost.conf.j2
dest: "{{ nginx_sites_available_path }}/{{ domain_name }}"
owner: root
group: root
mode: '0644'
notify: restart nginx
tags:
- nginx
- vhosts
- name: Enable virtual host
file:
src: "{{ nginx_sites_available_path }}/{{ domain_name }}"
dest: "{{ nginx_sites_enabled_path }}/{{ domain_name }}"
state: link
notify: restart nginx
tags:
- nginx
- vhosts
- name: Create HTTP to HTTPS redirect configuration
template:
src: redirect-vhost.conf.j2
dest: "{{ nginx_sites_available_path }}/{{ domain_name }}-redirect"
owner: root
group: root
mode: '0644'
when: ssl_provider is defined and environment != 'development'
notify: restart nginx
tags:
- nginx
- ssl
- redirect
- name: Enable HTTP to HTTPS redirect
file:
src: "{{ nginx_sites_available_path }}/{{ domain_name }}-redirect"
dest: "{{ nginx_sites_enabled_path }}/{{ domain_name }}-redirect"
state: link
when: ssl_provider is defined and environment != 'development'
notify: restart nginx
tags:
- nginx
- ssl
- redirect


@@ -1,48 +0,0 @@
# Nginx Configuration for Custom PHP Framework
# Environment: {{ environment | upper }}
# Generated by Ansible - Do not edit manually
user {{ nginx_user }};
worker_processes {{ nginx_worker_processes }};
pid /run/nginx.pid;
# Load modules
include /etc/nginx/modules-enabled/*.conf;
events {
worker_connections {{ nginx_worker_connections }};
multi_accept {{ nginx_multi_accept | ternary('on', 'off') }};
use epoll;
}
http {
# Basic Settings
sendfile {{ nginx_sendfile | ternary('on', 'off') }};
tcp_nopush {{ nginx_tcp_nopush | ternary('on', 'off') }};
tcp_nodelay {{ nginx_tcp_nodelay | ternary('on', 'off') }};
keepalive_timeout {{ nginx_keepalive_timeout }};
keepalive_requests {{ nginx_keepalive_requests }};
types_hash_max_size 2048;
server_tokens {{ nginx_server_tokens | ternary('on', 'off') }};
# Client Settings
client_max_body_size {{ nginx_client_max_body_size }};
client_body_timeout {{ nginx_client_body_timeout }};
client_header_timeout {{ nginx_client_header_timeout }};
send_timeout {{ nginx_send_timeout }};
# MIME Types
include /etc/nginx/mime.types;
default_type application/octet-stream;
# Logging Format
log_format main {{ nginx_access_log_format }};
# Default Logging
access_log /var/log/nginx/access.log main;
error_log /var/log/nginx/error.log {{ nginx_error_log_level }};
# Include additional configuration files
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
}


@@ -1,49 +0,0 @@
# Security Headers Configuration
# Custom PHP Framework - {{ environment | upper }}
# NOTE: the location blocks below are only valid inside a server {} context;
# include this file from a server block rather than at http level.
# Security Headers
{% for header, value in security_headers.items() %}
add_header {{ header }} "{{ value }}" always;
{% endfor %}
# HSTS (HTTP Strict Transport Security)
{% if hsts_enabled %}
add_header Strict-Transport-Security "max-age={{ hsts_max_age }}{% if hsts_include_subdomains %}; includeSubDomains{% endif %}{% if hsts_preload %}; preload{% endif %}" always;
{% endif %}
# Additional Security Measures
add_header X-Robots-Tag "noindex, nofollow, nosnippet, noarchive" always;
# Server Information Hiding (requires the ngx_http_headers_more module)
more_clear_headers 'Server';
more_set_headers 'Server: Custom-Framework/{{ environment }}';
# Prevent clickjacking for admin areas
location /admin {
add_header X-Frame-Options "DENY" always;
}
# Additional security for API endpoints
location /api {
# Rate limiting is handled in separate config
add_header X-Content-Type-Options "nosniff" always;
add_header X-XSS-Protection "1; mode=block" always;
}
# Disable access to sensitive files
location ~* \.(env|git|gitignore|gitattributes|htaccess|htpasswd|ini|log|sh|sql|conf)$ {
deny all;
return 404;
}
# Prevent access to hidden files and directories
location ~ /\. {
deny all;
return 404;
}
# Block access to backup and temporary files
location ~* \.(bak|backup|swp|tmp|temp|~)$ {
deny all;
return 404;
}


@@ -1,27 +0,0 @@
# SSL Configuration for Custom PHP Framework
# Environment: {{ environment | upper }}
# SSL Protocols and Ciphers
ssl_protocols {{ ssl_protocols | join(' ') }};
ssl_ciphers {{ ssl_ciphers }};
ssl_prefer_server_ciphers {{ ssl_prefer_server_ciphers | ternary('on', 'off') }};
# SSL Session Caching
ssl_session_cache {{ ssl_session_cache }};
ssl_session_timeout {{ ssl_session_timeout }};
ssl_session_tickets {{ ssl_session_tickets | ternary('on', 'off') }};
# OCSP Stapling
ssl_stapling {{ ssl_stapling | ternary('on', 'off') }};
ssl_stapling_verify {{ ssl_stapling_verify | ternary('on', 'off') }};
resolver 8.8.8.8 8.8.4.4 valid=300s;
resolver_timeout 5s;
# DH Parameters
ssl_dhparam /etc/ssl/certs/dhparam.pem;
# SSL Security Headers
add_header Strict-Transport-Security "max-age={{ hsts_max_age }}; includeSubDomains; preload" always;
# SSL Buffer Size (performance optimization)
ssl_buffer_size 4k;


@@ -1,37 +0,0 @@
---
# OS-specific variables for Debian/Ubuntu systems
# Nginx package and service names
nginx_package: nginx
nginx_service: nginx
nginx_user: www-data
nginx_group: www-data
# System paths
nginx_conf_path: /etc/nginx/nginx.conf
nginx_sites_available_path: /etc/nginx/sites-available
nginx_sites_enabled_path: /etc/nginx/sites-enabled
nginx_conf_d_path: /etc/nginx/conf.d
nginx_log_path: /var/log/nginx
nginx_pid_file: /run/nginx.pid
# SSL paths
ssl_cert_path: /etc/ssl/certs
ssl_private_path: /etc/ssl/private
dhparam_path: /etc/ssl/certs/dhparam.pem
# Let's Encrypt paths
certbot_package: certbot
letsencrypt_path: /etc/letsencrypt
letsencrypt_webroot_path: /var/www/letsencrypt
# System packages
nginx_dependencies:
- nginx
- certbot
- python3-certbot-nginx
- openssl
- python3-openssl
# PHP-FPM socket path (if needed)
php_fpm_socket: unix:/run/php/php8.4-fpm.sock


@@ -1,32 +0,0 @@
#!/bin/bash
# Simple Production Deployment Script
# Usage: ./deploy.sh
set -e
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
INFRA_DIR="$(dirname "$SCRIPT_DIR")"
cd "$INFRA_DIR"
echo "🚀 Deploying to production..."
echo "📍 Infrastructure directory: $INFRA_DIR"
echo ""
# Check if vault password file exists
if [[ ! -f ".vault_pass" ]]; then
echo "❌ Vault password file not found: .vault_pass"
echo " Create this file with your Ansible Vault password"
exit 1
fi
# Run deployment playbook
ansible-playbook \
-i inventories/production/hosts.yml \
playbooks/deploy-rsync-based.yml \
--vault-password-file .vault_pass
echo ""
echo "✅ Deployment complete!"
echo "🔍 Check status:"
echo " ansible web_servers -i inventories/production/hosts.yml -m shell -a 'docker ps' --vault-password-file .vault_pass"


@@ -1,46 +0,0 @@
#!/bin/bash
# Quick Production Code Sync
# Usage: ./quick-sync.sh
# Note: Does NOT update .env.production (use update-env.sh for that)
set -e
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(dirname "$(dirname "$SCRIPT_DIR")")"
INFRA_DIR="$PROJECT_ROOT/deployment/infrastructure"
cd "$PROJECT_ROOT"
echo "🔄 Quick sync to production (code only)..."
echo "📍 Project root: $PROJECT_ROOT"
echo ""
# Sync code changes (excludes .env files and development artifacts)
rsync -avz \
--exclude '.env' \
--exclude '.env.local' \
--exclude '.env.development' \
--exclude '.env.production' \
--exclude 'node_modules/' \
--exclude '.git/' \
--exclude 'vendor/' \
--exclude 'tests/' \
--exclude '.idea/' \
--exclude '.vscode/' \
--exclude '*.log' \
./ deploy@94.16.110.151:/home/deploy/michaelschiemer/current/
echo ""
echo "🔄 Restarting PHP and web containers..."
ansible web_servers \
-i "$INFRA_DIR/inventories/production/hosts.yml" \
-m shell \
-a "cd /home/deploy/michaelschiemer/current && docker compose restart php web" \
--vault-password-file "$INFRA_DIR/.vault_pass"
echo ""
echo "✅ Quick sync complete!"
echo ""
echo "⚠️ Note: This does NOT update .env.production"
echo " To update configuration, use: ./update-env.sh"
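
Before pointing the sync at the real host, rsync's exclude behaviour can be previewed locally with a dry run (the scratch directories and file names below are throwaway examples):

```shell
# Dry-run rsync between two scratch dirs to confirm .env.production is
# excluded while regular code files would be transferred
demo=$(mktemp -d)
mkdir -p "$demo/src" "$demo/dst"
touch "$demo/src/app.php" "$demo/src/.env.production"
rsync -an --exclude '.env.production' --out-format='%n' "$demo/src/" "$demo/dst/"
rm -rf "$demo"
```

`-n` (dry run) lists what would be copied without touching the destination, so the exclude list can be verified risk-free.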


@@ -1,74 +0,0 @@
#!/bin/bash
# Update Production .env.production File
# Usage: ./update-env.sh
set -e
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(dirname "$(dirname "$SCRIPT_DIR")")"
INFRA_DIR="$PROJECT_ROOT/deployment/infrastructure"
SOURCE_ENV="$PROJECT_ROOT/deployment/applications/environments/.env.production"
REMOTE_PATH="/home/deploy/michaelschiemer/shared/.env.production"
cd "$PROJECT_ROOT"
echo "🔍 Validating .env.production..."
if [[ ! -f "$SOURCE_ENV" ]]; then
echo "❌ Source .env.production not found at: $SOURCE_ENV"
exit 1
fi
# Validate required variables
REQUIRED_VARS=("DB_DRIVER" "DB_HOST" "DB_PORT" "DB_DATABASE" "DB_USERNAME" "DB_PASSWORD")
VALIDATION_FAILED=0
for var in "${REQUIRED_VARS[@]}"; do
if ! grep -q "^${var}=" "$SOURCE_ENV"; then
echo "❌ Missing required variable: $var"
VALIDATION_FAILED=1
fi
done
# Check PostgreSQL port
if ! grep -q "^DB_PORT=5432" "$SOURCE_ENV"; then
echo "⚠️ Warning: DB_PORT should be 5432 for PostgreSQL"
read -p "Continue anyway? (y/N): " -n 1 -r
echo
if [[ ! $REPLY =~ ^[Yy]$ ]]; then
exit 1
fi
fi
if [[ $VALIDATION_FAILED -eq 1 ]]; then
echo "❌ Validation failed"
exit 1
fi
echo "✅ Validation passed"
echo ""
echo "📤 Uploading .env.production to production server..."
ansible web_servers \
-i "$INFRA_DIR/inventories/production/hosts.yml" \
-m copy \
-a "src=$SOURCE_ENV dest=$REMOTE_PATH mode=0644" \
--vault-password-file "$INFRA_DIR/.vault_pass"
echo ""
echo "🔄 Restarting containers..."
ansible web_servers \
-i "$INFRA_DIR/inventories/production/hosts.yml" \
-m shell \
-a "cd /home/deploy/michaelschiemer/current && docker compose restart php web queue-worker" \
--vault-password-file "$INFRA_DIR/.vault_pass"
echo ""
echo "✅ .env.production updated and containers restarted!"
echo ""
echo "🔍 Verify:"
echo " curl -I https://michaelschiemer.de"
echo " (Should return HTTP/2 200 OK)"
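
The required-variable loop above can be exercised against a sample file to see how a missing key is reported (the file contents below are made up for the demo):

```shell
# Minimal reproduction of update-env.sh's validation loop on a sample env file
envfile=$(mktemp)
printf 'DB_DRIVER=pgsql\nDB_HOST=postgres\nDB_PORT=5432\n' > "$envfile"
for var in DB_DRIVER DB_HOST DB_PORT DB_DATABASE DB_USERNAME DB_PASSWORD; do
  if ! grep -q "^${var}=" "$envfile"; then
    echo "Missing required variable: $var"
  fi
done
rm -f "$envfile"
```

Anchoring the pattern with `^` ensures a commented-out `#DB_PASSWORD=` line does not count as present.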


@@ -1,3 +0,0 @@
# SECURITY: Never commit SSH keys or secrets to version control!
*
!.gitignore


@@ -1,22 +0,0 @@
---
# Fresh Server Setup Playbook for Netcup VPS
# Run this first on a fresh server installation
# Usage: ansible-playbook -i inventories/production/hosts.yml setup-fresh-server.yml
- import_playbook: playbooks/initial-server-setup.yml
when: fresh_server_setup | default(false)
- name: Switch to deploy user for infrastructure setup
hosts: web_servers
gather_facts: false
tasks:
- name: Update inventory configuration to use deploy user
debug:
msg:
- "Initial setup complete! Now update your inventory:"
- "1. Change ansible_user from 'root' to 'deploy'"
- "2. Set fresh_server_setup: false"
- "3. Run: ansible-playbook -i inventories/production/hosts.yml site.yml"
tags: always
when: fresh_server_setup | default(false)


@@ -1,298 +0,0 @@
---
# Master Site Playbook for Custom PHP Framework
# Coordinates different deployment scenarios (infrastructure setup, application deployment, rollbacks)
- name: Custom PHP Framework Infrastructure Deployment
hosts: all
become: true
gather_facts: true
vars:
# Deployment metadata
deployment_timestamp: "{{ ansible_date_time.epoch }}"
deployment_version: "{{ ansible_date_time.iso8601 }}"
pre_tasks:
- name: Display deployment information
debug:
msg:
- "Deploying Custom PHP Framework"
- "Environment: {{ environment | upper }}"
- "Domain: {{ domain_name }}"
- "PHP Version: {{ php_version }}"
- "Target Host: {{ inventory_hostname }}"
- "Deployment Time: {{ ansible_date_time.iso8601 }}"
tags: always
- name: Verify environment requirements
assert:
that:
- deploy_env is defined
- deploy_env in ['production', 'staging', 'development']
- domain_name is defined
- ssl_email is defined
- php_version == '8.4'
fail_msg: "Required variables are not properly defined"
success_msg: "Environment requirements verified"
tags: always
- name: Update system packages
  apt:
    update_cache: true
    upgrade: safe
    cache_valid_time: 3600
when: environment != 'development'
tags:
- system
- packages
- name: Install essential system packages
package:
name: "{{ common_packages }}"
state: present
tags:
- system
- packages
- name: Configure timezone
timezone:
name: "{{ timezone }}"
tags: system
- name: Configure system locale
locale_gen:
name: "{{ locale }}"
state: present
tags: system
roles:
# Base Security Hardening
- role: base-security
tags:
- security
- base
when: security_level is defined
# Docker Runtime Environment
- role: docker-runtime
tags:
- docker
- runtime
- php
# Nginx Reverse Proxy with SSL
- role: nginx-proxy
tags:
- nginx
- proxy
- ssl
when: nginx_enabled | default(true)
# System Monitoring and Health Checks
- role: monitoring
tags:
- monitoring
- health
when: monitoring_enabled | default(true)
post_tasks:
- name: Create deployment marker
copy:
content: |
Deployment Information:
- Environment: {{ environment }}
- Domain: {{ domain_name }}
- PHP Version: {{ php_version }}
- Deployment Time: {{ ansible_date_time.iso8601 }}
- Deployed By: {{ ansible_user }}
- Ansible Version: {{ ansible_version.full }}
- Framework Version: {{ framework.version | default('1.0.0') }}
dest: /opt/deployment-info.txt
owner: root
group: root
mode: '0644'
tags: always
- name: Verify critical services are running
service_facts:
tags: verification
- name: Check critical services status
assert:
that:
- ansible_facts.services['nginx.service'].state == 'running'
- ansible_facts.services['docker.service'].state == 'running'
- ansible_facts.services['ufw.service'].state == 'running' or not (firewall_strict_mode | default(true))
- ansible_facts.services['fail2ban.service'].state == 'running' or not (fail2ban_enabled | default(true))
fail_msg: "Critical services are not running properly"
success_msg: "All critical services are running"
tags: verification
- name: Perform application health check
uri:
url: "{{ 'https' if ssl_provider != 'self-signed' and environment != 'development' else 'http' }}://{{ domain_name }}/health"
method: GET
status_code: [200, 404] # 404 is acceptable if health endpoint doesn't exist yet
timeout: 30
validate_certs: "{{ environment == 'production' }}"
register: health_check
ignore_errors: true
tags: verification
- name: Display health check results
debug:
msg:
- "Health check status: {{ health_check.status | default('Failed') }}"
- "Response time: {{ health_check.elapsed | default('N/A') }}s"
tags: verification
- name: Create deployment summary
debug:
msg:
- "=== DEPLOYMENT COMPLETED SUCCESSFULLY ==="
- "Environment: {{ environment | upper }}"
- "Domain: {{ domain_name }}"
- "SSL: {{ 'Enabled' if ssl_provider != 'self-signed' else 'Self-signed' }}"
- "PHP Version: {{ php_version }}"
- "Docker: Running"
- "Nginx: Running"
- "Security: {{ 'Hardened' if security_level | default('standard') == 'high' else 'Standard' }}"
- "Monitoring: {{ 'Enabled' if monitoring_enabled | default(true) else 'Disabled' }}"
- "Backup: {{ 'Enabled' if backup_enabled | default(true) else 'Disabled' }}"
- "Deployment Time: {{ (ansible_date_time.epoch | int - deployment_timestamp | int) }}s"
- "========================================"
tags: always
# Additional playbooks for specific operations
- name: Framework Application Deployment
hosts: web_servers
become: true
gather_facts: false
vars:
app_path: "/var/www/html"
tasks:
- name: Ensure application directory exists
file:
path: "{{ app_path }}"
state: directory
owner: www-data
group: www-data
mode: '0755'
tags: app
- name: Create framework health check endpoint
copy:
content: |
<?php
// Custom PHP Framework Health Check
// Generated by Ansible
header('Content-Type: application/json');
$health = [
'status' => 'healthy',
'timestamp' => date('c'),
'environment' => '{{ environment }}',
'php_version' => PHP_VERSION,
'framework_version' => '{{ framework.version | default("1.0.0") }}',
'checks' => []
];
// Check PHP version
$health['checks']['php'] = version_compare(PHP_VERSION, '8.4.0', '>=') ? 'ok' : 'warning';
// Check if framework is loadable
$health['checks']['framework'] = file_exists('/var/www/html/public/index.php') ? 'ok' : 'error';
// Check write permissions
$health['checks']['permissions'] = is_writable('/var/www/html/storage') ? 'ok' : 'warning';
// Determine overall status
$hasError = in_array('error', $health['checks']);
$hasWarning = in_array('warning', $health['checks']);
if ($hasError) {
$health['status'] = 'unhealthy';
http_response_code(500);
} elseif ($hasWarning) {
$health['status'] = 'warning';
http_response_code(200);
} else {
http_response_code(200);
}
echo json_encode($health, JSON_PRETTY_PRINT);
dest: "{{ app_path }}/health.php"
owner: www-data
group: www-data
mode: '0644'
tags: app
- name: Create basic index.php if it doesn't exist
copy:
content: |
<?php
// Custom PHP Framework - Basic Index
// Environment: {{ environment | upper }}
echo "<h1>Custom PHP Framework</h1>";
echo "<p>Environment: {{ environment | upper }}</p>";
echo "<p>PHP Version: " . PHP_VERSION . "</p>";
echo "<p>Server Time: " . date('Y-m-d H:i:s') . "</p>";
echo "<p>Domain: {{ domain_name }}</p>";
if (file_exists('/var/www/html/health.php')) {
echo '<p><a href="/health.php">Health Check</a></p>';
}
dest: "{{ app_path }}/index.php"
owner: www-data
group: www-data
mode: '0644'
force: false
tags: app
- name: Security Validation Playbook
hosts: web_servers
become: true
gather_facts: false
tasks:
- name: Validate SSL configuration
command: nginx -t
register: nginx_test
changed_when: false
tags: ssl
# - name: Check SSL certificate validity
# community.crypto.x509_certificate_info:  # openssl_certificate_info was renamed
# path: "{{ ssl_cert_file }}"
# register: cert_info
# when: ssl_cert_file is defined
# tags: ssl
- name: Validate firewall rules
command: ufw status numbered
register: ufw_status
changed_when: false
tags: firewall
- name: Check fail2ban status
command: fail2ban-client status
register: fail2ban_status
changed_when: false
when: fail2ban_enabled | default(true)
tags: security
- name: Security validation summary
debug:
msg:
- "=== SECURITY VALIDATION ==="
- "Nginx Config: {{ 'Valid' if nginx_test.rc == 0 else 'Invalid' }}"
- "SSL Certificate: {{ 'Valid' if cert_info.valid_to is defined else 'Check Required' }}"
- "Firewall: {{ 'Active' if 'Status: active' in ufw_status.stdout else 'Inactive' }}"
- "Fail2ban: {{ 'Active' if (fail2ban_status.rc | default(1)) == 0 else 'Inactive' }}"
- "=========================="
tags: security
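The health.php endpoint generated earlier reduces its per-check results to one overall status: any `error` makes the response unhealthy, otherwise any `warning` downgrades it, otherwise it stays healthy. A minimal offline sketch of that aggregation in shell, run against an illustrative payload rather than a live endpoint:

```shell
# Illustrative payload in the shape health.php emits; not real output.
payload='{"status":"healthy","checks":{"php":"ok","framework":"ok","permissions":"warning"}}'

# Same precedence as the PHP code: error beats warning beats ok.
status=healthy
case "$payload" in
  *'"error"'*)   status=unhealthy ;;
  *'"warning"'*) status=warning ;;
esac
echo "$status"   # prints "warning" for this sample payload
```

Against a live deployment the payload would come from something like `curl -fsS https://<domain>/health.php` instead of a literal string.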


@@ -1,11 +0,0 @@
#!/bin/bash
# Show Docker container status on production server
# Usage: ./status.sh
set -euo pipefail
echo "🐳 Docker Container Status on Production Server"
echo ""
ssh -i ~/.ssh/production deploy@michaelschiemer.de "docker ps -a --format 'table {{.Names}}\t{{.Status}}\t{{.Ports}}'"
echo ""
echo "✅ Done!"
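The script prints the raw container table; spotting failed containers still means reading it by eye. A follow-up filter can be sketched offline against a captured table in the same `table {{.Names}}\t{{.Status}}\t{{.Ports}}` format (the rows below are illustrative, not real server output):

```shell
# Build an illustrative tab-separated table like `docker ps -a` emits.
sample=$(printf 'NAMES\tSTATUS\tPORTS\nweb\tUp 3 hours\t0.0.0.0:443->443/tcp\ndb\tExited (1) 2 hours ago\t-')

# Print names of containers whose status does not start with "Up".
printf '%s\n' "$sample" | awk -F'\t' 'NR > 1 && $2 !~ /^Up/ { print $1 }'   # prints "db"
```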


@@ -1,170 +0,0 @@
# Production Environment Configuration
# Auto-generated from template - DO NOT EDIT DIRECTLY
# Generated on: {{ ansible_date_time.date }} {{ ansible_date_time.time }}
# Image Tag: {{ IMAGE_TAG }}
# Environment: {{ environment }}
# Project Configuration
COMPOSE_PROJECT_NAME={{ project_name | default('michaelschiemer') }}
DOMAIN_NAME={{ DOMAIN_NAME }}
IMAGE_TAG={{ IMAGE_TAG }}
# Environment
APP_ENV=production
APP_DEBUG=false
APP_TIMEZONE={{ timezone | default('Europe/Berlin') }}
APP_LOCALE={{ locale | default('de') }}
# SSL/HTTPS Configuration
APP_SSL_ENABLED=true
SSL_CERT_PATH=/etc/letsencrypt/live/{{ DOMAIN_NAME }}
FORCE_HTTPS=true
# Database Configuration (Production)
DB_DRIVER={{ DB_DRIVER | default('mysql') }}
DB_HOST={{ DB_HOST | default('db') }}
DB_PORT={{ DB_PORT | default(3306) }}
DB_DATABASE={{ DB_DATABASE }}
DB_USERNAME={{ DB_USERNAME }}
DB_PASSWORD={{ DB_PASSWORD }}
DB_ROOT_PASSWORD={{ DB_ROOT_PASSWORD }}
DB_CHARSET=utf8mb4
DB_COLLATION=utf8mb4_unicode_ci
# Redis Configuration
REDIS_HOST={{ REDIS_HOST | default('redis') }}
REDIS_PORT={{ REDIS_PORT | default(6379) }}
REDIS_PASSWORD={{ REDIS_PASSWORD }}
REDIS_DATABASE=0
REDIS_PREFIX={{ project_name | default('michaelschiemer') }}_prod_
# Session Configuration (Production Security)
SESSION_DRIVER=redis
SESSION_LIFETIME=120
SESSION_ENCRYPT=true
SESSION_SECURE_COOKIE=true
SESSION_HTTP_ONLY=true
SESSION_SAME_SITE=strict
# Session Fingerprinting (Production Security)
SESSION_FINGERPRINT_STRICT=true
SESSION_FINGERPRINT_USER_AGENT=true
SESSION_FINGERPRINT_ACCEPT_LANGUAGE=true
SESSION_FINGERPRINT_IP_PREFIX=true
SESSION_FINGERPRINT_THRESHOLD=0.8
# Cache Configuration
CACHE_DRIVER=redis
CACHE_TTL=3600
CACHE_PREFIX={{ project_name | default('michaelschiemer') }}_cache_prod_
# Queue Configuration
QUEUE_DRIVER=redis
QUEUE_CONNECTION=redis
QUEUE_PREFIX={{ project_name | default('michaelschiemer') }}_queue_prod_
WORKER_QUEUE=production
WORKER_TIMEOUT=300
WORKER_MEMORY_LIMIT=512
WORKER_SLEEP=1
WORKER_TRIES=5
WORKER_BATCH_SIZE=10
# Mail Configuration (Production)
MAIL_DRIVER={{ MAIL_DRIVER }}
MAIL_HOST={{ MAIL_HOST }}
MAIL_PORT={{ MAIL_PORT }}
MAIL_USERNAME={{ MAIL_USERNAME }}
MAIL_PASSWORD={{ MAIL_PASSWORD }}
MAIL_ENCRYPTION={{ MAIL_ENCRYPTION | default('tls') }}
MAIL_FROM_ADDRESS={{ MAIL_FROM_ADDRESS | default('kontakt@michaelschiemer.de') }}
MAIL_FROM_NAME="{{ MAIL_FROM_NAME | default('Michael Schiemer') }}"
# Logging Configuration (Production)
LOG_CHANNEL=stack
LOG_LEVEL=warning
LOG_STACK_CHANNELS=single,syslog
LOG_ROTATE_DAYS=30
LOG_MAX_FILES=10
# External APIs (Production)
SHOPIFY_WEBHOOK_SECRET={{ SHOPIFY_WEBHOOK_SECRET }}
RAPIDMAIL_USERNAME={{ RAPIDMAIL_USERNAME }}
RAPIDMAIL_PASSWORD={{ RAPIDMAIL_PASSWORD }}
RAPIDMAIL_TEST_MODE=false
# Analytics Configuration (Production)
ANALYTICS_ENABLED=true
ANALYTICS_TRACK_PAGE_VIEWS=true
ANALYTICS_TRACK_API_CALLS=true
ANALYTICS_TRACK_USER_ACTIONS=true
ANALYTICS_TRACK_ERRORS=true
ANALYTICS_TRACK_PERFORMANCE=true
# Monitoring & Health Checks
PROMETHEUS_ENABLED={{ PROMETHEUS_ENABLED | default(true) }}
PROMETHEUS_PORT={{ PROMETHEUS_PORT | default(9090) }}
GRAFANA_ADMIN_PASSWORD={{ GRAFANA_ADMIN_PASSWORD }}
# Security Configuration
APP_KEY={{ APP_KEY }}
CSRF_TOKEN_LIFETIME=7200
RATE_LIMIT_PER_MINUTE=60
MAX_LOGIN_ATTEMPTS=5
LOGIN_LOCKOUT_DURATION=900
# Performance Configuration (Production)
PHP_MEMORY_LIMIT={{ PHP_MEMORY_LIMIT | default('512M') }}
PHP_MAX_EXECUTION_TIME={{ PHP_MAX_EXECUTION_TIME | default(30) }}
PHP_OPCACHE_ENABLE=1
PHP_OPCACHE_MEMORY_CONSUMPTION={{ PHP_OPCACHE_MEMORY_CONSUMPTION | default(256) }}
PHP_OPCACHE_MAX_ACCELERATED_FILES=20000
PHP_OPCACHE_REVALIDATE_FREQ=0
PHP_OPCACHE_VALIDATE_TIMESTAMPS=0
PHP_REALPATH_CACHE_SIZE=4M
PHP_REALPATH_CACHE_TTL=3600
# Nginx Configuration (Production)
NGINX_WORKER_PROCESSES={{ ansible_processor_vcpus | default(4) }}
NGINX_WORKER_CONNECTIONS=2048
NGINX_KEEPALIVE_TIMEOUT=65
NGINX_CLIENT_MAX_BODY_SIZE=50m
# Database Performance (Production)
MYSQL_INNODB_BUFFER_POOL_SIZE=1G
MYSQL_INNODB_LOG_FILE_SIZE=256M
MYSQL_MAX_CONNECTIONS=100
MYSQL_QUERY_CACHE_SIZE=0
# Backup Configuration
BACKUP_ENABLED={{ BACKUP_ENABLED | default(true) }}
BACKUP_SCHEDULE={{ BACKUP_SCHEDULE | default('0 2 * * *') }}
BACKUP_RETENTION_DAYS={{ BACKUP_RETENTION_DAYS | default(30) }}
{% if S3_BACKUP_ENABLED | default(false) %}
BACKUP_S3_BUCKET={{ BACKUP_S3_BUCKET }}
BACKUP_S3_ACCESS_KEY={{ BACKUP_S3_ACCESS_KEY }}
BACKUP_S3_SECRET_KEY={{ BACKUP_S3_SECRET_KEY }}
{% endif %}
# SSL/TLS Configuration
SSL_PROTOCOLS=TLSv1.2 TLSv1.3
SSL_CIPHERS=ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384
SSL_PREFER_SERVER_CIPHERS=off
SSL_SESSION_CACHE_SIZE=10m
SSL_SESSION_TIMEOUT=10m
# Container User IDs (Production)
UID=33
GID=33
# Restart Policy
RESTART_POLICY=always
# Resource Limits (Production)
PHP_MEMORY_LIMIT_DOCKER={{ PHP_MEMORY_LIMIT_DOCKER | default('2G') }}
PHP_CPU_LIMIT={{ PHP_CPU_LIMIT | default('2.0') }}
NGINX_MEMORY_LIMIT_DOCKER={{ NGINX_MEMORY_LIMIT_DOCKER | default('256M') }}
NGINX_CPU_LIMIT={{ NGINX_CPU_LIMIT | default('0.5') }}
DB_MEMORY_LIMIT_DOCKER={{ DB_MEMORY_LIMIT_DOCKER | default('2G') }}
DB_CPU_LIMIT={{ DB_CPU_LIMIT | default('2.0') }}
REDIS_MEMORY_LIMIT_DOCKER={{ REDIS_MEMORY_LIMIT_DOCKER | default('1G') }}
REDIS_CPU_LIMIT={{ REDIS_CPU_LIMIT | default('0.5') }}
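Every secret in the template above is a Jinja2 placeholder, so a rendered `.env.production` that still contains `{{` points at a variable Ansible never received — one plausible way a misconfiguration like the DB_PORT mismatch survives a deploy unnoticed. A minimal sketch, using illustrative rendered lines rather than a real file:

```shell
# Illustrative rendered content; the second line simulates a variable
# that was never substituted during templating.
rendered='DB_PORT=5432
DB_PASSWORD={{ DB_PASSWORD }}'

# Any surviving "{{" marks an unresolved placeholder.
printf '%s\n' "$rendered" | grep -n '{{' && echo "unresolved placeholders found"
```

In a pipeline this check would run against the shared env file on the target host and fail the deploy on a match; the exact path is an assumption from the release layout described earlier.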


@@ -1,158 +0,0 @@
# Staging Environment Configuration
# Auto-generated from template - DO NOT EDIT DIRECTLY
# Generated on: {{ ansible_date_time.date }} {{ ansible_date_time.time }}
# Image Tag: {{ IMAGE_TAG }}
# Environment: {{ environment }}
# Project Configuration
COMPOSE_PROJECT_NAME={{ project_name | default('michaelschiemer') }}-staging
DOMAIN_NAME={{ DOMAIN_NAME }}
IMAGE_TAG={{ IMAGE_TAG }}
# Environment
APP_ENV=staging
APP_DEBUG={{ APP_DEBUG | default(true) }}
APP_TIMEZONE={{ timezone | default('Europe/Berlin') }}
APP_LOCALE={{ locale | default('de') }}
# SSL/HTTPS Configuration
APP_SSL_ENABLED=true
SSL_CERT_PATH=/etc/letsencrypt/live/{{ DOMAIN_NAME }}
FORCE_HTTPS=true
# Database Configuration (Staging)
DB_DRIVER={{ DB_DRIVER | default('mysql') }}
DB_HOST={{ DB_HOST | default('db') }}
DB_PORT={{ DB_PORT | default(3306) }}
DB_DATABASE={{ DB_DATABASE }}
DB_USERNAME={{ DB_USERNAME }}
DB_PASSWORD={{ DB_PASSWORD }}
DB_ROOT_PASSWORD={{ DB_ROOT_PASSWORD }}
DB_CHARSET=utf8mb4
DB_COLLATION=utf8mb4_unicode_ci
# Redis Configuration
REDIS_HOST={{ REDIS_HOST | default('redis') }}
REDIS_PORT={{ REDIS_PORT | default(6379) }}
REDIS_PASSWORD={{ REDIS_PASSWORD }}
REDIS_DATABASE=1
REDIS_PREFIX={{ project_name | default('michaelschiemer') }}_staging_
# Session Configuration
SESSION_DRIVER=redis
SESSION_LIFETIME=240
SESSION_ENCRYPT=true
SESSION_SECURE_COOKIE=true
SESSION_HTTP_ONLY=true
SESSION_SAME_SITE=lax
# Cache Configuration
CACHE_DRIVER=redis
CACHE_TTL=1800
CACHE_PREFIX={{ project_name | default('michaelschiemer') }}_cache_staging_
# Queue Configuration
QUEUE_DRIVER=redis
QUEUE_CONNECTION=redis
QUEUE_PREFIX={{ project_name | default('michaelschiemer') }}_queue_staging_
WORKER_QUEUE=staging
WORKER_TIMEOUT=300
WORKER_MEMORY_LIMIT=256
WORKER_SLEEP=3
WORKER_TRIES=3
WORKER_BATCH_SIZE=5
# Mail Configuration (Staging)
MAIL_DRIVER={{ MAIL_DRIVER | default('log') }}
MAIL_HOST={{ MAIL_HOST | default('localhost') }}
MAIL_PORT={{ MAIL_PORT | default(1025) }}
MAIL_USERNAME={{ MAIL_USERNAME | default('') }}
MAIL_PASSWORD={{ MAIL_PASSWORD | default('') }}
MAIL_ENCRYPTION={{ MAIL_ENCRYPTION | default('null') }}
MAIL_FROM_ADDRESS={{ MAIL_FROM_ADDRESS | default('staging@michaelschiemer.de') }}
MAIL_FROM_NAME="{{ MAIL_FROM_NAME | default('Michael Schiemer (Staging)') }}"
# Logging Configuration (Staging)
LOG_CHANNEL=stack
LOG_LEVEL={{ LOG_LEVEL | default('debug') }}
LOG_STACK_CHANNELS=single,daily
LOG_ROTATE_DAYS=7
LOG_MAX_FILES=5
# External APIs (Staging - Test Mode)
SHOPIFY_WEBHOOK_SECRET={{ SHOPIFY_WEBHOOK_SECRET | default('test-webhook-secret') }}
RAPIDMAIL_USERNAME={{ RAPIDMAIL_USERNAME | default('test') }}
RAPIDMAIL_PASSWORD={{ RAPIDMAIL_PASSWORD | default('test') }}
RAPIDMAIL_TEST_MODE=true
# Analytics Configuration (Staging)
ANALYTICS_ENABLED={{ ANALYTICS_ENABLED | default(false) }}
ANALYTICS_TRACK_PAGE_VIEWS=false
ANALYTICS_TRACK_API_CALLS=true
ANALYTICS_TRACK_USER_ACTIONS=true
ANALYTICS_TRACK_ERRORS=true
ANALYTICS_TRACK_PERFORMANCE=true
# Monitoring & Health Checks
PROMETHEUS_ENABLED={{ PROMETHEUS_ENABLED | default(false) }}
PROMETHEUS_PORT={{ PROMETHEUS_PORT | default(9091) }}
GRAFANA_ADMIN_PASSWORD={{ GRAFANA_ADMIN_PASSWORD | default('admin') }}
# Security Configuration
APP_KEY={{ APP_KEY }}
CSRF_TOKEN_LIFETIME=14400
RATE_LIMIT_PER_MINUTE=120
MAX_LOGIN_ATTEMPTS=10
LOGIN_LOCKOUT_DURATION=300
# Performance Configuration (Staging)
PHP_MEMORY_LIMIT={{ PHP_MEMORY_LIMIT | default('256M') }}
PHP_MAX_EXECUTION_TIME={{ PHP_MAX_EXECUTION_TIME | default(60) }}
PHP_OPCACHE_ENABLE=1
PHP_OPCACHE_MEMORY_CONSUMPTION={{ PHP_OPCACHE_MEMORY_CONSUMPTION | default(128) }}
PHP_OPCACHE_MAX_ACCELERATED_FILES=10000
PHP_OPCACHE_REVALIDATE_FREQ=2
PHP_OPCACHE_VALIDATE_TIMESTAMPS=1
PHP_REALPATH_CACHE_SIZE=2M
PHP_REALPATH_CACHE_TTL=600
# Nginx Configuration (Staging)
NGINX_WORKER_PROCESSES={{ ansible_processor_vcpus | default(2) }}
NGINX_WORKER_CONNECTIONS=1024
NGINX_KEEPALIVE_TIMEOUT=30
NGINX_CLIENT_MAX_BODY_SIZE=100m
# Database Performance (Staging)
MYSQL_INNODB_BUFFER_POOL_SIZE=256M
MYSQL_INNODB_LOG_FILE_SIZE=128M
MYSQL_MAX_CONNECTIONS=50
MYSQL_QUERY_CACHE_SIZE=0
# Backup Configuration
BACKUP_ENABLED={{ BACKUP_ENABLED | default(false) }}
BACKUP_SCHEDULE={{ BACKUP_SCHEDULE | default('0 3 * * *') }}
BACKUP_RETENTION_DAYS={{ BACKUP_RETENTION_DAYS | default(7) }}
# SSL/TLS Configuration
SSL_PROTOCOLS=TLSv1.2 TLSv1.3
SSL_CIPHERS=ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256
SSL_PREFER_SERVER_CIPHERS=off
SSL_SESSION_CACHE_SIZE=5m
SSL_SESSION_TIMEOUT=5m
# Container User IDs (Staging)
UID=33
GID=33
# Restart Policy
RESTART_POLICY=unless-stopped
# Resource Limits (Staging)
PHP_MEMORY_LIMIT_DOCKER={{ PHP_MEMORY_LIMIT_DOCKER | default('1G') }}
PHP_CPU_LIMIT={{ PHP_CPU_LIMIT | default('1.0') }}
NGINX_MEMORY_LIMIT_DOCKER={{ NGINX_MEMORY_LIMIT_DOCKER | default('128M') }}
NGINX_CPU_LIMIT={{ NGINX_CPU_LIMIT | default('0.25') }}
DB_MEMORY_LIMIT_DOCKER={{ DB_MEMORY_LIMIT_DOCKER | default('1G') }}
DB_CPU_LIMIT={{ DB_CPU_LIMIT | default('1.0') }}
REDIS_MEMORY_LIMIT_DOCKER={{ REDIS_MEMORY_LIMIT_DOCKER | default('256M') }}
REDIS_CPU_LIMIT={{ REDIS_CPU_LIMIT | default('0.25') }}
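The production and staging templates deliberately diverge on a few keys (session strictness and lifetime, Redis database index, worker sizing). A quick offline diff of illustrative rendered excerpts makes that drift easy to audit:

```shell
# Illustrative excerpts from rendered prod and staging env files.
prod=$(printf 'SESSION_SAME_SITE=strict\nSESSION_LIFETIME=120\nREDIS_DATABASE=0')
staging=$(printf 'SESSION_SAME_SITE=lax\nSESSION_LIFETIME=240\nREDIS_DATABASE=1')

# Write to temp files so plain `diff` works under any POSIX shell.
p=$(mktemp) && s=$(mktemp)
printf '%s\n' "$prod" > "$p"
printf '%s\n' "$staging" > "$s"
diff "$p" "$s" || true   # non-zero exit just means "files differ"
rm -f "$p" "$s"
```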