feat(deploy): add Gitea CI/CD workflows and production Docker Compose config
- Add staging deployment workflow (deploy-staging.yml)
- Add production deployment workflow (deploy-production.yml)
- Add workflow documentation (README.md)
- Add secrets setup guide (SECRETS_SETUP_GUIDE.md)
- Add production Docker Compose configuration (docker-compose.prod.yml)

Workflows implement automated deployment with SSH-based remote execution, health checks, rollback on failure, and smoke testing.

.gitea/workflows/README.md (new file, 741 lines)
@@ -0,0 +1,741 @@

# Gitea CI/CD Workflows Documentation

Comprehensive guide for the automated deployment workflows using Gitea Actions.

## Overview

This project uses Gitea Actions for automated deployments to staging and production environments. The workflows are designed for:

- **Zero-downtime deployments** via rolling updates
- **Automatic rollback** on deployment failures
- **Environment-specific configurations** using Docker Compose overlays
- **Database protection** with automated backups (production)
- **Comprehensive health checks** and smoke tests
- **Deployment audit trail** via persistent logs

## Workflow Files

### 1. Staging Deployment (`deploy-staging.yml`)

**Purpose:** Automated deployment to staging environment for testing and validation.

**Triggers:**
- Push to `staging` branch
- Manual workflow dispatch

**Target:** `https://staging.michaelschiemer.de`

**Key Features:**
- Fast deployment cycle (30-second health check)
- Basic health verification
- Keeps 5 deployment backups
- No database backup (non-critical environment)

**Workflow Steps:**
1. Build Docker image with `ENV=staging`
2. Push to private registry (localhost:5000)
3. Deploy to staging server via SSH
4. Basic health checks
5. Automatic rollback on failure

---

### 2. Production Deployment (`deploy-production.yml`)

**Purpose:** Production deployment with enhanced safety features and verification.

**Triggers:**
- Push to `main` or `production` branches
- Manual workflow dispatch

**Target:** `https://michaelschiemer.de`

**Key Features:**
- **Database backup** before deployment (aborts on backup failure)
- **Database migrations** after container startup
- **Extended health checks** (60-second wait, multiple verification layers)
- **Smoke tests** for functional verification
- **Deployment logging** for audit trail
- **Graceful shutdown** for active request handling
- Keeps 10 deployment backups

**Workflow Steps:**
1. Build Docker image with `ENV=production`
2. Push to private registry
3. Create database backup (optional via `skip_backup` input)
4. Gracefully stop current containers
5. Deploy new containers
6. Run database migrations
7. Extended health verification
8. Smoke tests (main page + API)
9. Automatic rollback on failure
10. Log deployment outcome
11. Clean up build artifacts

---

## Required Gitea Secrets

Configure these secrets in your Gitea repository settings (`Settings` → `Secrets`).

### Staging Secrets

| Secret Name | Description | Example Value |
|-------------|-------------|---------------|
| `STAGING_HOST` | Staging server hostname or IP | `staging.example.com` or `203.0.113.42` |
| `STAGING_USER` | SSH username for staging server | `deploy` or `www-data` |
| `STAGING_SSH_KEY` | Private SSH key (PEM format) | `-----BEGIN RSA PRIVATE KEY-----...` |
| `STAGING_SSH_PORT` | SSH port (optional, defaults to 22) | `22` or `2222` |

### Production Secrets

| Secret Name | Description | Example Value |
|-------------|-------------|---------------|
| `PRODUCTION_HOST` | Production server hostname or IP | `michaelschiemer.de` or `198.51.100.10` |
| `PRODUCTION_USER` | SSH username for production server | `deploy` or `www-data` |
| `PRODUCTION_SSH_KEY` | Private SSH key (PEM format) | `-----BEGIN RSA PRIVATE KEY-----...` |
| `PRODUCTION_SSH_PORT` | SSH port (optional, defaults to 22) | `22` or `2222` |

**SSH Key Generation:**
```bash
# Generate SSH key pair (on your local machine)
ssh-keygen -t rsa -b 4096 -f deployment_key -C "gitea-deployment"

# Copy public key to target server
ssh-copy-id -i deployment_key.pub deploy@server.example.com

# Add private key to Gitea secrets (entire content)
cat deployment_key
```

**Security Best Practices:**
- Use dedicated deployment user with minimal permissions
- Restrict SSH key to specific commands via `authorized_keys` options (see the sketch below)
- Rotate SSH keys regularly (quarterly recommended)
- Never commit SSH keys to repository

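As a rough illustration of the command restriction, an `authorized_keys` entry can pin the deployment key to a single command and disable forwarding. This is only a sketch: `/usr/local/bin/deploy-only.sh` is a hypothetical wrapper script you would create yourself, and `AAAA...` stands for the actual public key material.

```bash
# On the target server: restrict the deployment key to one wrapper script,
# and disable port/agent/X11 forwarding for that key.
cat >> ~/.ssh/authorized_keys << 'EOF'
command="/usr/local/bin/deploy-only.sh",no-port-forwarding,no-agent-forwarding,no-X11-forwarding ssh-rsa AAAA... gitea-deployment
EOF
```
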
---

## Manual Workflow Triggering

### Via Gitea UI

1. Navigate to your repository
2. Click `Actions` tab
3. Select the workflow (`Deploy to Staging` or `Deploy to Production`)
4. Click `Run workflow`
5. Choose branch
6. Set input parameters (if applicable)
7. Click `Run workflow`

### Via Git Push

**Staging Deployment:**
```bash
# Push to staging branch
git checkout staging
git merge develop
git push origin staging
# Workflow triggers automatically
```

**Production Deployment:**
```bash
# Push to main/production branch
git checkout main
git merge staging
git push origin main
# Workflow triggers automatically
```

### Workflow Input Parameters

**Production Workflow:**
- `force_rebuild`: Force rebuild Docker image even if code hasn't changed (default: `false`)
- `skip_backup`: Skip database backup step - **NOT RECOMMENDED** (default: `false`)

**Use Case for `skip_backup`:**
Emergency hotfix deployment when backup would cause unacceptable delay. Only use if:
- Recent backup exists
- Issue is critical (security vulnerability, production down)
- Backup failure is blocking deployment

---

## Deployment Monitoring

### Real-Time Monitoring

**Via Gitea UI:**
1. Navigate to `Actions` tab
2. Click on running workflow
3. View real-time logs for each step
4. Check for errors or warnings

**Via Server Logs:**
```bash
# SSH to target server
ssh deploy@server.example.com

# Staging logs
tail -f /opt/framework-staging/current/storage/logs/app.log

# Production logs
tail -f /opt/framework-production/current/storage/logs/app.log

# Deployment log (production only)
tail -f /opt/framework-production/deployment.log
```

### Deployment Status Verification

**Check Container Status:**
```bash
# Staging
cd /opt/framework-staging/current
docker-compose -f docker-compose.base.yml -f docker-compose.staging.yml ps

# Production
cd /opt/framework-production/current
docker-compose -f docker-compose.base.yml -f docker-compose.prod.yml ps
```

**Health Check Endpoints:**
```bash
# Staging
curl -k https://staging.michaelschiemer.de/health

# Production
curl -k https://michaelschiemer.de/health
curl -k https://michaelschiemer.de/api/health
```

**Expected Health Response:**
```json
{
  "status": "healthy",
  "timestamp": "2025-01-28T15:30:00Z",
  "version": "2.x",
  "services": {
    "database": "connected",
    "redis": "connected",
    "queue": "running"
  }
}
```

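To verify the response body rather than just the HTTP status, the JSON above can be checked with `jq`. A minimal sketch, assuming `jq` is installed on the host doing the check:

```bash
# Fail (non-zero exit) unless the endpoint reports "healthy" and all services are up
curl -sk https://michaelschiemer.de/health | jq -e '
  .status == "healthy"
  and (.services | all(. == "connected" or . == "running"))
' > /dev/null && echo "OK" || echo "UNHEALTHY"
```
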
---

## Rollback Procedures

### Automatic Rollback

Both workflows include automatic rollback on deployment failure:

**Trigger Conditions:**
- Build failure
- Health check failure
- Smoke test failure (production)
- Database migration failure (production)

**Rollback Process:**
1. Stop failed deployment containers
2. Restore most recent backup deployment
3. Start restored containers
4. Verify rollback success
5. Log rollback event

**Note:** Automatic rollback restores the application, but **database changes are NOT rolled back automatically**. See Manual Database Rollback below.

---

### Manual Rollback

**When to use:**
- Issue discovered after successful deployment
- Need to rollback to specific version (not just previous)

#### Application Rollback

**Staging:**
```bash
ssh deploy@staging.example.com

cd /opt/framework-staging

# List available backups
ls -lt backup_*

# Stop current deployment
cd current
docker-compose -f docker-compose.base.yml -f docker-compose.staging.yml down
cd ..

# Restore specific backup
rm -rf current
cp -r backup_20250128_143000 current

# Start restored deployment
cd current
docker-compose -f docker-compose.base.yml -f docker-compose.staging.yml up -d

# Verify
docker-compose -f docker-compose.base.yml -f docker-compose.staging.yml ps
curl -k https://staging.michaelschiemer.de/health
```

**Production:**
```bash
ssh deploy@michaelschiemer.de

cd /opt/framework-production

# List available backups
ls -lt backup_*

# Stop current deployment
cd current
docker-compose -f docker-compose.base.yml -f docker-compose.prod.yml down
cd ..

# Restore specific backup
rm -rf current
cp -r backup_20250128_150000 current

# Start restored deployment
cd current
docker-compose -f docker-compose.base.yml -f docker-compose.prod.yml up -d

# Wait for services
sleep 30

# Verify
docker-compose -f docker-compose.base.yml -f docker-compose.prod.yml ps
curl -k https://michaelschiemer.de/health
curl -k https://michaelschiemer.de/api/health
```

#### Database Rollback (Production Only)

**CRITICAL:** Database rollback is a destructive operation. Only perform if:
- You have confirmed backup from before problematic deployment
- You understand data loss implications
- Issue cannot be fixed forward

**Process:**
```bash
ssh deploy@michaelschiemer.de

cd /opt/framework-production/current

# List available database backups
ls -lt storage/backups/backup_*.sql

# Verify backup integrity
docker-compose -f docker-compose.base.yml -f docker-compose.prod.yml exec -T production-app \
  php console.php db:verify-backup --file=storage/backups/backup_20250128_150000.sql

# Stop application to prevent new writes
docker-compose -f docker-compose.base.yml -f docker-compose.prod.yml stop production-app

# Restore database from backup
docker-compose -f docker-compose.base.yml -f docker-compose.prod.yml exec -T production-app \
  php console.php db:restore --file=storage/backups/backup_20250128_150000.sql --force

# Start application
docker-compose -f docker-compose.base.yml -f docker-compose.prod.yml start production-app

# Verify
curl -k https://michaelschiemer.de/health
```

**Database Rollback Best Practices:**
- Always create new backup before rollback
- Document rollback reason in deployment log
- Notify team immediately
- Review application logs for data consistency issues
- Consider rolling forward with fix instead

---

## Troubleshooting

### Common Issues

#### 1. Workflow Fails: "Permission denied (publickey)"

**Cause:** SSH authentication failed

**Solutions:**
- Verify SSH key is correctly added to Gitea secrets (entire key content)
- Ensure public key is in `~/.ssh/authorized_keys` on target server
- Check SSH key permissions on server (`chmod 600 ~/.ssh/authorized_keys`)
- Test SSH connection manually: `ssh -i deployment_key deploy@server.example.com` (see the verbose variant below)

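When the manual test also fails, running the SSH client in verbose mode usually shows which key is offered and why the server rejects it. Host and key names below are placeholders:

```bash
# Verbose SSH debug output; -vvv prints the keys offered and the server's responses
ssh -vvv -i deployment_key deploy@server.example.com

# Confirm the fingerprint of the public half matches the key installed on the server
ssh-keygen -lf deployment_key.pub
```
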
#### 2. Health Check Fails

**Staging:**
```bash
# Check container status
docker-compose -f docker-compose.base.yml -f docker-compose.staging.yml ps

# Check logs
docker-compose -f docker-compose.base.yml -f docker-compose.staging.yml logs staging-app

# Check PHP-FPM
docker-compose -f docker-compose.base.yml -f docker-compose.staging.yml exec staging-app php -v
```

**Production:**
```bash
# Extended diagnostics
docker-compose -f docker-compose.base.yml -f docker-compose.prod.yml ps
docker-compose -f docker-compose.base.yml -f docker-compose.prod.yml logs production-app
docker-compose -f docker-compose.base.yml -f docker-compose.prod.yml exec production-app php -v
docker-compose -f docker-compose.base.yml -f docker-compose.prod.yml exec production-app pgrep php-fpm
docker-compose -f docker-compose.base.yml -f docker-compose.prod.yml exec production-redis redis-cli ping
```

#### 3. Database Backup Fails (Production)

**Symptoms:**
- Workflow aborts at step [0/6]
- Error: "Database backup failed - deployment aborted"

**Solutions:**
```bash
# Check database connection
docker-compose -f docker-compose.base.yml -f docker-compose.prod.yml exec production-app \
  php console.php db:status

# Check backup directory permissions
docker-compose -f docker-compose.base.yml -f docker-compose.prod.yml exec production-app \
  ls -la storage/backups/

# Fix permissions if needed
docker-compose -f docker-compose.base.yml -f docker-compose.prod.yml exec production-app \
  chown -R www-data:www-data storage/backups/

# Test backup manually
docker-compose -f docker-compose.base.yml -f docker-compose.prod.yml exec production-app \
  php console.php db:backup --output=storage/backups/test_backup.sql
```

#### 4. Database Migration Fails (Production)

**Symptoms:**
- Workflow fails at step [5/6]
- Error: "Database migration failed"

**Solutions:**
```bash
# Check migration status
docker-compose -f docker-compose.base.yml -f docker-compose.prod.yml exec production-app \
  php console.php db:status

# Review migration logs
docker-compose -f docker-compose.base.yml -f docker-compose.prod.yml logs production-app | grep migration

# Run migration manually with verbose output
docker-compose -f docker-compose.base.yml -f docker-compose.prod.yml exec production-app \
  php console.php db:migrate --force --verbose

# If migration is stuck, rollback and retry
docker-compose -f docker-compose.base.yml -f docker-compose.prod.yml exec production-app \
  php console.php db:rollback 1

docker-compose -f docker-compose.base.yml -f docker-compose.prod.yml exec production-app \
  php console.php db:migrate --force
```

#### 5. Image Push Fails

**Symptoms:**
- Workflow fails at "Push image to private registry"
- Error: "connection refused" or "unauthorized"

**Solutions:**
```bash
# Check registry is accessible from CI runner
curl http://localhost:5000/v2/_catalog

# Verify registry authentication (if configured)
docker login localhost:5000

# Check registry container is running
docker ps | grep registry
```

#### 6. Smoke Tests Fail (Production)

**Symptoms:**
- Health checks pass but smoke tests fail
- Error: "Main page failed" or "API health check failed"

**Solutions:**
```bash
# Test endpoints manually
curl -v -k https://michaelschiemer.de/
curl -v -k https://michaelschiemer.de/api/health

# Check Traefik routing
docker-compose -f docker-compose.base.yml -f docker-compose.prod.yml logs production-nginx

# Check application logs
docker-compose -f docker-compose.base.yml -f docker-compose.prod.yml logs production-app | tail -100

# Verify Traefik labels
docker inspect production-nginx | grep -A 20 Labels
```

---

## Staging vs Production Differences

Comprehensive comparison of workflow behaviors.

| Feature | Staging | Production |
|---------|---------|------------|
| **Trigger Branches** | `staging` | `main`, `production` |
| **Image Tag** | `staging` | `latest` |
| **Deployment Directory** | `/opt/framework-staging/` | `/opt/framework-production/` |
| **Database Backup** | ❌ No | ✅ Yes (with abort on failure) |
| **Database Migrations** | ❌ Manual | ✅ Automatic |
| **Health Check Wait** | 30 seconds | 60 seconds |
| **Health Checks** | Basic (container status, PHP version, HTTP via nginx) | Extended (+ PHP-FPM, Traefik, Redis) |
| **Smoke Tests** | ❌ No | ✅ Yes (main page + API) |
| **Backup Retention** | 5 backups | 10 backups |
| **Container Shutdown** | `docker-compose down` (immediate) | `docker-compose stop` (graceful) |
| **Deployment Logging** | ❌ No | ✅ Yes (deployment.log) |
| **Build Artifact Cleanup** | ❌ No | ✅ Yes |
| **Target URL** | https://staging.michaelschiemer.de | https://michaelschiemer.de |
| **Manual Inputs** | `force_rebuild` | `force_rebuild`, `skip_backup` |

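The graceful production shutdown relies on `docker-compose stop`, which sends SIGTERM and waits (10 seconds by default) before killing containers. If in-flight requests need longer to drain, the timeout can be raised; this is a hedged example, not something the current workflow sets:

```bash
# Give containers up to 30 seconds to finish in-flight work before SIGKILL
docker-compose -f docker-compose.base.yml -f docker-compose.prod.yml stop -t 30
```
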
---

## Best Practices

### Development Workflow

**Recommended Branch Flow:**
```
develop → staging → main (production)
```

**Process:**
1. Develop features on feature branches
2. Merge to `develop` branch
3. When ready for testing: `git merge develop` → `staging`
4. Deploy to staging automatically
5. Test on staging environment
6. If tests pass: `git merge staging` → `main`
7. Deploy to production automatically

### Pre-Deployment Checklist

**Staging:**
- [ ] All tests pass locally
- [ ] Code reviewed and approved
- [ ] No breaking changes without migration path
- [ ] Dependencies updated in composer.json/package.json

**Production:**
- [ ] Tested on staging environment
- [ ] Database migrations tested on staging
- [ ] Performance impact assessed
- [ ] Rollback plan documented
- [ ] Team notified of deployment window
- [ ] Recent database backup verified
- [ ] Monitoring alerts configured

### Post-Deployment Verification

**Staging:**
```bash
# Basic checks
curl -k https://staging.michaelschiemer.de/health
curl -k https://staging.michaelschiemer.de/api/health

# Manual testing of new features
```

**Production:**
```bash
# Automated checks (from CI workflow)
curl -k https://michaelschiemer.de/
curl -k https://michaelschiemer.de/api/health

# Manual verification
# - Test critical user flows
# - Check analytics/monitoring dashboards
# - Review error logs
# - Verify database migrations applied

# Check deployment log
ssh deploy@michaelschiemer.de tail /opt/framework-production/deployment.log
```

### Deployment Scheduling

**Staging:** Deploy anytime during business hours

**Production:**
- **Preferred Window:** Off-peak hours (e.g., 2-6 AM local time)
- **Emergency Deployments:** Anytime (use `skip_backup` if necessary)
- **Major Releases:** Schedule during maintenance window with advance notice

---

## Emergency Procedures

### Production Down - Complete Outage

**Immediate Response:**
```bash
# 1. Check container status
ssh deploy@michaelschiemer.de
cd /opt/framework-production/current
docker-compose -f docker-compose.base.yml -f docker-compose.prod.yml ps

# 2. If containers stopped, restart
docker-compose -f docker-compose.base.yml -f docker-compose.prod.yml up -d

# 3. If restart fails, rollback
cd /opt/framework-production
rm -rf current
cp -r $(ls -dt backup_* | head -n1) current
cd current
docker-compose -f docker-compose.base.yml -f docker-compose.prod.yml up -d

# 4. Verify recovery
curl -k https://michaelschiemer.de/health

# 5. Investigate root cause
docker-compose -f docker-compose.base.yml -f docker-compose.prod.yml logs production-app
```

### Database Corruption

**Recovery Steps:**
```bash
# 1. Stop application immediately
docker-compose -f docker-compose.base.yml -f docker-compose.prod.yml stop production-app

# 2. Verify most recent backup
ls -lt /opt/framework-production/current/storage/backups/

# 3. Restore from backup (see Database Rollback section)

# 4. Verify data integrity
docker-compose -f docker-compose.base.yml -f docker-compose.prod.yml exec production-app \
  php console.php db:verify-integrity

# 5. Restart application
docker-compose -f docker-compose.base.yml -f docker-compose.prod.yml start production-app
```

### Failed Deployment with No Rollback

**If automatic rollback fails:**
```bash
# 1. SSH to server
ssh deploy@michaelschiemer.de

# 2. Manual rollback (see Manual Rollback section)

# 3. If rollback unavailable, emergency restore
cd /opt/framework-production
git clone https://git.michaelschiemer.de/michael/framework.git emergency-deploy
cd emergency-deploy
git checkout <last-known-good-commit>

# 4. Build and deploy manually
docker build -f docker/php/Dockerfile -t localhost:5000/framework:emergency .
docker push localhost:5000/framework:emergency

# 5. Update docker-compose.prod.yml to use emergency tag
cd /opt/framework-production/current
# Edit docker-compose.prod.yml: image: localhost:5000/framework:emergency

# 6. Deploy
docker-compose -f docker-compose.base.yml -f docker-compose.prod.yml up -d
```

---

## Monitoring and Alerting

### Recommended Monitoring

**Application Metrics:**
- Response time (target: <200ms p95)
- Error rate (target: <0.1%)
- Request throughput
- Queue depth

**Infrastructure Metrics:**
- Container health status
- CPU usage (target: <70%)
- Memory usage (target: <80%)
- Disk space (alert: <20% free)

**Database Metrics:**
- Query performance
- Connection pool utilization
- Replication lag (if applicable)
- Backup success rate

### Alert Configuration

**Critical Alerts (immediate notification):**
- Production deployment failed
- Automatic rollback triggered
- Health check failure (3 consecutive)
- Database backup failure
- Container restart loop

**Warning Alerts (review within 1 hour):**
- Staging deployment failed
- Smoke test failure
- Slow health check response (>5s)
- Disk space <30%

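Alerting is not wired up in the workflows yet (the notify step only echoes a message), so the following is a minimal, hypothetical cron-based sketch covering the "health check failure (3 consecutive)" case. The webhook URL, state file path, and script location are placeholders:

```bash
#!/usr/bin/env bash
# /usr/local/bin/health-alert.sh (hypothetical sketch), run e.g. every minute via cron.
# Sends a message to a chat webhook after 3 consecutive health check failures.
URL="https://michaelschiemer.de/health"
STATE_FILE="/tmp/health-alert.fails"
WEBHOOK="https://chat.example.com/hooks/placeholder"   # placeholder webhook

FAILS=$(cat "$STATE_FILE" 2>/dev/null || echo 0)

if curl -fsk --max-time 5 "$URL" > /dev/null; then
    echo 0 > "$STATE_FILE"
else
    FAILS=$((FAILS + 1))
    echo "$FAILS" > "$STATE_FILE"
    if [ "$FAILS" -eq 3 ]; then
        curl -s -X POST -H 'Content-Type: application/json' \
            -d '{"text":"Production health check failed 3 times in a row"}' \
            "$WEBHOOK"
    fi
fi
```
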
---

## Additional Resources

- **Main Documentation:** `deployment/NEW_ARCHITECTURE.md`
- **Architecture Analysis:** `deployment/legacy/ARCHITECTURE_ANALYSIS.md`
- **Docker Compose Files:** Root directory (`docker-compose.*.yml`)
- **Framework Documentation:** `docs/` directory
- **Troubleshooting Guide:** `docs/guides/troubleshooting.md`

---

## Maintenance

### Regular Tasks

**Weekly:**
- Review deployment logs
- Check backup retention
- Verify health check reliability
- Update dependencies (staging first)

**Monthly:**
- Rotate SSH keys
- Review and clean old backups (>30 days) (see the sketch after this list)
- Test rollback procedures
- Update workflow documentation

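For the backup cleanup task, something along these lines can be used; the paths and the 30-day threshold mirror the conventions above but should be adjusted to your setup:

```bash
# Remove deployment directory backups older than 30 days (staging and production)
find /opt/framework-staging /opt/framework-production \
  -maxdepth 1 -type d -name 'backup_*' -mtime +30 -exec rm -rf {} +

# Remove database dumps older than 30 days
find /opt/framework-production/current/storage/backups \
  -type f -name 'backup_*.sql' -mtime +30 -delete
```
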
**Quarterly:**
- Disaster recovery drill
- Performance baseline review
- Security audit of deployment process
- Workflow optimization review

---

**Last Updated:** 2025-01-28
**Workflow Version:** 1.0
**Maintained by:** DevOps Team

.gitea/workflows/SECRETS_SETUP_GUIDE.md (new file, 481 lines)
@@ -0,0 +1,481 @@

# Gitea Secrets Configuration Guide

**Purpose:** Step-by-step guide to configure all required secrets for staging and production deployments.

**Prerequisites:**
- Admin access to Gitea repository
- SSH access to staging and production servers
- OpenSSH installed locally

---

## Quick Start Checklist

- [ ] Generate SSH key pair
- [ ] Distribute public key to target servers
- [ ] Configure staging secrets in Gitea
- [ ] Configure production secrets in Gitea
- [ ] Test SSH connection manually
- [ ] Verify secrets are accessible to workflows

---

## Step 1: Generate SSH Keys

### Option A: Generate New Deployment Keys (Recommended)

```bash
# Navigate to project root
cd /home/michael/dev/michaelschiemer

# Create SSH keys directory
mkdir -p .gitea/ssh-keys
cd .gitea/ssh-keys

# Generate staging key
ssh-keygen -t rsa -b 4096 -f gitea-staging-deploy \
  -C "gitea-staging-deployment" -N ""

# Generate production key
ssh-keygen -t rsa -b 4096 -f gitea-production-deploy \
  -C "gitea-production-deployment" -N ""

# Verify keys created
ls -la
# Expected output:
# gitea-staging-deploy (private key)
# gitea-staging-deploy.pub (public key)
# gitea-production-deploy (private key)
# gitea-production-deploy.pub (public key)
```

**⚠️ Security Note:**
- Private keys should NEVER be committed to git
- Add `.gitea/ssh-keys/` to `.gitignore` if not already present (see the snippet after this list)
- Store private keys securely (e.g., password manager, Vault)

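One way to add the ignore rule from the project root; adjust the pattern if you keep the keys elsewhere:

```bash
# Append an ignore rule so generated deployment keys never reach the repository
cat >> .gitignore << 'EOF'
.gitea/ssh-keys/
EOF
```
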
---

## Step 2: Distribute Public Keys to Servers

### Staging Server Setup

```bash
# Copy public key to staging server
ssh-copy-id -i .gitea/ssh-keys/gitea-staging-deploy.pub deploy@YOUR_STAGING_HOST

# Or manually (if ssh-copy-id not available):
cat .gitea/ssh-keys/gitea-staging-deploy.pub | \
  ssh deploy@YOUR_STAGING_HOST "mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys"

# Set proper permissions on server
ssh deploy@YOUR_STAGING_HOST "chmod 700 ~/.ssh && chmod 600 ~/.ssh/authorized_keys"

# Test connection
ssh -i .gitea/ssh-keys/gitea-staging-deploy deploy@YOUR_STAGING_HOST "echo 'Staging SSH connection successful'"
```

### Production Server Setup

```bash
# Copy public key to production server
ssh-copy-id -i .gitea/ssh-keys/gitea-production-deploy.pub deploy@YOUR_PRODUCTION_HOST

# Or manually:
cat .gitea/ssh-keys/gitea-production-deploy.pub | \
  ssh deploy@YOUR_PRODUCTION_HOST "mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys"

# Set proper permissions
ssh deploy@YOUR_PRODUCTION_HOST "chmod 700 ~/.ssh && chmod 600 ~/.ssh/authorized_keys"

# Test connection
ssh -i .gitea/ssh-keys/gitea-production-deploy deploy@YOUR_PRODUCTION_HOST "echo 'Production SSH connection successful'"
```

**Deployment User Requirements:**
- User must exist on target server (e.g., `deploy`, `www-data`, `ubuntu`)
- User must have sudo privileges for Docker commands (a possible sudoers rule is sketched below)
- User must have write access to deployment directories:
  - Staging: `/opt/framework-staging/`
  - Production: `/opt/framework-production/`

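If you prefer not to add the user to the `docker` group, a narrowly scoped sudoers entry is one option. This is only a sketch: the binary paths are assumptions and should be checked with `which docker docker-compose` on the server:

```bash
# On the server, as root: allow the deploy user to run only Docker-related commands
cat > /etc/sudoers.d/deploy-docker << 'EOF'
deploy ALL=(root) NOPASSWD: /usr/bin/docker, /usr/local/bin/docker-compose
EOF
chmod 440 /etc/sudoers.d/deploy-docker

# Validate the file before relying on it
visudo -cf /etc/sudoers.d/deploy-docker
```
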
---

## Step 3: Prepare Secret Values

### Extract Private Key Content

```bash
cd /home/michael/dev/michaelschiemer/.gitea/ssh-keys

# Display staging private key (copy entire output)
echo "=== STAGING_SSH_KEY ==="
cat gitea-staging-deploy
echo ""

# Display production private key (copy entire output)
echo "=== PRODUCTION_SSH_KEY ==="
cat gitea-production-deploy
echo ""
```

**Important:** Copy the **entire key content** including:
```
-----BEGIN RSA PRIVATE KEY-----
[key content here]
-----END RSA PRIVATE KEY-----
```

### Determine Server Details

**Staging Server:**
```bash
# Get staging hostname/IP (replace with your actual server)
STAGING_HOST="staging.michaelschiemer.de"  # or IP: 203.0.113.42

# Get SSH port (default: 22)
STAGING_SSH_PORT="22"

# Get deployment user
STAGING_USER="deploy"
```

**Production Server:**
```bash
# Get production hostname/IP
PRODUCTION_HOST="michaelschiemer.de"  # or IP: 198.51.100.10

# Get SSH port (default: 22)
PRODUCTION_SSH_PORT="22"

# Get deployment user
PRODUCTION_USER="deploy"
```

---

## Step 4: Configure Secrets in Gitea

### Access Gitea Secrets Configuration

1. **Navigate to Repository:**
   - Open Gitea web interface
   - Go to your framework repository

2. **Access Secrets Settings:**
   - Click `Settings` (⚙️ icon)
   - Click `Secrets` in left sidebar
   - Or direct URL: `https://git.michaelschiemer.de/michael/framework/settings/secrets`

3. **Add New Secret:**
   - Click `Add Secret` button
   - Fill in `Name` and `Value` fields
   - Click `Add Secret` to save

---

### Staging Secrets Configuration

**Secret 1: STAGING_HOST**
- **Name:** `STAGING_HOST`
- **Value:** `staging.michaelschiemer.de` (or your staging server hostname/IP)
- **Description:** Staging server hostname or IP address

**Secret 2: STAGING_USER**
- **Name:** `STAGING_USER`
- **Value:** `deploy` (or your deployment user)
- **Description:** SSH username for staging deployments

**Secret 3: STAGING_SSH_KEY**
- **Name:** `STAGING_SSH_KEY`
- **Value:** [Paste entire content of `gitea-staging-deploy` private key]
- **Description:** Private SSH key for staging authentication
- **⚠️ Important:** Include `-----BEGIN RSA PRIVATE KEY-----` and `-----END RSA PRIVATE KEY-----` lines

**Secret 4: STAGING_SSH_PORT**
- **Name:** `STAGING_SSH_PORT`
- **Value:** `22` (or your custom SSH port)
- **Description:** SSH port for staging server (optional, defaults to 22)

---

### Production Secrets Configuration

**Secret 5: PRODUCTION_HOST**
- **Name:** `PRODUCTION_HOST`
- **Value:** `michaelschiemer.de` (or your production server hostname/IP)
- **Description:** Production server hostname or IP address

**Secret 6: PRODUCTION_USER**
- **Name:** `PRODUCTION_USER`
- **Value:** `deploy` (or your deployment user)
- **Description:** SSH username for production deployments

**Secret 7: PRODUCTION_SSH_KEY**
- **Name:** `PRODUCTION_SSH_KEY`
- **Value:** [Paste entire content of `gitea-production-deploy` private key]
- **Description:** Private SSH key for production authentication
- **⚠️ Important:** Include `-----BEGIN RSA PRIVATE KEY-----` and `-----END RSA PRIVATE KEY-----` lines

**Secret 8: PRODUCTION_SSH_PORT**
- **Name:** `PRODUCTION_SSH_PORT`
- **Value:** `22` (or your custom SSH port)
- **Description:** SSH port for production server (optional, defaults to 22)

---

## Step 5: Verify Secrets Configuration

### Via Gitea UI

1. Navigate to `Settings` → `Secrets`
2. Verify all 8 secrets are listed:
   - ✅ STAGING_HOST
   - ✅ STAGING_USER
   - ✅ STAGING_SSH_KEY
   - ✅ STAGING_SSH_PORT
   - ✅ PRODUCTION_HOST
   - ✅ PRODUCTION_USER
   - ✅ PRODUCTION_SSH_KEY
   - ✅ PRODUCTION_SSH_PORT
3. Check that secrets show "Last Updated" timestamp

**Note:** Secret values are masked in the UI for security (you cannot view them after saving).

---

### Manual SSH Connection Test

Test SSH connections using the same credentials that workflows will use:

```bash
cd /home/michael/dev/michaelschiemer/.gitea/ssh-keys

# Test staging connection
ssh -i gitea-staging-deploy deploy@YOUR_STAGING_HOST \
  "echo 'Staging SSH test successful'; docker --version"

# Test production connection
ssh -i gitea-production-deploy deploy@YOUR_PRODUCTION_HOST \
  "echo 'Production SSH test successful'; docker --version"
```

**Expected Output:**
```
Staging SSH test successful
Docker version 24.0.7, build afdd53b
```

**If Connection Fails:**
- Verify hostname/IP is correct
- Check SSH port (try specifying: `ssh -p 2222 ...`)
- Verify public key is in `~/.ssh/authorized_keys` on server
- Check server firewall allows SSH connections
- Verify user has Docker permissions: `ssh user@host "docker ps"`

---

## Step 6: Test Workflow Access to Secrets

### Trigger Test Workflow

Create a minimal test workflow to verify secrets are accessible:

**File:** `.gitea/workflows/test-secrets.yml`
```yaml
name: Test Secrets Configuration

on:
  workflow_dispatch:

jobs:
  test-secrets:
    runs-on: ubuntu-latest
    steps:
      - name: Test staging secrets
        run: |
          echo "Testing staging secrets..."
          echo "STAGING_HOST: ${{ secrets.STAGING_HOST }}"
          echo "STAGING_USER: ${{ secrets.STAGING_USER }}"
          echo "STAGING_SSH_PORT: ${{ secrets.STAGING_SSH_PORT }}"
          echo "STAGING_SSH_KEY length: ${#STAGING_SSH_KEY}"
        env:
          STAGING_SSH_KEY: ${{ secrets.STAGING_SSH_KEY }}

      - name: Test production secrets
        run: |
          echo "Testing production secrets..."
          echo "PRODUCTION_HOST: ${{ secrets.PRODUCTION_HOST }}"
          echo "PRODUCTION_USER: ${{ secrets.PRODUCTION_USER }}"
          echo "PRODUCTION_SSH_PORT: ${{ secrets.PRODUCTION_SSH_PORT }}"
          echo "PRODUCTION_SSH_KEY length: ${#PRODUCTION_SSH_KEY}"
        env:
          PRODUCTION_SSH_KEY: ${{ secrets.PRODUCTION_SSH_KEY }}

      - name: Test SSH connection to staging
        uses: appleboy/ssh-action@master
        with:
          host: ${{ secrets.STAGING_HOST }}
          username: ${{ secrets.STAGING_USER }}
          key: ${{ secrets.STAGING_SSH_KEY }}
          port: ${{ secrets.STAGING_SSH_PORT || 22 }}
          script: |
            echo "Staging SSH connection successful"
            docker --version
            docker-compose --version

      - name: Test SSH connection to production
        uses: appleboy/ssh-action@master
        with:
          host: ${{ secrets.PRODUCTION_HOST }}
          username: ${{ secrets.PRODUCTION_USER }}
          key: ${{ secrets.PRODUCTION_SSH_KEY }}
          port: ${{ secrets.PRODUCTION_SSH_PORT || 22 }}
          script: |
            echo "Production SSH connection successful"
            docker --version
            docker-compose --version
```

**Run Test:**
1. Commit and push test workflow: `git add .gitea/workflows/test-secrets.yml && git commit -m "Add secrets test workflow" && git push`
2. Go to Gitea Actions tab
3. Select "Test Secrets Configuration" workflow
4. Click "Run workflow"
5. Monitor execution logs

**Expected Result:**
- All secrets should be accessible
- SSH connections should succeed
- Docker/docker-compose versions should be displayed

---

## Security Best Practices

### SSH Key Management

**✅ Do:**
- Use 4096-bit RSA keys (or Ed25519, as sketched after these lists)
- Generate separate keys for staging and production
- Rotate keys quarterly
- Store private keys in secure password manager
- Use dedicated deployment user (not root)
- Restrict deployment user permissions

**❌ Don't:**
- Commit private keys to git
- Share keys between environments
- Use personal SSH keys for deployments
- Store keys in plain text files
- Reuse keys across projects

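Ed25519 keys are shorter and generally preferred where the server's OpenSSH version supports them. Note that the Gitea secret would then contain an OpenSSH-format key rather than the PEM RSA key used in the examples above, so test the workflow after switching. A possible generation command, reusing the naming scheme from Step 1:

```bash
# Generate Ed25519 deployment keys (no passphrase, same file names as the RSA examples)
ssh-keygen -t ed25519 -f gitea-staging-deploy -C "gitea-staging-deployment" -N ""
ssh-keygen -t ed25519 -f gitea-production-deploy -C "gitea-production-deployment" -N ""
```
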
---

### Secret Rotation Schedule

**Quarterly (Every 3 Months):**
1. Generate new SSH key pairs
2. Add new public keys to servers (keep old keys active)
3. Update Gitea secrets with new private keys
4. Test deployments with new keys
5. Remove old public keys from servers
6. Delete old private keys securely

**Template for Rotation:**
```bash
# Generate new keys with date suffix
ssh-keygen -t rsa -b 4096 -f gitea-staging-deploy-2025-04 \
  -C "gitea-staging-deployment-2025-04" -N ""

# Repeat process from Step 2 onwards
```

---

### Access Control

**Gitea Repository Permissions:**
- Limit "Secrets" access to repository admins only
- Require 2FA for admin accounts
- Audit secret access logs regularly

**Server Access Control:**
- Deployment user should have minimal required permissions
- Use sudo configuration for Docker commands only
- Monitor SSH access logs
- Implement IP whitelisting if possible

---

## Troubleshooting

### Issue: Secret Not Found in Workflow

**Symptoms:**
- Workflow fails with "secret not found" error
- Secret value is empty in workflow logs

**Solutions:**
1. Verify secret name matches exactly (case-sensitive)
2. Check secret is created at repository level (not organization or user level)
3. Ensure workflow has access to repository secrets
4. Try re-creating the secret

---

### Issue: SSH Authentication Failed

**Symptoms:**
- Workflow fails: "Permission denied (publickey)"
- Cannot connect to server via SSH

**Solutions:**
1. Verify entire private key content is in secret (including BEGIN/END lines)
2. Check public key is in `~/.ssh/authorized_keys` on server
3. Verify SSH key format (PEM vs OpenSSH format); a conversion command is sketched below
4. Check server SSH configuration allows public key authentication
5. Test manual connection: `ssh -i key-file user@host`

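If an RSA key was generated in the newer OpenSSH format (`-----BEGIN OPENSSH PRIVATE KEY-----`) and a PEM key is expected, `ssh-keygen` can rewrite it in place. Make a copy first, since this modifies the file:

```bash
# Keep a copy, then convert the private key to PEM format in place
cp gitea-staging-deploy gitea-staging-deploy.bak
ssh-keygen -p -m PEM -f gitea-staging-deploy -N "" -P ""

# The header should now read "-----BEGIN RSA PRIVATE KEY-----"
head -n 1 gitea-staging-deploy
```
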
---

### Issue: Docker Permission Denied

**Symptoms:**
- SSH connection succeeds but Docker commands fail
- Error: "permission denied while trying to connect to Docker daemon"

**Solutions:**
```bash
# Add deployment user to docker group (on server)
ssh deploy@server
sudo usermod -aG docker deploy

# Log out and back in for group changes to take effect
exit
ssh deploy@server

# Verify Docker access
docker ps
```

---

## Next Steps

After secrets are configured and tested:

1. ✅ Secrets configured in Gitea
2. ⏳ Test staging deployment workflow
3. ⏳ Test production deployment workflow
4. ⏳ Update main deployment documentation
5. ⏳ Set up monitoring and alerting

**Continue to:** `Testing Deployment Workflows` section in main README.md

---

**Last Updated:** 2025-01-28
**Guide Version:** 1.0
**Security Review Date:** 2025-01-28

.gitea/workflows/deploy-production.yml (new file, 312 lines)
@@ -0,0 +1,312 @@

name: Deploy to Production

on:
  push:
    branches:
      - main
      - production
  workflow_dispatch:
    inputs:
      force_rebuild:
        description: 'Force rebuild Docker image'
        required: false
        default: 'false'
      skip_backup:
        description: 'Skip database backup (not recommended)'
        required: false
        default: 'false'

env:
  REGISTRY: localhost:5000
  IMAGE_NAME: framework
  IMAGE_TAG: latest
  COMPOSE_PROJECT_NAME: framework-production

jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v3
        with:
          fetch-depth: 0

      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2

      - name: Build Docker image
        run: |
          echo "Building Docker image: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:${{ env.IMAGE_TAG }}"
          docker build \
            --file docker/php/Dockerfile \
            --tag ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:${{ env.IMAGE_TAG }} \
            --build-arg ENV=production \
            --build-arg COMPOSER_INSTALL_FLAGS="--no-dev --optimize-autoloader --no-interaction" \
            .

      - name: Push image to private registry
        run: |
          echo "Pushing image to registry..."
          docker push ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:${{ env.IMAGE_TAG }}

      - name: Prepare deployment files
        run: |
          echo "Preparing deployment files..."
          mkdir -p deployment-production
          cp docker-compose.base.yml deployment-production/
          cp docker-compose.prod.yml deployment-production/
          cp -r docker deployment-production/

          # Create deployment script
          cat > deployment-production/deploy.sh << 'EOF'
          #!/bin/bash
          set -e

          echo "=================================================="
          echo "Starting Production Deployment"
          echo "=================================================="
          echo ""

          # Database backup (unless explicitly skipped)
          if [ "${SKIP_BACKUP}" != "true" ]; then
              echo "[0/6] Creating database backup..."
              BACKUP_FILE="backup_$(date +%Y%m%d_%H%M%S).sql"
              docker-compose -f docker-compose.base.yml -f docker-compose.prod.yml exec -T production-app \
                  php console.php db:backup --output="/var/www/html/storage/backups/${BACKUP_FILE}" || {
                  echo "⚠️ Database backup failed - deployment aborted"
                  exit 1
              }
              echo "✅ Database backup created: ${BACKUP_FILE}"
          else
              echo "⚠️ Database backup skipped (not recommended for production)"
          fi

          # Pull latest images
          echo "[1/6] Pulling latest Docker images..."
          docker-compose -f docker-compose.base.yml -f docker-compose.prod.yml pull

          # Stop existing containers gracefully
          echo "[2/6] Stopping existing containers (graceful shutdown)..."
          docker-compose -f docker-compose.base.yml -f docker-compose.prod.yml stop

          # Start new containers
          echo "[3/6] Starting new containers..."
          docker-compose -f docker-compose.base.yml -f docker-compose.prod.yml up -d

          # Wait for services to be healthy (longer timeout for production)
          echo "[4/6] Waiting for services to be healthy..."
          sleep 30

          # Run database migrations
          echo "[5/6] Running database migrations..."
          docker-compose -f docker-compose.base.yml -f docker-compose.prod.yml exec -T production-app \
              php console.php db:migrate --force || {
              echo "⚠️ Database migration failed"
              exit 1
          }

          # Verify deployment
          echo "[6/6] Verifying deployment..."
          docker-compose -f docker-compose.base.yml -f docker-compose.prod.yml ps

          # Cleanup old containers
          echo "Cleaning up old containers..."
          docker system prune -f

          echo ""
          echo "=================================================="
          echo "Production Deployment Complete"
          echo "=================================================="
          EOF

          chmod +x deployment-production/deploy.sh

      - name: Deploy to production server
        uses: appleboy/ssh-action@master
        with:
          host: ${{ secrets.PRODUCTION_HOST }}
          username: ${{ secrets.PRODUCTION_USER }}
          key: ${{ secrets.PRODUCTION_SSH_KEY }}
          port: ${{ secrets.PRODUCTION_SSH_PORT || 22 }}
          script: |
            # Create deployment directory
            mkdir -p /opt/framework-production
            cd /opt/framework-production

            # Backup current deployment
            if [ -d "current" ]; then
              echo "Backing up current deployment..."
              timestamp=$(date +%Y%m%d_%H%M%S)
              mv current "backup_${timestamp}"
              # Keep only last 10 backups for production
              ls -dt backup_* | tail -n +11 | xargs rm -rf
            fi

            # Create new deployment directory
            mkdir -p current
            cd current

      - name: Copy deployment files
        uses: appleboy/scp-action@master
        with:
          host: ${{ secrets.PRODUCTION_HOST }}
          username: ${{ secrets.PRODUCTION_USER }}
          key: ${{ secrets.PRODUCTION_SSH_KEY }}
          port: ${{ secrets.PRODUCTION_SSH_PORT || 22 }}
          source: "deployment-production/*"
          target: "/opt/framework-production/current/"
          strip_components: 1

      - name: Execute deployment
        uses: appleboy/ssh-action@master
        with:
          host: ${{ secrets.PRODUCTION_HOST }}
          username: ${{ secrets.PRODUCTION_USER }}
          key: ${{ secrets.PRODUCTION_SSH_KEY }}
          port: ${{ secrets.PRODUCTION_SSH_PORT || 22 }}
          script: |
            cd /opt/framework-production/current

            # Set skip backup flag if provided
            export SKIP_BACKUP="${{ github.event.inputs.skip_backup || 'false' }}"

            # Execute deployment script
            ./deploy.sh

      - name: Health check
        uses: appleboy/ssh-action@master
        with:
          host: ${{ secrets.PRODUCTION_HOST }}
          username: ${{ secrets.PRODUCTION_USER }}
          key: ${{ secrets.PRODUCTION_SSH_KEY }}
          port: ${{ secrets.PRODUCTION_SSH_PORT || 22 }}
          script: |
            cd /opt/framework-production/current

            # Wait for services to be fully ready (longer for production)
            echo "Waiting 60 seconds for services to initialize..."
            sleep 60

            # Check container status
            echo "Checking container status..."
            docker-compose -f docker-compose.base.yml -f docker-compose.prod.yml ps

            # Check service health
            echo "Checking service health..."
            docker-compose -f docker-compose.base.yml -f docker-compose.prod.yml exec -T production-app php -v

            # Check PHP-FPM is running
            echo "Checking PHP-FPM process..."
            docker-compose -f docker-compose.base.yml -f docker-compose.prod.yml exec -T production-app pgrep php-fpm

            # Test HTTP endpoint (via Traefik)
            echo "Testing production endpoint..."
            curl -f -k https://michaelschiemer.de/health || {
              echo "⚠️ Health check endpoint failed"
              exit 1
            }

            # Check Redis connection
            echo "Checking Redis connection..."
            docker-compose -f docker-compose.base.yml -f docker-compose.prod.yml exec -T production-redis redis-cli ping

            echo ""
            echo "✅ All health checks passed!"

      - name: Smoke tests
        uses: appleboy/ssh-action@master
        with:
          host: ${{ secrets.PRODUCTION_HOST }}
          username: ${{ secrets.PRODUCTION_USER }}
          key: ${{ secrets.PRODUCTION_SSH_KEY }}
          port: ${{ secrets.PRODUCTION_SSH_PORT || 22 }}
          script: |
            echo "Running production smoke tests..."

            # Test main page
            curl -f -k https://michaelschiemer.de/ > /dev/null 2>&1 && echo "✅ Main page accessible" || {
              echo "❌ Main page failed"
              exit 1
            }

            # Test API health
            curl -f -k https://michaelschiemer.de/api/health > /dev/null 2>&1 && echo "✅ API health check passed" || {
              echo "❌ API health check failed"
              exit 1
            }

            echo "✅ Smoke tests completed successfully"

      - name: Rollback on failure
        if: failure()
        uses: appleboy/ssh-action@master
        with:
          host: ${{ secrets.PRODUCTION_HOST }}
          username: ${{ secrets.PRODUCTION_USER }}
          key: ${{ secrets.PRODUCTION_SSH_KEY }}
          port: ${{ secrets.PRODUCTION_SSH_PORT || 22 }}
          script: |
            cd /opt/framework-production

            if [ -d "$(ls -dt backup_* 2>/dev/null | head -n1)" ]; then
              echo "🚨 Rolling back to previous deployment..."
              latest_backup=$(ls -dt backup_* | head -n1)

              # Stop current broken deployment
              cd current
              docker-compose -f docker-compose.base.yml -f docker-compose.prod.yml down
              cd ..

              # Restore backup
              rm -rf current
              cp -r "$latest_backup" current

              # Start restored deployment
              cd current
              docker-compose -f docker-compose.base.yml -f docker-compose.prod.yml up -d

              # Wait for services
              sleep 30

              # Verify rollback
              docker-compose -f docker-compose.base.yml -f docker-compose.prod.yml ps

              echo "✅ Rollback complete - previous version restored"
            else
              echo "❌ No backup available for rollback"
              echo "⚠️ MANUAL INTERVENTION REQUIRED"
              exit 1
            fi

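      # NOTE: this rollback only swaps back to the newest backup_<timestamp> directory
      # left on the server by the previous deployment and restarts the containers;
      # database changes applied by migrations are not reverted automatically and may
      # require restoring the pre-deployment database backup.
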
      - name: Notify deployment status
        if: always()
        uses: appleboy/ssh-action@master
        with:
          host: ${{ secrets.PRODUCTION_HOST }}
          username: ${{ secrets.PRODUCTION_USER }}
          key: ${{ secrets.PRODUCTION_SSH_KEY }}
          port: ${{ secrets.PRODUCTION_SSH_PORT || 22 }}
          script: |
            if [ "${{ job.status }}" == "success" ]; then
              echo "✅ Production deployment successful"
              echo "URL: https://michaelschiemer.de"
              echo "Deployed at: $(date)"

              # Log deployment
              echo "$(date) - Deployment SUCCESS - Commit: ${{ github.sha }}" >> /opt/framework-production/deployment.log
            else
              echo "❌ Production deployment failed - rollback executed"

              # Log deployment failure
              echo "$(date) - Deployment FAILED - Commit: ${{ github.sha }}" >> /opt/framework-production/deployment.log

              # Send alert (placeholder - implement actual alerting)
              echo "⚠️ ALERT: Production deployment failed. Manual intervention may be required."
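              # Illustrative sketch only: one way to implement the alert would be a chat
              # webhook call, assuming a hypothetical ALERT_WEBHOOK_URL secret is configured:
              #   curl -s -X POST -H "Content-Type: application/json" \
              #     -d '{"text":"Production deployment failed: ${{ github.sha }}"}' \
              #     "${{ secrets.ALERT_WEBHOOK_URL }}"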
            fi

      - name: Clean up build artifacts
        if: always()
        run: |
          echo "Cleaning up deployment artifacts..."
          rm -rf deployment-production
          echo "✅ Cleanup complete"

212
.gitea/workflows/deploy-staging.yml
Normal file
@@ -0,0 +1,212 @@
name: Deploy to Staging

on:
  push:
    branches:
      - staging
  workflow_dispatch:
    inputs:
      force_rebuild:
        description: 'Force rebuild Docker image'
        required: false
        default: 'false'

env:
  REGISTRY: localhost:5000
  IMAGE_NAME: framework
  IMAGE_TAG: staging
  COMPOSE_PROJECT_NAME: framework-staging

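# NOTE: REGISTRY assumes the private Docker registry is reachable at localhost:5000
# from both the Gitea Actions runner and the target server; adjust it if the registry
# lives elsewhere.
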
jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v3
        with:
          fetch-depth: 0

      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2

      - name: Build Docker image
        run: |
          echo "Building Docker image: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:${{ env.IMAGE_TAG }}"
          docker build \
            --file docker/php/Dockerfile \
            --tag ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:${{ env.IMAGE_TAG }} \
            --build-arg ENV=staging \
            --build-arg COMPOSER_INSTALL_FLAGS="--no-dev --optimize-autoloader --no-interaction" \
            .

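      # NOTE: the force_rebuild workflow input declared above is not consumed by the build
      # step yet; one minimal way to honor it (illustrative) would be to append --no-cache
      # to the docker build command when github.event.inputs.force_rebuild == 'true'.
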
      - name: Push image to private registry
        run: |
          echo "Pushing image to registry..."
          docker push ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:${{ env.IMAGE_TAG }}

      - name: Prepare deployment files
        run: |
          echo "Preparing deployment files..."
          mkdir -p deployment-staging
          cp docker-compose.base.yml deployment-staging/
          cp docker-compose.staging.yml deployment-staging/
          cp -r docker deployment-staging/

          # Create deployment script
          cat > deployment-staging/deploy.sh << 'EOF'
          #!/bin/bash
          set -e

          echo "=================================================="
          echo "Starting Staging Deployment"
          echo "=================================================="
          echo ""

          # Pull latest images
          echo "[1/5] Pulling latest Docker images..."
          docker-compose -f docker-compose.base.yml -f docker-compose.staging.yml pull

          # Stop existing containers
          echo "[2/5] Stopping existing containers..."
          docker-compose -f docker-compose.base.yml -f docker-compose.staging.yml down

          # Start new containers
          echo "[3/5] Starting new containers..."
          docker-compose -f docker-compose.base.yml -f docker-compose.staging.yml up -d

          # Wait for services to be healthy
          echo "[4/5] Waiting for services to be healthy..."
          sleep 10

          # Verify health
          echo "[5/5] Verifying deployment..."
          docker-compose -f docker-compose.base.yml -f docker-compose.staging.yml ps

          echo ""
          echo "=================================================="
          echo "Staging Deployment Complete"
          echo "=================================================="
          EOF

          chmod +x deployment-staging/deploy.sh

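      # The generated deploy.sh is copied to the server by the steps below; to re-run a
      # staging deployment manually, SSH in and run:
      #   cd /opt/framework-staging/current && ./deploy.sh
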
      - name: Deploy to staging server
        uses: appleboy/ssh-action@master
        with:
          host: ${{ secrets.STAGING_HOST }}
          username: ${{ secrets.STAGING_USER }}
          key: ${{ secrets.STAGING_SSH_KEY }}
          port: ${{ secrets.STAGING_SSH_PORT || 22 }}
          script: |
            # Create deployment directory
            mkdir -p /opt/framework-staging
            cd /opt/framework-staging

            # Backup current deployment
            if [ -d "current" ]; then
              echo "Backing up current deployment..."
              timestamp=$(date +%Y%m%d_%H%M%S)
              mv current "backup_${timestamp}"
              # Keep only last 5 backups
              ls -dt backup_* | tail -n +6 | xargs rm -rf
            fi

            # Create new deployment directory
            mkdir -p current
            cd current

      - name: Copy deployment files
        uses: appleboy/scp-action@master
        with:
          host: ${{ secrets.STAGING_HOST }}
          username: ${{ secrets.STAGING_USER }}
          key: ${{ secrets.STAGING_SSH_KEY }}
          port: ${{ secrets.STAGING_SSH_PORT || 22 }}
          source: "deployment-staging/*"
          target: "/opt/framework-staging/current/"
          strip_components: 1

      - name: Execute deployment
        uses: appleboy/ssh-action@master
        with:
          host: ${{ secrets.STAGING_HOST }}
          username: ${{ secrets.STAGING_USER }}
          key: ${{ secrets.STAGING_SSH_KEY }}
          port: ${{ secrets.STAGING_SSH_PORT || 22 }}
          script: |
            cd /opt/framework-staging/current

            # Execute deployment script
            ./deploy.sh

      - name: Health check
        uses: appleboy/ssh-action@master
        with:
          host: ${{ secrets.STAGING_HOST }}
          username: ${{ secrets.STAGING_USER }}
          key: ${{ secrets.STAGING_SSH_KEY }}
          port: ${{ secrets.STAGING_SSH_PORT || 22 }}
          script: |
            cd /opt/framework-staging/current

            # Wait for services to be fully ready
            echo "Waiting 30 seconds for services to initialize..."
            sleep 30

            # Check container status
            echo "Checking container status..."
            docker-compose -f docker-compose.base.yml -f docker-compose.staging.yml ps

            # Check service health
            echo "Checking service health..."
            docker-compose -f docker-compose.base.yml -f docker-compose.staging.yml exec -T staging-app php -v

            # Test HTTP endpoint (via internal network)
            echo "Testing HTTP endpoint..."
            docker-compose -f docker-compose.base.yml -f docker-compose.staging.yml exec -T staging-nginx wget -q -O- http://localhost/health || echo "Health check endpoint not yet available"

            echo ""
            echo "Health check complete!"

      - name: Rollback on failure
        if: failure()
        uses: appleboy/ssh-action@master
        with:
          host: ${{ secrets.STAGING_HOST }}
          username: ${{ secrets.STAGING_USER }}
          key: ${{ secrets.STAGING_SSH_KEY }}
          port: ${{ secrets.STAGING_SSH_PORT || 22 }}
          script: |
            cd /opt/framework-staging

            if [ -d "$(ls -dt backup_* 2>/dev/null | head -n1)" ]; then
              echo "Rolling back to previous deployment..."
              latest_backup=$(ls -dt backup_* | head -n1)

              # Stop current broken deployment
              cd current
              docker-compose -f docker-compose.base.yml -f docker-compose.staging.yml down
              cd ..

              # Restore backup
              rm -rf current
              cp -r "$latest_backup" current

              # Start restored deployment
              cd current
              docker-compose -f docker-compose.base.yml -f docker-compose.staging.yml up -d

              echo "Rollback complete!"
            else
              echo "No backup available for rollback"
            fi

      - name: Notify deployment status
        if: always()
        run: |
          if [ "${{ job.status }}" == "success" ]; then
            echo "✅ Staging deployment successful"
            echo "URL: https://staging.michaelschiemer.de"
          else
            echo "❌ Staging deployment failed - rollback executed"
          fi

484
docker-compose.prod.yml
Normal file
@@ -0,0 +1,484 @@
# Production Environment Override
# Usage: docker-compose -f docker-compose.base.yml -f docker-compose.prod.yml up
#
# This file configures services for production deployment:
# - Production-specific service names (production-*)
# - Private registry images (localhost:5000/framework:latest)
# - Git-based code deployment from main branch
# - Traefik integration for michaelschiemer.de domain
# - PostgreSQL connection via postgres-production-internal network
# - Production-grade resource limits and security settings
# - Docker Secrets for sensitive configuration

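# To inspect the fully merged configuration before deploying (handy when debugging
# overrides), run:
#   docker-compose -f docker-compose.base.yml -f docker-compose.prod.yml config
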
services:
  production-app:
    image: localhost:5000/framework:latest
    container_name: production-app
    restart: unless-stopped
    entrypoint: >
      sh -c '
        set -e
        echo "[Production Entrypoint] Starting initialization..."

        # Copy Docker Secrets to /tmp for permission workaround
        if [ -f /run/secrets/db_user_password ]; then
          cp /run/secrets/db_user_password /tmp/db_user_password
          chmod 644 /tmp/db_user_password
          export DB_PASSWORD_FILE=/tmp/db_user_password
        fi

        if [ -f /run/secrets/redis_password ]; then
          cp /run/secrets/redis_password /tmp/redis_password
          chmod 644 /tmp/redis_password
          export REDIS_PASSWORD_FILE=/tmp/redis_password
        fi

        if [ -f /run/secrets/app_key ]; then
          cp /run/secrets/app_key /tmp/app_key
          chmod 644 /tmp/app_key
          export APP_KEY_FILE=/tmp/app_key
        fi

        if [ -f /run/secrets/vault_encryption_key ]; then
          cp /run/secrets/vault_encryption_key /tmp/vault_encryption_key
          chmod 644 /tmp/vault_encryption_key
          export VAULT_ENCRYPTION_KEY_FILE=/tmp/vault_encryption_key
        fi

        if [ -f /run/secrets/git_token ]; then
          cp /run/secrets/git_token /tmp/git_token
          chmod 644 /tmp/git_token
          GIT_TOKEN=$(cat /tmp/git_token)
          export GIT_TOKEN
        fi

        # Git deployment with authentication
        if [ -n "$GIT_REPOSITORY_URL" ] && [ -n "$GIT_TOKEN" ]; then
          echo "[Production Entrypoint] Configuring Git deployment..."

          # Configure Git credentials
          git config --global credential.helper store
          echo "https://oauth2:${GIT_TOKEN}@git.michaelschiemer.de" > ~/.git-credentials

          cd /var/www/html

          # Clone repository if not exists
          if [ ! -d .git ]; then
            echo "[Production Entrypoint] Cloning repository..."
            git clone --branch ${GIT_BRANCH:-main} ${GIT_REPOSITORY_URL} /tmp/repo
            mv /tmp/repo/.git .
            git reset --hard HEAD
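            # (The clone goes to /tmp/repo and only its .git directory is moved into the
            #  docroot; the hard reset then checks the working tree out in place, which
            #  works even when /var/www/html already contains files.)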
          else
            echo "[Production Entrypoint] Pulling latest changes..."
            git fetch origin ${GIT_BRANCH:-main}
            git reset --hard origin/${GIT_BRANCH:-main}
          fi

          echo "[Production Entrypoint] Git deployment completed"
        else
          echo "[Production Entrypoint] Git deployment skipped (no repository configured)"
        fi

        # Install/Update Composer dependencies (production mode)
        if [ -f composer.json ]; then
          echo "[Production Entrypoint] Installing Composer dependencies (production mode)..."
          composer install --no-dev --optimize-autoloader --no-interaction --no-progress
        fi

        # Run database migrations
        if [ -f console.php ]; then
          echo "[Production Entrypoint] Running database migrations..."
          php console.php db:migrate --force || echo "[Production Entrypoint] Migration failed or no migrations pending"
        fi

        # Warm up caches
        if [ -f console.php ]; then
          echo "[Production Entrypoint] Warming up caches..."
          php console.php cache:warm || echo "[Production Entrypoint] Cache warm-up skipped"
        fi

        # Set proper permissions
        chown -R www-data:www-data /var/www/html/storage /var/www/html/var || true
        chmod -R 775 /var/www/html/storage /var/www/html/var || true

        echo "[Production Entrypoint] Initialization complete, starting PHP-FPM..."
        exec php-fpm
      '
    environment:
      - APP_ENV=production
      - APP_DEBUG=false
      - APP_NAME=${APP_NAME:-Michael Schiemer}
      - APP_TIMEZONE=${APP_TIMEZONE:-Europe/Berlin}
      - APP_LOCALE=${APP_LOCALE:-de}
      - APP_URL=https://michaelschiemer.de
      - FORCE_HTTPS=true
      - GIT_REPOSITORY_URL=${GIT_REPOSITORY_URL:-https://git.michaelschiemer.de/michael/framework.git}
      - GIT_BRANCH=${GIT_BRANCH:-main}
      - DB_DRIVER=pgsql
      - DB_HOST=${DB_HOST:-postgres-production}
      - DB_PORT=5432
      - DB_DATABASE=${DB_DATABASE:-michaelschiemer_production}
      - DB_USERNAME=${DB_USERNAME:-postgres}
      - DB_PASSWORD_FILE=/tmp/db_user_password
      - REDIS_HOST=production-redis
      - REDIS_PORT=6379
      - REDIS_PASSWORD_FILE=/tmp/redis_password
      - APP_KEY_FILE=/tmp/app_key
      - VAULT_ENCRYPTION_KEY_FILE=/tmp/vault_encryption_key
      - OPCACHE_ENABLED=true
      - ANALYTICS_ENABLED=true
      - ANALYTICS_TRACK_PERFORMANCE=false
      - SESSION_FINGERPRINT_STRICT=true
      - ADMIN_ALLOWED_IPS=${ADMIN_ALLOWED_IPS:-127.0.0.1,::1}
      - COMPOSE_PROJECT_NAME=framework-production
    volumes:
      - production-code:/var/www/html
      - production-storage:/var/www/html/storage
      - production-logs:/var/www/html/storage/logs
      - composer-cache:/root/.composer/cache
    secrets:
      - db_user_password
      - redis_password
      - app_key
      - vault_encryption_key
      - git_token
    networks:
      - production-internal
      - postgres-production-internal
    healthcheck:
      test: ["CMD", "php", "-v"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 60s
    deploy:
      resources:
        limits:
          memory: 1G
          cpus: '2.0'
        reservations:
          memory: 512M
          cpus: '1.0'
    logging:
      driver: "json-file"
      options:
        max-size: "10m"
        max-file: "5"

  production-nginx:
    image: nginx:alpine
    container_name: production-nginx
    restart: unless-stopped
    depends_on:
      production-app:
        condition: service_healthy
    volumes:
      - production-code:/var/www/html:ro
      - ./docker/nginx/production.conf:/etc/nginx/conf.d/default.conf:ro
      - production-logs:/var/www/html/storage/logs
    networks:
      - production-internal
      - traefik-public
    labels:
      # Traefik Configuration
      - "traefik.enable=true"
      - "traefik.docker.network=traefik-public"

      # Primary Domain Router (HTTPS)
      - "traefik.http.routers.production.rule=Host(`michaelschiemer.de`) || Host(`www.michaelschiemer.de`)"
      - "traefik.http.routers.production.entrypoints=websecure"
      - "traefik.http.routers.production.tls=true"
      - "traefik.http.routers.production.tls.certresolver=letsencrypt"
      - "traefik.http.routers.production.service=production"

      # HTTP to HTTPS Redirect
      - "traefik.http.routers.production-http.rule=Host(`michaelschiemer.de`) || Host(`www.michaelschiemer.de`)"
      - "traefik.http.routers.production-http.entrypoints=web"
      - "traefik.http.routers.production-http.middlewares=production-redirect-https"

      # Middlewares
      - "traefik.http.middlewares.production-redirect-https.redirectscheme.scheme=https"
      - "traefik.http.middlewares.production-redirect-https.redirectscheme.permanent=true"

      # WWW to non-WWW redirect
      - "traefik.http.middlewares.production-redirect-www.redirectregex.regex=^https://www\\.michaelschiemer\\.de/(.*)"
      - "traefik.http.middlewares.production-redirect-www.redirectregex.replacement=https://michaelschiemer.de/$${1}"
      - "traefik.http.middlewares.production-redirect-www.redirectregex.permanent=true"

      # Security Headers
      - "traefik.http.middlewares.production-security-headers.headers.stsSeconds=31536000"
      - "traefik.http.middlewares.production-security-headers.headers.stsIncludeSubdomains=true"
      - "traefik.http.middlewares.production-security-headers.headers.stsPreload=true"
      - "traefik.http.middlewares.production-security-headers.headers.forceSTSHeader=true"
      - "traefik.http.middlewares.production-security-headers.headers.frameDeny=true"
      - "traefik.http.middlewares.production-security-headers.headers.contentTypeNosniff=true"
      - "traefik.http.middlewares.production-security-headers.headers.browserXssFilter=true"
      - "traefik.http.middlewares.production-security-headers.headers.referrerPolicy=strict-origin-when-cross-origin"

      # Apply middleware chain
      - "traefik.http.routers.production.middlewares=production-redirect-www,production-security-headers,production-rate-limit"

      # Rate Limiting
      - "traefik.http.middlewares.production-rate-limit.ratelimit.average=100"
      - "traefik.http.middlewares.production-rate-limit.ratelimit.burst=200"
      - "traefik.http.middlewares.production-rate-limit.ratelimit.period=1s"

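      # (average=100 with period=1s and burst=200 lets Traefik admit roughly 100 requests
      #  per second on average and absorb short spikes up to 200; tune for expected traffic.)
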
      # Service Configuration
      - "traefik.http.services.production.loadbalancer.server.port=80"
      - "traefik.http.services.production.loadbalancer.healthcheck.path=/health"
      - "traefik.http.services.production.loadbalancer.healthcheck.interval=30s"
      - "traefik.http.services.production.loadbalancer.healthcheck.timeout=5s"
    healthcheck:
      test: ["CMD", "wget", "--quiet", "--tries=1", "--spider", "http://localhost/health"]
      interval: 30s
      timeout: 5s
      retries: 3
      start_period: 10s
    deploy:
      resources:
        limits:
          memory: 512M
          cpus: '1.0'
        reservations:
          memory: 256M
          cpus: '0.5'
    logging:
      driver: "json-file"
      options:
        max-size: "10m"
        max-file: "5"

  production-redis:
    image: redis:7-alpine
    container_name: production-redis
    restart: unless-stopped
    entrypoint: >
      sh -c '
        set -e
        if [ -f /run/secrets/redis_password ]; then
          cp /run/secrets/redis_password /tmp/redis_password
          chmod 644 /tmp/redis_password
          REDIS_PASSWORD=$(cat /tmp/redis_password)
          exec redis-server \
            --requirepass "$REDIS_PASSWORD" \
            --maxmemory 512mb \
            --maxmemory-policy allkeys-lru \
            --save 900 1 \
            --save 300 10 \
            --save 60 10000 \
            --appendonly yes \
            --appendfsync everysec
        else
          echo "ERROR: Redis password secret not found"
          exit 1
        fi
      '
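    # Persistence: RDB snapshots (after 900s/1, 300s/10, or 60s/10000 changed keys) plus an
    # append-only file fsynced every second; the data lives in the production-redis-data volume.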
    volumes:
      - production-redis-data:/data
    secrets:
      - redis_password
    networks:
      - production-internal
    healthcheck:
      test: ["CMD", "sh", "-c", "redis-cli --no-auth-warning -a $(cat /tmp/redis_password 2>/dev/null || echo '') ping || exit 1"]
      interval: 30s
      timeout: 5s
      retries: 3
      start_period: 10s
    deploy:
      resources:
        limits:
          memory: 512M
          cpus: '1.0'
        reservations:
          memory: 256M
          cpus: '0.5'
    logging:
      driver: "json-file"
      options:
        max-size: "10m"
        max-file: "3"

  production-queue-worker:
    image: localhost:5000/framework:latest
    container_name: production-queue-worker
    restart: unless-stopped
    entrypoint: >
      sh -c '
        set -e
        echo "[Queue Worker] Starting initialization..."

        # Copy Docker Secrets
        if [ -f /run/secrets/db_user_password ]; then
          cp /run/secrets/db_user_password /tmp/db_user_password
          chmod 644 /tmp/db_user_password
          export DB_PASSWORD_FILE=/tmp/db_user_password
        fi

        if [ -f /run/secrets/redis_password ]; then
          cp /run/secrets/redis_password /tmp/redis_password
          chmod 644 /tmp/redis_password
          export REDIS_PASSWORD_FILE=/tmp/redis_password
        fi

        echo "[Queue Worker] Starting worker process..."
        exec php /var/www/html/worker.php
      '
    environment:
      - APP_ENV=production
      - APP_DEBUG=false
      - DB_HOST=${DB_HOST:-postgres-production}
      - DB_PORT=5432
      - DB_DATABASE=${DB_DATABASE:-michaelschiemer_production}
      - DB_USERNAME=${DB_USERNAME:-postgres}
      - DB_PASSWORD_FILE=/tmp/db_user_password
      - REDIS_HOST=production-redis
      - REDIS_PORT=6379
      - REDIS_PASSWORD_FILE=/tmp/redis_password
      - WORKER_SLEEP_TIME=${WORKER_SLEEP_TIME:-100000}
      - WORKER_MAX_JOBS=${WORKER_MAX_JOBS:-10000}
    volumes:
      - production-code:/var/www/html:ro
      - production-storage:/var/www/html/storage
      - production-logs:/var/www/html/storage/logs
    secrets:
      - db_user_password
      - redis_password
    networks:
      - production-internal
      - postgres-production-internal
    depends_on:
      production-app:
        condition: service_healthy
      production-redis:
        condition: service_healthy
    stop_grace_period: 30s
    deploy:
      resources:
        limits:
          memory: 1G
          cpus: '1.0'
        reservations:
          memory: 512M
          cpus: '0.5'
    logging:
      driver: "json-file"
      options:
        max-size: "10m"
        max-file: "3"

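  # The 30s stop_grace_period above gives the worker a window to finish its current job
  # between SIGTERM and SIGKILL during redeploys (assuming worker.php handles SIGTERM).
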
  production-scheduler:
    image: localhost:5000/framework:latest
    container_name: production-scheduler
    restart: unless-stopped
    entrypoint: >
      sh -c '
        set -e
        echo "[Scheduler] Starting initialization..."

        # Copy Docker Secrets
        if [ -f /run/secrets/db_user_password ]; then
          cp /run/secrets/db_user_password /tmp/db_user_password
          chmod 644 /tmp/db_user_password
          export DB_PASSWORD_FILE=/tmp/db_user_password
        fi

        if [ -f /run/secrets/redis_password ]; then
          cp /run/secrets/redis_password /tmp/redis_password
          chmod 644 /tmp/redis_password
          export REDIS_PASSWORD_FILE=/tmp/redis_password
        fi

        echo "[Scheduler] Starting scheduler process..."
        exec php /var/www/html/scheduler.php
      '
    environment:
      - APP_ENV=production
      - APP_DEBUG=false
      - DB_HOST=${DB_HOST:-postgres-production}
      - DB_PORT=5432
      - DB_DATABASE=${DB_DATABASE:-michaelschiemer_production}
      - DB_USERNAME=${DB_USERNAME:-postgres}
      - DB_PASSWORD_FILE=/tmp/db_user_password
      - REDIS_HOST=production-redis
      - REDIS_PORT=6379
      - REDIS_PASSWORD_FILE=/tmp/redis_password
    volumes:
      - production-code:/var/www/html:ro
      - production-storage:/var/www/html/storage
      - production-logs:/var/www/html/storage/logs
    secrets:
      - db_user_password
      - redis_password
    networks:
      - production-internal
      - postgres-production-internal
    depends_on:
      production-app:
        condition: service_healthy
      production-redis:
        condition: service_healthy
    stop_grace_period: 30s
    deploy:
      resources:
        limits:
          memory: 512M
          cpus: '0.5'
        reservations:
          memory: 256M
          cpus: '0.25'
    logging:
      driver: "json-file"
      options:
        max-size: "10m"
        max-file: "3"

  # Disable base services (override from docker-compose.base.yml)
  web:
    profiles: [never]

  php:
    profiles: [never]

  php-test:
    profiles: [never]

  redis:
    profiles: [never]

  queue-worker:
    profiles: [never]

  minio:
    profiles: [never]

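  # (Assigning the base services to a "never" profile that is never activated means they
  #  simply do not start under this override; only the production-* services come up.)
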
# Networks
networks:
  production-internal:
    driver: bridge
    internal: false

  postgres-production-internal:
    external: true
    name: postgres-production-internal

  traefik-public:
    external: true
    name: traefik-public

# Volumes
volumes:
  production-code:
    driver: local

  production-storage:
    driver: local

  production-logs:
    driver: local

  production-redis-data:
    driver: local

  composer-cache:
    driver: local