fix: DockerSecretsResolver - don't normalize absolute paths like /var/www/html/...

Some checks failed: Deploy Application / deploy (push) has been cancelled

deployment/legacy/ARCHITECTURE_ANALYSIS.md (new file, 176 lines)
@@ -0,0 +1,176 @@
# Legacy Deployment Architecture Analysis

**Created**: 2025-01-24
**Status**: Archived - System being redesigned

## Executive Summary

This document analyzes the existing deployment architecture and the issues that led to the decision to rebuild it from scratch.

## Discovered Issues

### 1. Docker Swarm vs Docker Compose Confusion

**Problem**: System designed for Docker Swarm but running with Docker Compose
- Stack files reference Swarm features (secrets, configs)
- Docker Swarm not initialized on target server
- Local development uses Docker Compose
- Unclear which of the two the production deployment should use

**Impact**: Container startup failures, service discovery issues

### 2. Distributed Stack Files

**Current Structure**:
```
deployment/stacks/
├── traefik/                  # Reverse proxy
├── postgresql-production/
├── postgresql-staging/
├── gitea/                    # Git server
├── redis/
├── minio/
├── monitoring/
├── registry/
└── semaphore/
```

**Problems**:
- No clear dependency graph between stacks
- Unclear startup order
- Volume mounts across stacks
- Network configuration scattered

### 3. Ansible Deployment Confusion

**Ansible Usage**:
- Server provisioning (install-docker.yml)
- Application deployment (sync-application-code.yml)
- Container recreation (recreate-containers-with-env.yml)
- Stack synchronization (sync-stacks.yml)

**Problem**: Ansible used for BOTH provisioning AND deployment
- Should only provision servers
- Deployment should be via CI/CD
- Creates unclear responsibilities

### 4. Environment-Specific Issues

**Environments Identified**:
- `local` - Developer machines (Docker Compose)
- `staging` - Hetzner server (unclear Docker Compose vs Swarm)
- `production` - Hetzner server (unclear Docker Compose vs Swarm)

**Problems**:
- No unified docker-compose files per environment
- Environment variables scattered (.env, secrets, Ansible vars)
- SSL certificates managed differently per environment

### 5. Specific Container Failures

**postgres-production-backup**:
- Container doesn't exist (was in a restart loop)
- Volume mounts not accessible: `/scripts/backup-entrypoint.sh`
- Exit code 255 (file not found)
- Restart policy causing the loop

**Root Causes**:
- Relative volume paths in docker-compose.yml
- Container running from a different working directory
- Stack not properly initialized
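A minimal sketch of the volume-path pitfall (paths are illustrative, not taken from the actual stack files): a relative bind mount is resolved against the Compose project directory, so starting the stack from an unexpected location (e.g. via `-f` from another directory) is one plausible way the mounted folder ends up empty, producing exactly the "file not found" restart loop described above.

```yaml
services:
  backup:
    # Fragile: "./scripts" depends on where the project directory ends up
    # when the stack is started; from the wrong directory the mount is
    # empty and the entrypoint script is missing (exit code 255).
    volumes:
      - ./scripts:/scripts

  backup-fixed:
    # Robust: an absolute host path (or a named volume) resolves the same
    # way no matter how the stack is invoked.
    volumes:
      - /home/deploy/deployment/scripts:/scripts
```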
### 6. Network Architecture Unclear

**Networks Found**:
- `traefik-public` (external)
- `app-internal` (external, for PostgreSQL)
- `backend`, `cache`, `postgres-production-internal`

**Problems**:
- Which stacks share which networks?
- How do services discover each other?
- Traefik routing configuration scattered

## Architecture Diagram (Current State)

```
┌─────────────────────────────────────────────────────────────┐
│       Server (Docker Compose? Docker Swarm? Unclear)        │
│                                                             │
│  ┌──────────────┐   ┌──────────────┐   ┌──────────────┐     │
│  │   Traefik    │──▶│     App      │──▶│  PostgreSQL  │     │
│  │    Stack     │   │    Stack     │   │    Stack     │     │
│  └──────────────┘   └──────────────┘   └──────────────┘     │
│         │                  │                   │            │
│         │                  │                   │            │
│  ┌──────▼──────┐    ┌──────▼─────┐   ┌─────────▼──────┐     │
│  │    Gitea    │    │   Redis    │   │     MinIO      │     │
│  │    Stack    │    │   Stack    │   │     Stack      │     │
│  └─────────────┘    └────────────┘   └────────────────┘     │
│                                                             │
│  Networks: traefik-public, app-internal, backend, cache     │
│  Volumes:  Relative paths, absolute paths, mixed            │
│  Secrets:  Docker secrets (Swarm), .env files, Ansible vars │
└─────────────────────────────────────────────────────────────┘

                    ▲
                    │ Deployment via?
                    │ - docker-compose up?
                    │ - docker stack deploy?
                    │ - Ansible playbooks?
                    │ UNCLEAR
                    │
┌───────────────────┴─────────────────────────────────┐
│  Developer Machine / CI/CD (Gitea)                  │
│  - Ansible playbooks in deployment/ansible/         │
│  - Stack files in deployment/stacks/                │
│  - Application code in src/                         │
└─────────────────────────────────────────────────────┘
```

## Decision Rationale: Rebuild vs Repair

### Why Rebuild?

1. **Architectural Clarity**: Current system mixes concepts (Swarm/Compose, provisioning/deployment)
2. **Environment Separation**: Clean separation of local/staging/prod configurations
3. **CI/CD Integration**: Design for Gitea Actions from the start
4. **Maintainability**: Single source of truth per environment
5. **Debugging Difficulty**: Current issues are symptoms of architectural problems

### What to Keep?

- ✅ Traefik configuration (reverse proxy setup is solid)
- ✅ PostgreSQL backup scripts (logic is good, just needs proper mounting)
- ✅ SSL certificate generation (Let's Encrypt integration works)
- ✅ Ansible server provisioning playbooks (keep for initial setup)

### What to Redesign?

- ❌ Stack organization (too fragmented)
- ❌ Deployment method (unclear Ansible vs CI/CD)
- ❌ Environment configuration (scattered variables)
- ❌ Volume mount strategy (relative paths causing issues)
- ❌ Network architecture (unclear dependencies)

## Lessons Learned

1. **Consistency is Key**: Choose Docker Compose OR Docker Swarm, not both
2. **Environment Files**: One docker-compose.{env}.yml per environment
3. **Ansible Scope**: Only for server provisioning, NOT deployment
4. **CI/CD First**: Gitea Actions should handle deployment
5. **Volume Paths**: Always use absolute paths or named volumes
6. **Network Clarity**: Explicit network definitions, clear service discovery

## Next Steps

See `deployment/NEW_ARCHITECTURE.md` for the redesigned system.

## Archive Contents

This `deployment/legacy/` directory contains:
- Original stack files (archived)
- Ansible playbooks (reference only)
- This analysis document

**DO NOT USE THESE FILES FOR NEW DEPLOYMENTS**

deployment/legacy/NEW_ARCHITECTURE.md (new file, 738 lines)
@@ -0,0 +1,738 @@
# New Deployment Architecture

**Created**: 2025-11-24
**Status**: Design Phase - Implementation Pending

## Executive Summary

This document defines the redesigned deployment architecture using Docker Compose for all environments (local, staging, production). The architecture addresses all issues identified in `legacy/ARCHITECTURE_ANALYSIS.md` and provides a clear, maintainable deployment strategy.

## Architecture Principles

### 1. Docker Compose for All Environments
- **No Docker Swarm**: Use Docker Compose exclusively for simplicity
- **Environment-Specific Files**: One `docker-compose.{env}.yml` per environment
- **Shared Base**: Common configuration in `docker-compose.base.yml`
- **Override Pattern**: Environment files override base configuration (see the sketch below)
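A minimal sketch of the override pattern (the `php` service and its settings follow the examples later in this document): the base file declares what every environment shares, and each environment file is merged on top via multiple `-f` flags, overriding only what differs.

```yaml
# docker-compose.base.yml - settings shared by every environment
services:
  php:
    environment:
      - DB_HOST=postgres   # external PostgreSQL stack
---
# docker-compose.local.yml - merged on top of the base file via:
#   docker compose -f docker-compose.base.yml -f docker-compose.local.yml up
services:
  php:
    environment:
      - APP_DEBUG=true               # local-only override
    volumes:
      - ./:/var/www/html:cached      # live code editing, local only
```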
### 2. Clear Separation of Concerns
- **Ansible**: Server provisioning ONLY (install Docker, set up users, configure firewall)
- **Gitea Actions**: Application deployment via CI/CD pipelines
- **Docker Compose**: Runtime orchestration and service management

### 3. Explicit Configuration
- **Absolute Paths**: No relative paths in volume mounts
- **Named Volumes**: For persistent data (databases, caches)
- **Environment Variables**: Clear `.env.{environment}` files
- **Docker Secrets**: File-based secrets via `*_FILE` pattern

### 4. Network Isolation
- **traefik-public**: External network for Traefik ingress
- **backend**: Internal network for application services
- **cache**: Isolated network for Redis
- **app-internal**: External network for shared PostgreSQL

## Service Architecture

### Core Services

```
┌─────────────────────────────────────────────────────────────┐
│                          Internet                           │
└───────────────────────────┬─────────────────────────────────┘
                            │
                    ┌───────▼────────┐
                    │    Traefik     │  (traefik-public)
                    │ Reverse Proxy  │
                    └───────┬────────┘
                            │
        ┌───────────────────┼───────────────────┐
        │                   │                   │
   ┌────▼───┐        ┌──────▼──────┐     ┌──────▼──────┐
   │  Web   │        │     PHP     │     │    Queue    │
   │ Nginx  │◄───────│   PHP-FPM   │     │   Worker    │
   └────────┘        └──────┬──────┘     └──────┬──────┘
                            │                   │
    (backend network)       │                   │
                            │                   │
        ┌───────────────────┼───────────────────┤
        │                   │                   │
  ┌─────▼────┐       ┌──────▼──────┐     ┌──────▼──────┐
  │  Redis   │       │ PostgreSQL  │     │    MinIO    │
  │  Cache   │       │ (External)  │     │   Storage   │
  └──────────┘       └─────────────┘     └─────────────┘
```

### Service Responsibilities

**web** (Nginx):
- Static file serving
- PHP-FPM proxy
- HTTPS termination (via Traefik)
- Security headers

**php** (PHP-FPM):
- Application runtime
- Framework code execution
- Database connections
- Queue job dispatching

**postgres** (PostgreSQL):
- Primary database
- **External Stack**: Shared across environments via `app-internal` network
- Backup automation via separate container

**redis** (Redis):
- Session storage
- Cache layer
- Queue backend

**queue-worker** (PHP CLI):
- Background job processing
- Scheduled task execution
- Async operations

**minio** (S3-compatible storage):
- File uploads
- Asset storage
- Backup storage

**traefik** (Reverse Proxy):
- Dynamic routing
- SSL/TLS termination
- Let's Encrypt automation
- Load balancing

## Environment Specifications

### docker-compose.local.yml (Development)

**Purpose**: Fast local development with debugging enabled

**Key Features**:
- Development ports: 8888:80, 443:443, 5433:5432
- Host volume mounts for live code editing: `./ → /var/www/html`
- Xdebug enabled: `XDEBUG_MODE=debug`
- Debug flags: `APP_DEBUG=true`
- Docker socket access: `/var/run/docker.sock` (for Docker management)
- Relaxed resource limits

**Services**:
```yaml
services:
  web:
    ports:
      - "8888:80"
      - "443:443"
    environment:
      - APP_ENV=development
    volumes:
      - ./:/var/www/html:cached
    restart: unless-stopped

  php:
    volumes:
      - ./:/var/www/html:cached
      - /var/run/docker.sock:/var/run/docker.sock:ro
    environment:
      - APP_DEBUG=true
      - XDEBUG_MODE=debug
      - DB_HOST=postgres  # External PostgreSQL Stack
      - DB_PASSWORD_FILE=/run/secrets/db_user_password
    secrets:
      - db_user_password
      - redis_password
      - app_key
    networks:
      - backend
      - app-internal  # External PostgreSQL Stack

  redis:
    # The secret file must be read by a shell at container start, so wrap
    # the command in "sh -c"; "$$" escapes the dollar sign so Compose does
    # not try to interpolate it itself.
    command: sh -c 'redis-server --requirepass "$$(cat /run/secrets/redis_password)"'
    secrets:
      - redis_password
```

**Networks**:
- `backend`: Internal communication (web ↔ php)
- `cache`: Redis isolation
- `app-internal`: **External** - connects to PostgreSQL Stack

**Secrets**: File-based in `./secrets/` directory (gitignored)
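The service definitions above also reference networks and secrets that need top-level declarations; a minimal sketch consistent with the lists above (the secret file names are illustrative - see the secret files structure later in this document):

```yaml
networks:
  backend:
    driver: bridge
  cache:
    driver: bridge
  app-internal:
    external: true   # created and owned by the shared PostgreSQL stack

secrets:
  db_user_password:
    file: ./secrets/local/db_password.txt      # gitignored
  redis_password:
    file: ./secrets/local/redis_password.txt
  app_key:
    file: ./secrets/local/app_key.txt
```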
### docker-compose.staging.yml (Staging)

**Purpose**: Production-like environment for testing deployments

**Key Features**:
- Traefik with Let's Encrypt **staging** certificates
- Production-like resource limits (moderate)
- External PostgreSQL via `app-internal` network
- No host mounts - code baked into Docker image
- Moderate logging (JSON format)

**Services**:
```yaml
services:
  web:
    image: registry.michaelschiemer.de/web:${GIT_COMMIT}
    networks:
      - traefik-public
      - backend
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.web-staging.rule=Host(`staging.michaelschiemer.de`)"
      - "traefik.http.routers.web-staging.entrypoints=websecure"
      - "traefik.http.routers.web-staging.tls.certresolver=letsencrypt-staging"
    deploy:
      resources:
        limits:
          memory: 256M
          cpus: "0.5"
        reservations:
          memory: 128M

  php:
    image: registry.michaelschiemer.de/php:${GIT_COMMIT}
    environment:
      - APP_ENV=staging
      - APP_DEBUG=false
      - XDEBUG_MODE=off
      - DB_HOST=postgres
      - DB_PASSWORD_FILE=/run/secrets/db_user_password_staging
    secrets:
      - db_user_password_staging
      - redis_password_staging
      - app_key_staging
    networks:
      - backend
      - app-internal
    deploy:
      resources:
        limits:
          memory: 512M
          cpus: "1.0"

  traefik:
    image: traefik:v3.0
    command:
      - "--certificatesresolvers.letsencrypt-staging.acme.caserver=https://acme-staging-v02.api.letsencrypt.org/directory"
    networks:
      - traefik-public
```

**Networks**:
- `traefik-public`: **External** - shared Traefik network
- `backend`: Internal application network
- `app-internal`: **External** - shared PostgreSQL network

**Image Strategy**: Pre-built images from Gitea registry, tagged with Git commit SHA

### docker-compose.prod.yml (Production)

**Purpose**: Hardened production environment with full security

**Key Features**:
- Production SSL certificates (Let's Encrypt production CA)
- Strict security: `APP_DEBUG=false`, `XDEBUG_MODE=off`
- Resource limits: production-grade (higher than staging)
- Health checks for all services
- Read-only root filesystem where possible
- No-new-privileges security option
- Comprehensive logging

**Services**:
```yaml
services:
  web:
    image: registry.michaelschiemer.de/web:${GIT_TAG}
    read_only: true
    security_opt:
      - no-new-privileges:true
    networks:
      - traefik-public
      - backend
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.web-prod.rule=Host(`michaelschiemer.de`) || Host(`www.michaelschiemer.de`)"
      - "traefik.http.routers.web-prod.entrypoints=websecure"
      - "traefik.http.routers.web-prod.tls.certresolver=letsencrypt"
    deploy:
      resources:
        limits:
          memory: 512M
          cpus: "1.0"
        reservations:
          memory: 256M
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost/health"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 40s

  php:
    image: registry.michaelschiemer.de/php:${GIT_TAG}
    security_opt:
      - no-new-privileges:true
    environment:
      - APP_ENV=production
      - APP_DEBUG=false
      - XDEBUG_MODE=off
      - DB_HOST=postgres
      - DB_PASSWORD_FILE=/run/secrets/db_user_password_prod
    secrets:
      - db_user_password_prod
      - redis_password_prod
      - app_key_prod
    networks:
      - backend
      - app-internal
    deploy:
      resources:
        limits:
          memory: 1G
          cpus: "2.0"
        reservations:
          memory: 512M
    healthcheck:
      test: ["CMD", "php-fpm-healthcheck"]
      interval: 30s
      timeout: 10s
      retries: 3

  traefik:
    image: traefik:v3.0
    command:
      - "--certificatesresolvers.letsencrypt.acme.caserver=https://acme-v02.api.letsencrypt.org/directory"
    networks:
      - traefik-public
```

**Image Strategy**: Release-tagged images from Gitea registry (semantic versioning)

**Security Hardening**:
- Read-only root filesystem
- No privilege escalation
- AppArmor/SELinux profiles
- Resource quotas enforced

## Volume Strategy

### Named Volumes (Persistent Data)

**Database Volumes**:
```yaml
volumes:
  postgres-data:
    driver: local
  redis-data:
    driver: local
  minio-data:
    driver: local
```

**Characteristics**:
- Managed by Docker
- Persisted across container restarts
- Backed up regularly

### Bind Mounts (Development Only)

**Local Development**:
```yaml
volumes:
  - /absolute/path/to/project:/var/www/html:cached
  - /absolute/path/to/storage/logs:/var/www/html/storage/logs:rw
```

**Rules**:
- **Absolute paths ONLY** - no relative paths
- Development environment only
- Not used in staging/production

### Volume Mount Patterns

**Application Code**:
- **Local**: Bind mount (`./:/var/www/html`) for live editing
- **Staging/Prod**: Baked into Docker image (no mount)

**Logs**:
- **All Environments**: Named volume or bind mount to host for persistence

**Uploads/Assets**:
- **All Environments**: MinIO for S3-compatible storage

## Secret Management

### Docker Secrets via File Pattern

**Framework Support**: The Custom PHP Framework supports the `*_FILE` environment variable pattern

**Example**:
```yaml
# Environment variable points to secret file
environment:
  - DB_PASSWORD_FILE=/run/secrets/db_password

# Secret definition
secrets:
  db_password:
    file: ./secrets/db_password.txt
```

### Secret Files Structure

```
deployment/
├── secrets/                  # Gitignored!
│   ├── local/
│   │   ├── db_password.txt
│   │   ├── redis_password.txt
│   │   └── app_key.txt
│   ├── staging/
│   │   ├── db_password.txt
│   │   ├── redis_password.txt
│   │   └── app_key.txt
│   └── production/
│       ├── db_password.txt
│       ├── redis_password.txt
│       └── app_key.txt
```

**Security**:
- **NEVER commit secrets** to version control
- Add `secrets/` to `.gitignore`
- Use Ansible Vault or an external secret manager for production secrets
- Rotate secrets regularly

### Framework Integration

The framework automatically loads secrets via `EncryptedEnvLoader`:

```php
// Framework automatically resolves *_FILE variables
$dbPassword = $env->get('DB_PASSWORD');       // Reads from DB_PASSWORD_FILE
$redisPassword = $env->get('REDIS_PASSWORD'); // Reads from REDIS_PASSWORD_FILE
```

## Environment Variables Strategy

### .env Files per Environment

**Structure**:
```
deployment/
├── .env.local        # Local development
├── .env.staging      # Staging environment
├── .env.production   # Production environment
└── .env.example      # Template (committed to git)
```

**Composition Command**:
```bash
# Local
docker compose -f docker-compose.base.yml -f docker-compose.local.yml --env-file .env.local up

# Staging
docker compose -f docker-compose.base.yml -f docker-compose.staging.yml --env-file .env.staging up

# Production
docker compose -f docker-compose.base.yml -f docker-compose.prod.yml --env-file .env.production up
```

### Variable Categories

**Application**:
```bash
APP_ENV=production
APP_DEBUG=false
APP_NAME="Michael Schiemer"
APP_URL=https://michaelschiemer.de
```

**Database**:
```bash
DB_HOST=postgres
DB_PORT=5432
DB_DATABASE=michaelschiemer
DB_USERNAME=postgres
# DB_PASSWORD via secrets: DB_PASSWORD_FILE=/run/secrets/db_password
```

**Cache**:
```bash
REDIS_HOST=redis
REDIS_PORT=6379
# REDIS_PASSWORD via secrets: REDIS_PASSWORD_FILE=/run/secrets/redis_password
```

**Image Tags** (Staging/Production):
```bash
GIT_COMMIT=abc123def456   # Staging
GIT_TAG=v2.1.0            # Production
```

## Service Dependencies and Startup Order

### Dependency Graph

```
traefik (independent)
minio (independent)
postgres (external stack)
redis (independent)
        ↓
php (depends on: postgres, redis)
        ↓
web (depends on: php)

queue-worker (depends on: php, postgres, redis)
```

### docker-compose.yml Dependency Specification

```yaml
services:
  php:
    depends_on:
      postgres:
        condition: service_healthy
      redis:
        condition: service_started

  web:
    depends_on:
      php:
        condition: service_started

  queue-worker:
    depends_on:
      php:
        condition: service_started
      postgres:
        condition: service_healthy
      redis:
        condition: service_started
```

**Health Checks**:
- PostgreSQL: `pg_isready` check
- Redis: `redis-cli PING` check
- PHP-FPM: Custom health check script
- Nginx: `curl http://localhost/health`
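A hedged sketch of how the first two checks translate into Compose `healthcheck` stanzas (intervals and the database user are illustrative):

```yaml
services:
  postgres:
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 10s
      timeout: 5s
      retries: 5

  redis:
    healthcheck:
      # With requirepass enabled, redis-cli needs the password; supplying
      # it via the REDISCLI_AUTH environment variable is one option.
      test: ["CMD-SHELL", "redis-cli ping | grep -q PONG"]
      interval: 10s
      timeout: 5s
      retries: 5
```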
## CI/CD Pipeline Design

### Gitea Actions Workflows

**Directory Structure**:
```
.gitea/
└── workflows/
    ├── build-app.yml            # Build & Test
    ├── deploy-staging.yml       # Deploy to Staging
    └── deploy-production.yml    # Deploy to Production
```

### Workflow 1: Build & Test (`build-app.yml`)

**Triggers**:
- Push to any branch
- Pull request to `develop` or `main`

**Steps**:
1. Checkout code
2. Setup PHP 8.5, Node.js
3. Install dependencies (`composer install`, `npm install`)
4. Run PHP tests (`./vendor/bin/pest`)
5. Run JS tests (`npm test`)
6. Build frontend assets (`npm run build`)
7. Build Docker images (`docker build -t registry.michaelschiemer.de/php:${COMMIT_SHA} .`)
8. Push to Gitea registry
9. Security scan (Trivy)

**Artifacts**: Docker images tagged with Git commit SHA
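A minimal sketch of what `build-app.yml` could look like, condensed to the test and build-and-push steps (the runner label and the registry credential secrets are assumptions, not defined by this document):

```yaml
name: Build & Test
on:
  push:
  pull_request:
    branches: [develop, main]

jobs:
  build:
    runs-on: ubuntu-latest                # assumed runner label
    steps:
      - uses: actions/checkout@v4
      - name: Install dependencies and run tests
        run: |
          composer install
          ./vendor/bin/pest
      - name: Build and push image
        run: |
          # REGISTRY_USER / REGISTRY_PASSWORD are assumed repository secrets
          echo "${{ secrets.REGISTRY_PASSWORD }}" | \
            docker login registry.michaelschiemer.de -u "${{ secrets.REGISTRY_USER }}" --password-stdin
          docker build -t registry.michaelschiemer.de/php:${{ github.sha }} .
          docker push registry.michaelschiemer.de/php:${{ github.sha }}
```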
### Workflow 2: Deploy to Staging (`deploy-staging.yml`)

**Triggers**:
- Merge to `develop` branch (automatic)
- Manual trigger via Gitea UI

**Steps**:
1. Checkout code
2. Pull Docker images from registry (`registry.michaelschiemer.de/php:${COMMIT_SHA}`)
3. SSH to staging server
4. Export environment variables (`GIT_COMMIT=${COMMIT_SHA}`)
5. Run docker compose: `docker compose -f docker-compose.base.yml -f docker-compose.staging.yml --env-file .env.staging up -d`
6. Wait for health checks
7. Run smoke tests
8. Notify via webhook (success/failure)

**Rollback**: Keep previous image tag, redeploy on failure
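A corresponding sketch for `deploy-staging.yml` (the SSH action and the host/key secret names are illustrative assumptions):

```yaml
name: Deploy to Staging
on:
  push:
    branches: [develop]
  workflow_dispatch:        # manual trigger via Gitea UI

jobs:
  deploy:
    runs-on: ubuntu-latest                       # assumed runner label
    steps:
      - name: Deploy over SSH
        uses: appleboy/ssh-action@v1             # one common SSH step; version illustrative
        with:
          host: ${{ secrets.STAGING_HOST }}      # assumed secret
          username: deploy
          key: ${{ secrets.STAGING_SSH_KEY }}    # assumed secret
          script: |
            export GIT_COMMIT=${{ github.sha }}
            docker compose -f docker-compose.base.yml \
              -f docker-compose.staging.yml \
              --env-file .env.staging up -d
```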
### Workflow 3: Deploy to Production (`deploy-production.yml`)

**Triggers**:
- Git tag push (e.g., `v2.1.0`) - **manual approval required**
- Manual trigger via Gitea UI

**Steps**:
1. **Manual Approval Gate** - require approval from maintainer
2. Checkout code at tag
3. Pull Docker images from registry (`registry.michaelschiemer.de/php:${GIT_TAG}`)
4. SSH to production server
5. Create backup of current deployment
6. Export environment variables (`GIT_TAG=${TAG}`)
7. Run docker compose: `docker compose -f docker-compose.base.yml -f docker-compose.prod.yml --env-file .env.production up -d`
8. Wait for health checks (extended timeout)
9. Run smoke tests
10. Monitor metrics for 5 minutes
11. Notify via webhook (success/failure)

**Rollback Procedure**:
1. Detect deployment failure (health checks fail)
2. Automatically revert to previous Git tag
3. Run deployment with previous image
4. Notify team of rollback
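Conceptually the rollback is just a redeploy with the previous tag; a hedged sketch, assuming the previous tag is recorded on the server at deploy time (the file location mirrors the backup layout described in the Ansible README, but is an assumption here):

```yaml
# Fragment of deploy-production.yml: runs only when the deploy job failed.
rollback:
  needs: deploy
  if: ${{ failure() }}
  runs-on: ubuntu-latest                        # assumed runner label
  steps:
    - name: Redeploy previous release over SSH
      uses: appleboy/ssh-action@v1              # illustrative SSH step
      with:
        host: ${{ secrets.PRODUCTION_HOST }}    # assumed secret
        username: deploy
        key: ${{ secrets.PRODUCTION_SSH_KEY }}  # assumed secret
        script: |
          # current_image.txt is assumed to be written by the last
          # successful deployment before the failed one.
          export GIT_TAG=$(cat /home/deploy/backups/current_image.txt)
          docker compose -f docker-compose.base.yml \
            -f docker-compose.prod.yml \
            --env-file .env.production up -d
```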
### Deployment Safety

**Blue-Green Deployment** (Future Enhancement):
- Run new version alongside old version
- Switch traffic via Traefik routing
- Instant rollback by switching back

**Canary Deployment** (Future Enhancement):
- Route 10% traffic to new version
- Monitor error rates
- Gradually increase to 100%

## Network Architecture

### Network Definitions

```yaml
networks:
  traefik-public:
    external: true
    name: traefik-public

  backend:
    internal: true
    driver: bridge

  cache:
    internal: true
    driver: bridge

  app-internal:
    external: true
    name: app-internal
```

### Network Isolation

**traefik-public** (External):
- Services: traefik, web
- Purpose: Ingress from internet
- Isolation: Public-facing only

**backend** (Internal):
- Services: web, php, queue-worker
- Purpose: Application communication
- Isolation: No external access

**cache** (Internal):
- Services: redis
- Purpose: Cache isolation
- Isolation: Only accessible via backend network bridge

**app-internal** (External):
- Services: php, queue-worker, postgres (external stack)
- Purpose: Shared PostgreSQL access across environments
- Isolation: Multi-environment shared resource

### Service Discovery

Docker DNS automatically resolves service names:
- `php` resolves to the PHP-FPM container IP
- `redis` resolves to the Redis container IP
- `postgres` resolves to the external PostgreSQL stack IP

No manual IP configuration required.

## Migration from Legacy System

### Migration Steps

1. ✅ **COMPLETED** - Archive legacy deployment to `deployment/legacy/`
2. ✅ **COMPLETED** - Document legacy issues in `ARCHITECTURE_ANALYSIS.md`
3. ✅ **COMPLETED** - Design new architecture (this document)
4. ⏳ **NEXT** - Implement `docker-compose.base.yml`
5. ⏳ **NEXT** - Implement `docker-compose.local.yml`
6. ⏳ **NEXT** - Test local environment
7. ⏳ **PENDING** - Implement `docker-compose.staging.yml`
8. ⏳ **PENDING** - Deploy to staging server
9. ⏳ **PENDING** - Implement `docker-compose.prod.yml`
10. ⏳ **PENDING** - Setup Gitea Actions workflows
11. ⏳ **PENDING** - Deploy to production via CI/CD

### Data Migration

**Database**:
- Export from legacy PostgreSQL: `pg_dump`
- Import to new PostgreSQL: `pg_restore`
- Verify data integrity

**Secrets**:
- Extract secrets from legacy Ansible Vault
- Create new secret files in `deployment/secrets/`
- Update environment variables

**SSL Certificates**:
- Reuse existing Let's Encrypt certificates (copy `acme.json`)
- Or regenerate via Traefik ACME

## Comparison: Legacy vs New

| Aspect | Legacy System | New Architecture |
|--------|---------------|------------------|
| **Orchestration** | Docker Swarm + Docker Compose (confused) | Docker Compose only |
| **Deployment** | Ansible playbooks (unclear responsibility) | Gitea Actions CI/CD |
| **Environment Files** | Scattered stack files (9+ directories) | 3 environment files (local/staging/prod) |
| **Volume Mounts** | Relative paths (causing failures) | Absolute paths + named volumes |
| **Secrets** | Docker Swarm secrets (not working) | File-based secrets via `*_FILE` |
| **Networks** | Unclear dependencies | Explicit network definitions |
| **SSL** | Let's Encrypt (working) | Let's Encrypt (preserved) |
| **PostgreSQL** | Embedded in each stack | External shared stack |

## Benefits of New Architecture

1. **Clarity**: Single source of truth per environment
2. **Maintainability**: Clear separation of concerns (Ansible vs CI/CD)
3. **Debuggability**: Explicit configuration, no hidden magic
4. **Scalability**: Easy to add new environments or services
5. **Security**: File-based secrets, network isolation
6. **CI/CD Integration**: Automated deployments via Gitea Actions
7. **Rollback Safety**: Git-tagged releases, health checks

## Next Steps

1. **Implement Base Configuration**: Create `docker-compose.base.yml`
2. **Test Local Environment**: Verify `docker-compose.local.yml` works
3. **Setup Staging**: Deploy to staging server, test deployment pipeline
4. **Production Deployment**: Manual approval, monitoring
5. **Documentation**: Update README with new deployment procedures

---

**References**:
- Legacy system analysis: `deployment/legacy/ARCHITECTURE_ANALYSIS.md`
- Docker Compose documentation: https://docs.docker.com/compose/
- Traefik v3 documentation: https://doc.traefik.io/traefik/
- Gitea Actions: https://docs.gitea.com/usage/actions/overview

deployment/legacy/ansible/ansible/.gitignore (vendored, new file, 2 lines)
@@ -0,0 +1,2 @@
# Ansible temporary directory
.ansible/

deployment/legacy/ansible/ansible/README.md (new file, 378 lines)
@@ -0,0 +1,378 @@
# Ansible Deployment Configuration

This directory contains Ansible playbooks and configuration for deploying the Custom PHP Framework to production.

## Directory Structure

```
deployment/ansible/
├── ansible.cfg                        # Ansible configuration
├── inventory/
│   ├── production.yml                 # Production server inventory
│   └── local.yml                      # Local testing inventory
├── playbooks/
│   ├── deploy-update.yml              # Deploy application updates
│   ├── system-maintenance.yml         # OS updates & maintenance
│   ├── rollback.yml                   # Roll back deployments
│   ├── setup-infrastructure.yml       # Provision core stacks
│   ├── setup-production-secrets.yml   # Deploy secrets
│   ├── setup-wireguard.yml            # Set up WireGuard VPN server
│   ├── add-wireguard-client.yml       # Add WireGuard client
│   ├── sync-code.yml                  # Git-based code sync
│   └── README-WIREGUARD.md            # WireGuard documentation
├── scripts/                           # Helper scripts for secrets & credentials
├── roles/                             # Reusable roles (e.g. application stack)
├── secrets/
│   ├── .gitignore                     # Prevent committing secrets
│   └── production.vault.yml.example   # Example vault file
└── templates/
    ├── application.env.j2             # Application stack environment
    ├── gitea-app.ini.j2               # Gitea configuration
    ├── minio.env.j2                   # MinIO environment
    ├── monitoring.env.j2              # Monitoring stack environment
    ├── wireguard-client.conf.j2       # WireGuard client config
    └── wireguard-server.conf.j2       # WireGuard server config
```

## Roles

Stack-specific tasks live in `roles/` (e.g. `application`, `traefik`, `registry`). Playbooks such as `setup-infrastructure.yml` import these roles directly. The application role can be configured with variables such as `application_sync_files=false` or `application_compose_recreate="always"` (see `playbooks/deploy-update.yml` for an example, and the sketch below). The new `system` role keeps operating-system packages up to date and configures optional unattended upgrades before Docker stacks are restarted.
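A minimal sketch of importing the application role with those variables (the play structure and host group are illustrative; the variable names come from the paragraph above):

```yaml
# Illustrative play; the real entry points are setup-infrastructure.yml
# and deploy-update.yml.
- hosts: production
  roles:
    - role: application
      vars:
        application_sync_files: false
        application_compose_recreate: "always"
```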
## Prerequisites

1. **Ansible Installed**:
   ```bash
   pip install ansible
   ```

2. **SSH Access**:
   - SSH key configured at `~/.ssh/production`
   - Key added to the production server's authorized_keys for the `deploy` user

3. **Ansible Vault Password**:
   - Create a `.vault_pass` file in the `secrets/` directory
   - Add the vault password to this file (one line)
   - File is gitignored for security
   - 📖 **Detailed documentation:** [docs/guides/vault-password.md](../docs/guides/vault-password.md)

## Setup Instructions

### 1. Create Production Secrets

```bash
cd deployment/ansible/secrets

# Copy example file
cp production.vault.yml.example production.vault.yml

# Edit with your actual secrets
nano production.vault.yml

# Encrypt the file
ansible-vault encrypt production.vault.yml
# Enter vault password when prompted
```

### 2. Store Vault Password

```bash
# Create vault password file
echo "your-vault-password-here" > secrets/.vault_pass

# Secure the file
chmod 600 secrets/.vault_pass
```

**📖 For more details:** see [docs/guides/vault-password.md](../docs/guides/vault-password.md)

### 3. Configure SSH Key

```bash
# Generate SSH key if needed
ssh-keygen -t ed25519 -f ~/.ssh/production -C "ansible-deploy"

# Copy public key to production server
ssh-copy-id -i ~/.ssh/production.pub deploy@94.16.110.151
```

## Running Playbooks

### Deploy Production Secrets

**First-time setup** - Deploy secrets to the production server:

```bash
ansible-playbook playbooks/setup-production-secrets.yml \
  --vault-password-file secrets/.vault_pass
```

### Deploy Application Update

**Automated via Gitea Actions** - Or run manually:

```bash
ansible-playbook playbooks/deploy-update.yml \
  -e "image_tag=sha-abc123" \
  -e "git_commit_sha=abc123" \
  -e "deployment_timestamp=$(date -u +%Y-%m-%dT%H:%M:%SZ)" \
  -e "docker_registry_username=gitea-user" \
  -e "docker_registry_password=your-registry-password"
```

### Rollback Deployment

**Roll back to the previous version**:

```bash
ansible-playbook playbooks/rollback.yml
```

**Roll back to a specific version**:

```bash
ansible-playbook playbooks/rollback.yml \
  -e "rollback_to_version=2025-01-28T15-30-00"
```

### Setup WireGuard VPN

**First-time setup** - Install the WireGuard VPN server:

```bash
ansible-playbook -i inventory/production.yml playbooks/setup-wireguard.yml
```

**Add a client**:

```bash
ansible-playbook -i inventory/production.yml playbooks/add-wireguard-client.yml \
  -e "client_name=myclient"
```

See [playbooks/README-WIREGUARD.md](playbooks/README-WIREGUARD.md) for detailed instructions.

### Run System Maintenance

Runs the `system` role: refreshes package sources, performs OS upgrades, and optionally enables unattended upgrades.

```bash
ansible-playbook -i inventory/production.yml \
  playbooks/system-maintenance.yml
```

Tip: start with `--check` for a dry run to review pending updates.

## Ansible Vault Operations

### View Encrypted File

```bash
ansible-vault view secrets/production.vault.yml \
  --vault-password-file secrets/.vault_pass
```

### Edit Encrypted File

```bash
ansible-vault edit secrets/production.vault.yml \
  --vault-password-file secrets/.vault_pass
```

### Change Vault Password

```bash
ansible-vault rekey secrets/production.vault.yml \
  --vault-password-file secrets/.vault_pass
```

### Decrypt File (Temporarily)

```bash
ansible-vault decrypt secrets/production.vault.yml \
  --vault-password-file secrets/.vault_pass

# DO NOT COMMIT DECRYPTED FILE!

# Re-encrypt when done
ansible-vault encrypt secrets/production.vault.yml \
  --vault-password-file secrets/.vault_pass
```

## Testing Playbooks

### Test with Check Mode (Dry Run)

```bash
ansible-playbook playbooks/deploy-update.yml \
  --check \
  -e "image_tag=test"
```

### Test Connection

```bash
ansible production -m ping
```

### Verify Inventory

```bash
ansible-inventory --list -y
```

### System Maintenance Dry Run

```bash
ansible-playbook -i inventory/production.yml \
  playbooks/system-maintenance.yml \
  --check
```

## Security Best Practices

1. **Never commit unencrypted secrets**
   - `production.vault.yml` must be encrypted
   - `.vault_pass` is gitignored
   - Use `.example` files for documentation

2. **Rotate secrets regularly**
   - Update the vault file
   - Re-run `setup-production-secrets.yml`
   - Restart affected services

3. **Limit SSH key access**
   - Use a separate SSH key for Ansible
   - Limit the key to the `deploy` user only
   - Consider IP restrictions

4. **Vault password security**
   - Store the vault password in a secure password manager
   - Don't share it via insecure channels
   - Use different passwords for dev/staging/prod

## Troubleshooting

### Vault Decryption Failed

**Error**: `Decryption failed (no vault secrets were found)`

**Solution**:
```bash
# Verify vault password is correct
ansible-vault view secrets/production.vault.yml \
  --vault-password-file secrets/.vault_pass

# If password is wrong, you'll need the original password to decrypt
```

### SSH Connection Failed

**Error**: `Failed to connect to the host`

**Solutions**:
```bash
# Test SSH connection manually
ssh -i ~/.ssh/production deploy@94.16.110.151

# Check SSH key permissions
chmod 600 ~/.ssh/production
chmod 644 ~/.ssh/production.pub

# Verify SSH key is added to server
ssh-copy-id -i ~/.ssh/production.pub deploy@94.16.110.151
```

### Docker Registry Authentication Failed

**Error**: `unauthorized: authentication required`

**Solution**:
```bash
# Verify registry credentials in vault file
ansible-vault view secrets/production.vault.yml \
  --vault-password-file secrets/.vault_pass

# Test registry login manually on production server
docker login registry.michaelschiemer.de
```

### Service Not Starting

**Check service logs**:
```bash
# SSH to production server
ssh -i ~/.ssh/production deploy@94.16.110.151

# Check Docker Compose service logs
docker compose -f {{ app_stack_path }}/docker-compose.yml logs app
docker compose -f {{ app_stack_path }}/docker-compose.yml logs nginx

# Check stack status
docker compose -f {{ app_stack_path }}/docker-compose.yml ps
```

## CI/CD Integration

These playbooks are automatically executed by Gitea Actions workflows:

- **`.gitea/workflows/production-deploy.yml`** - Calls `deploy-update.yml` on push to main
- **`.gitea/workflows/update-production-secrets.yml`** - Calls `setup-production-secrets.yml` on manual trigger
- **`.gitea/workflows/system-maintenance.yml`** - Runs `system-maintenance.yml` on a schedule or manually to keep packages up to date

The vault password is stored as a Gitea Actions secret: `ANSIBLE_VAULT_PASSWORD`

## Inventory Variables

All central variables are maintained in `group_vars/production.yml` and can be overridden in the inventory as needed. Commonly used values (see the sketch after the table):

| Variable | Description | Default |
|----------|-------------|---------|
| `deploy_user_home` | Home directory of the deploy user | `/home/deploy` |
| `stacks_base_path` | Base path for Docker Compose stacks | `/home/deploy/deployment/stacks` |
| `app_stack_path` | Path to the application stack | `/home/deploy/deployment/stacks/production` |
| `backups_path` | Location for deployment backups | `/home/deploy/deployment/backups` |
| `docker_registry` | Internal registry endpoint (local) | `localhost:5000` |
| `docker_registry_external` | External registry endpoint | `registry.michaelschiemer.de` |
| `app_domain` | Production domain | `michaelschiemer.de` |
| `health_check_url` | Health check endpoint | `https://michaelschiemer.de/health` |
| `max_rollback_versions` | Number of retained backups | `5` |
| `system_update_packages` | Enables OS package updates via the `system` role | `true` |
| `system_apt_upgrade` | Value for `apt upgrade` (e.g. `dist`) | `dist` |
| `system_enable_unattended_upgrades` | Enables `unattended-upgrades` | `true` |
| `system_enable_unattended_reboot` | Controls automatic reboots after updates | `false` |
| `system_unattended_reboot_time` | Reboot window (if enabled) | `02:00` |
| `system_enable_unattended_timer` | Enables the systemd timer for apt | `true` |
| `system_enable_docker_prune` | Runs `docker system prune` after updates | `false` |
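A short `group_vars/production.yml` excerpt using the documented defaults (values copied from the table above; in practice you would list only what you override):

```yaml
# group_vars/production.yml - defaults from the table above
deploy_user_home: /home/deploy
stacks_base_path: /home/deploy/deployment/stacks
app_stack_path: /home/deploy/deployment/stacks/production
backups_path: /home/deploy/deployment/backups
docker_registry: localhost:5000
docker_registry_external: registry.michaelschiemer.de
app_domain: michaelschiemer.de
health_check_url: https://michaelschiemer.de/health
max_rollback_versions: 5
system_update_packages: true
system_apt_upgrade: dist
```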
## Backup Management

Backups are automatically created before each deployment:

- **Location**: `/home/deploy/backups/`
- **Retention**: Last 5 versions kept
- **Contents**:
  - `current_image.txt` - Previously deployed image
  - `stack_status.txt` - Stack status before deployment
  - `deployment_metadata.txt` - Deployment details

### List Available Backups

```bash
ssh -i ~/.ssh/production deploy@94.16.110.151 \
  "ls -lh /home/deploy/backups/"
```

### Manual Backup

```bash
ansible-playbook playbooks/deploy-update.yml \
  --tags backup \
  -e "image_tag=current"
```

## Support

For issues with:
- **Playbooks**: Check this README and the playbook comments
- **Vault**: See the Ansible Vault documentation
- **Deployment**: Review the Gitea Actions logs
- **Production**: SSH to the server and check the Docker logs

deployment/legacy/ansible/ansible/ansible.cfg (new file, 18 lines)
@@ -0,0 +1,18 @@
[defaults]
inventory = inventory/production.yml
host_key_checking = False
remote_user = deploy
private_key_file = ~/.ssh/production
timeout = 30
retry_files_enabled = False
gathering = smart
fact_caching = jsonfile
fact_caching_connection = /tmp/ansible_facts
fact_caching_timeout = 3600
roles_path = roles
stdout_callback = default_with_clean_msg
callback_plugins = ./callback_plugins

[ssh_connection]
pipelining = True
control_path = /tmp/ansible-ssh-%%h-%%p-%%r

deployment/legacy/ansible/ansible/callback_plugins/README.md (new file, 282 lines)
@@ -0,0 +1,282 @@
# Ansible Callback Plugin - default_with_clean_msg

**As of:** 2025-11-07
**Status:** Documentation of the custom callback plugin

---

## Overview

The `default_with_clean_msg` callback plugin extends Ansible's default output with improved formatting for multiline `msg` fields. Multiline messages are displayed as readable blocks with borders instead of escaped newline characters.

**File:** `deployment/ansible/callback_plugins/default_with_clean_msg.py`

---

## Purpose

### Problem

Ansible's default callback plugin renders multiline `msg` fields like this:
```
"msg": "Line 1\nLine 2\nLine 3"
```

This makes multiline debug output hard to read and copy.

### Solution

The custom plugin formats multiline messages as readable blocks:
```
================================================================================
Line 1
Line 2
Line 3
================================================================================
```

---

## Functionality

### Multiline Message Formatting

**Automatic detection:**
- Only messages with more than one line are formatted
- Single-line messages remain unchanged

**Format:**
- Border above and below (made of `=` characters)
- Maximum border width: 80 characters
- Color coding according to task status

### Method Overrides

The plugin overrides the following methods of the default callback:

- `v2_playbook_on_task_start` - task start
- `v2_runner_on_start` - runner start
- `v2_runner_on_ok` - successful tasks
- `v2_runner_on_failed` - failed tasks
- `v2_runner_on_skipped` - skipped tasks
- `v2_runner_on_unreachable` - unreachable hosts

**Reason:** These methods are overridden to avoid warnings that occur when `get_option()` is called before initialization is complete.

---

## Configuration

### ansible.cfg

**File:** `deployment/ansible/ansible.cfg`

```ini
[defaults]
stdout_callback = default_with_clean_msg
callback_plugins = ./callback_plugins
```

**Important:**
- `stdout_callback` activates the plugin as the default output
- `callback_plugins` points to the plugin directory

### Plugin File

**Path:** `deployment/ansible/callback_plugins/default_with_clean_msg.py`

**Structure:**
- Inherits from `ansible.plugins.callback.default.CallbackModule`
- Overrides specific methods
- Adds a `_print_clean_msg()` method

---

## Usage

### Automatic

The plugin is used automatically when `ansible.cfg` is configured correctly:

```bash
cd deployment/ansible
ansible-playbook -i inventory/production.yml playbooks/setup-infrastructure.yml
```

### Manual

If the plugin is not loaded automatically:

```bash
ansible-playbook \
  --callback-plugin ./callback_plugins \
  --stdout-callback default_with_clean_msg \
  -i inventory/production.yml \
  playbooks/setup-infrastructure.yml
```

---

## Example Output

### Before (default callback)

```
ok: [server] => {
    "msg": "Container Status:\nNAME IMAGE COMMAND SERVICE CREATED STATUS PORTS\nproduction-php-1 localhost:5000/framework:latest \"/usr/local/bin/entr…\" php About a minute ago Restarting (255) 13 seconds ago"
}
```

### After (custom callback)

```
ok: [server] => {
================================================================================
Container Status:
NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS
production-php-1 localhost:5000/framework:latest "/usr/local/bin/entr…" php About a minute ago Restarting (255) 13 seconds ago
================================================================================
}
```

---

## Known Limitations

### Warnings on Option Access

**Problem:** Earlier versions of the plugin called `get_option()` before Ansible's options were fully initialized, which produced warnings:

```
[WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin: 'display_skipped_hosts'
```

**Solution:** The plugin overrides the problematic methods directly, without calling `get_option()`.

### Option Checks

**Current behavior:** The plugin always displays all hosts (ok, changed, skipped), without option checks.

**Reason:** Option checks would require `get_option()`, which causes warnings.

**Workaround:** If option checks are needed, they can be implemented after initialization is complete.

---

## Technical Details

### Inheritance

```python
from ansible.plugins.callback.default import CallbackModule as DefaultCallbackModule

class CallbackModule(DefaultCallbackModule):
    CALLBACK_NAME = 'default_with_clean_msg'

    def _print_clean_msg(self, result, color=C.COLOR_VERBOSE):
        # Custom formatting logic
```

**Advantage:** Inherits all default functionality and only extends the formatting.

### Method Overrides

**Why overrides?**

The default methods call `get_option()` to check whether certain hosts should be displayed. This fails when the options are not yet initialized.

**Solution:** Direct implementation without option checks:

```python
def v2_runner_on_ok(self, result):
    # Own implementation without get_option()
    # ...
    self._print_clean_msg(result, color=color)
```

---

## Development

### Testing the Plugin

```bash
# Plugin directory
cd deployment/ansible/callback_plugins

# Check syntax
python3 -m py_compile default_with_clean_msg.py

# Test with Ansible
cd ..
ansible-playbook -i inventory/production.yml playbooks/check-container-status.yml
```

### Extending the Plugin

**Adding new formatting:**

1. Extend the `_print_clean_msg()` method
2. Implement the new formatting logic
3. Run tests

**Example:**
```python
def _print_clean_msg(self, result, color=C.COLOR_VERBOSE):
    msg_body = result._result.get('msg')
    if isinstance(msg_body, str) and msg_body.strip():
        # Custom formatting logic here
        # ...
```

---

## Troubleshooting

### Plugin Not Loaded

**Problem:** The plugin is not used; output stays default

**Solution:**
1. Check `ansible.cfg`:
   ```ini
   stdout_callback = default_with_clean_msg
   callback_plugins = ./callback_plugins
   ```

2. Check the plugin file:
   ```bash
   ls -la deployment/ansible/callback_plugins/default_with_clean_msg.py
   ```

3. Check the syntax:
   ```bash
   python3 -m py_compile default_with_clean_msg.py
   ```

### Warnings Appear

**Problem:** Warnings such as `'display_skipped_hosts'`

**Solution:** Check the plugin version - it should use method overrides without `get_option()`.

**Current state:** The plugin uses direct overrides without option checks.

---

## Reference

- [Ansible Callback Plugins Documentation](https://docs.ansible.com/ansible/latest/plugins/callback.html)
- [Ansible Callback Development Guide](https://docs.ansible.com/ansible/latest/dev_guide/developing_plugins.html#callback-plugins)
- [Initial Deployment Troubleshooting](../docs/troubleshooting/initial-deployment-issues.md) - Problem 8: Ansible debug messages

---

## Changelog

### 2025-11-07
- Initial version created
- Multiline message formatting implemented
- Method overrides without option checks
- Warnings fixed

Binary file not shown.

deployment/legacy/ansible/ansible/callback_plugins/default_with_clean_msg.py (new file, 181 lines)
@@ -0,0 +1,181 @@
# (c) 2012-2014, Michael DeHaan <michael.dehaan@gmail.com>
# (c) 2017 Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
# Modified to add clean multiline msg formatting
#
# This plugin extends the default callback with enhanced multiline message formatting.
# It suppresses warnings by implementing its own versions of methods that would
# otherwise call get_option() before options are initialized.

from __future__ import (absolute_import, division, print_function)
__metaclass__ = type

DOCUMENTATION = '''
    name: default_with_clean_msg
    type: stdout
    short_description: default Ansible screen output with clean multiline messages
    version_added: historical
    description:
        This is the default output callback for ansible-playbook with enhanced
        multiline message formatting. Multiline messages in msg fields are
        displayed as clean, readable blocks instead of escaped newline characters.
'''

from ansible import constants as C
from ansible.plugins.callback.default import CallbackModule as DefaultCallbackModule


class CallbackModule(DefaultCallbackModule):

    '''
    This extends the default callback with enhanced multiline message formatting.
    Multiline messages in 'msg' fields are displayed as clean, readable blocks.
    '''

    CALLBACK_NAME = 'default_with_clean_msg'

    def _print_clean_msg(self, result, color=C.COLOR_VERBOSE):
        '''
        Print multiline messages in a clean, readable format with borders.
        This makes it easy to read and copy multiline content from debug outputs.
        '''
        msg_body = result._result.get('msg')
        if isinstance(msg_body, str) and msg_body.strip():
            lines = msg_body.strip().splitlines()
            if len(lines) > 1:  # Only format if multiline
                max_len = max(len(line) for line in lines if line.strip())
                if max_len > 0:
                    border = "=" * min(max_len, 80)  # Limit border width
                    self._display.display("\n" + border, color=color)
                    for line in lines:
                        self._display.display(line, color=color)
                    self._display.display(border + "\n", color=color)

    def v2_playbook_on_task_start(self, task, is_conditional):
        # Suppress warnings by implementing our own version that doesn't call get_option early
        # Initialize state if needed
        if not hasattr(self, '_play'):
            self._play = None
        if not hasattr(self, '_last_task_banner'):
            self._last_task_banner = None
        if not hasattr(self, '_task_type_cache'):
            self._task_type_cache = {}

        # Cache task prefix
        self._task_type_cache[task._uuid] = 'TASK'

        # Store task name
        if self._play and hasattr(self._play, 'strategy'):
            from ansible.utils.fqcn import add_internal_fqcns
            if self._play.strategy in add_internal_fqcns(('free', 'host_pinned')):
                self._last_task_name = None
            else:
                self._last_task_name = task.get_name().strip()
        else:
            self._last_task_name = task.get_name().strip()

        # Print task banner (only if we should display it)
        # We skip the parent's check for display_skipped_hosts/display_ok_hosts to avoid warnings
        if self._play and hasattr(self._play, 'strategy'):
            from ansible.utils.fqcn import add_internal_fqcns
            if self._play.strategy not in add_internal_fqcns(('free', 'host_pinned')):
                self._last_task_banner = task._uuid
                self._display.banner('TASK [%s]' % task.get_name().strip())

    def v2_runner_on_start(self, host, task):
        # Suppress warnings by not calling parent if options aren't ready
        # This method is optional and only shows per-host start messages
        # We can safely skip it to avoid warnings
        pass

    def v2_runner_on_ok(self, result):
        # Suppress warnings by implementing our own version that doesn't call get_option
        host_label = self.host_label(result)

        # Handle TaskInclude separately
        from ansible.playbook.task_include import TaskInclude
        if isinstance(result._task, TaskInclude):
            if self._last_task_banner != result._task._uuid:
                self.v2_playbook_on_task_start(result._task, False)
            return

        # Clean results and handle warnings
        self._clean_results(result._result, result._task.action)
        self._handle_warnings(result._result)

        # Handle loop results
        if result._task.loop and 'results' in result._result:
            self._process_items(result)
            return

        # Determine status and color
        if result._result.get('changed', False):
            if self._last_task_banner != result._task._uuid:
                self.v2_playbook_on_task_start(result._task, False)
            self._display.display("changed: [%s]" % host_label, color=C.COLOR_CHANGED)
            color = C.COLOR_CHANGED
        else:
            # Always display ok hosts (skip get_option check to avoid warnings)
            if self._last_task_banner != result._task._uuid:
                self.v2_playbook_on_task_start(result._task, False)
            self._display.display("ok: [%s]" % host_label, color=C.COLOR_OK)
            color = C.COLOR_OK

        # Add our clean message formatting
        self._print_clean_msg(result, color=color)

    def v2_runner_on_failed(self, result, ignore_errors=False):
        # Suppress warnings by implementing our own version
        host_label = self.host_label(result)
        self._clean_results(result._result, result._task.action)

        if self._last_task_banner != result._task._uuid:
            self.v2_playbook_on_task_start(result._task, False)

        self._handle_exception(result._result, use_stderr=False)
|
||||
self._handle_warnings(result._result)
|
||||
|
||||
if result._task.loop and 'results' in result._result:
|
||||
self._process_items(result)
|
||||
else:
|
||||
msg = "fatal: [%s]: FAILED! => %s" % (host_label, self._dump_results(result._result))
|
||||
self._display.display(msg, color=C.COLOR_ERROR)
|
||||
|
||||
if ignore_errors:
|
||||
self._display.display("...ignoring", color=C.COLOR_SKIP)
|
||||
|
||||
# Add our clean message formatting
|
||||
self._print_clean_msg(result, color=C.COLOR_ERROR)
|
||||
|
||||
def v2_runner_on_skipped(self, result):
|
||||
# Suppress warnings by implementing our own version
|
||||
# Always display skipped hosts (skip get_option check to avoid warnings)
|
||||
self._clean_results(result._result, result._task.action)
|
||||
|
||||
if self._last_task_banner != result._task._uuid:
|
||||
self.v2_playbook_on_task_start(result._task, False)
|
||||
|
||||
if result._task.loop is not None and 'results' in result._result:
|
||||
self._process_items(result)
|
||||
else:
|
||||
msg = "skipping: [%s]" % result._host.get_name()
|
||||
if self._run_is_verbose(result):
|
||||
msg += " => %s" % self._dump_results(result._result)
|
||||
self._display.display(msg, color=C.COLOR_SKIP)
|
||||
|
||||
# Add our clean message formatting
|
||||
self._print_clean_msg(result, color=C.COLOR_SKIP)
|
||||
|
||||
def v2_runner_on_unreachable(self, result):
|
||||
# Suppress warnings by implementing our own version
|
||||
if self._last_task_banner != result._task._uuid:
|
||||
self.v2_playbook_on_task_start(result._task, False)
|
||||
|
||||
host_label = self.host_label(result)
|
||||
msg = "fatal: [%s]: UNREACHABLE! => %s" % (host_label, self._dump_results(result._result))
|
||||
self._display.display(msg, color=C.COLOR_UNREACHABLE)
|
||||
|
||||
if result._task.ignore_unreachable:
|
||||
self._display.display("...ignoring", color=C.COLOR_SKIP)
|
||||
|
||||
# Add our clean message formatting
|
||||
self._print_clean_msg(result, color=C.COLOR_UNREACHABLE)
|
||||
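With the plugin active, a multiline `msg` from a debug task renders roughly like this (illustrative output based on the formatting logic above, not captured from a real run):

```
TASK [Show deployment summary] *********************************************
ok: [server]

===================================
Deployment complete
App:    framework
Domain: michaelschiemer.de
===================================
```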
@@ -0,0 +1,110 @@
---
# Production Deployment - Centralized Variables
# These variables are used across all playbooks

# System Maintenance
system_update_packages: true
system_apt_upgrade: dist
system_enable_unattended_upgrades: true
system_enable_unattended_reboot: false
system_unattended_reboot_time: "02:00"
system_enable_unattended_timer: true
system_enable_docker_prune: false

# Deployment Paths
deploy_user_home: "/home/deploy"
stacks_base_path: "/home/deploy/deployment/stacks"
postgresql_production_stack_path: "{{ stacks_base_path }}/postgresql-production"
app_stack_path: "{{ stacks_base_path }}/production"
backups_path: "{{ deploy_user_home }}/deployment/backups"

# Docker Registry
docker_registry: "localhost:5000"
docker_registry_url: "localhost:5000"
docker_registry_external: "registry.michaelschiemer.de"
docker_registry_username_default: "admin"
# docker_registry_password_default should be set in vault as vault_docker_registry_password
# If not using vault, override via -e docker_registry_password_default="your-password"
docker_registry_password_default: ""
registry_auth_path: "{{ stacks_base_path }}/registry/auth"

# Application Configuration
app_name: "framework"
app_domain: "michaelschiemer.de"
app_image: "{{ docker_registry }}/{{ app_name }}"
app_image_external: "{{ docker_registry_external }}/{{ app_name }}"

# Domain Configuration
gitea_domain: "git.michaelschiemer.de"

# Email Configuration
mail_from_address: "noreply@{{ app_domain }}"
acme_email: "kontakt@{{ app_domain }}"

# SSL Certificate Domains
ssl_domains:
  - "{{ gitea_domain }}"
  - "{{ app_domain }}"

# Health Check Configuration
health_check_url: "https://{{ app_domain }}/health"
health_check_retries: 10
health_check_delay: 10

# Rollback Configuration
max_rollback_versions: 5
rollback_timeout: 300

# Wait Timeouts
wait_timeout: 60

# Git Configuration (for sync-code.yml)
git_repository_url_default: "https://{{ gitea_domain }}/michael/michaelschiemer.git"
git_branch_default: "main"
git_token: "{{ vault_git_token | default('') }}"
git_username: "{{ vault_git_username | default('') }}"
git_password: "{{ vault_git_password | default('') }}"

# Database Configuration
db_user_default: "postgres"
db_name_default: "michaelschiemer"
db_host_default: "postgres-production"

# MinIO Object Storage Configuration
minio_root_user: "{{ vault_minio_root_user | default('minioadmin') }}"
minio_root_password: "{{ vault_minio_root_password | default('') }}"
minio_api_domain: "minio-api.michaelschiemer.de"
minio_console_domain: "minio.michaelschiemer.de"

# WireGuard Configuration
wireguard_interface: "wg0"
wireguard_config_path: "/etc/wireguard"
wireguard_port_default: 51820
wireguard_network_default: "10.8.0.0/24"
wireguard_server_ip_default: "10.8.0.1"
wireguard_enable_ip_forwarding: true
wireguard_config_file: "{{ wireguard_config_path }}/{{ wireguard_interface }}.conf"
wireguard_private_key_file: "{{ wireguard_config_path }}/{{ wireguard_interface }}_private.key"
wireguard_public_key_file: "{{ wireguard_config_path }}/{{ wireguard_interface }}_public.key"
wireguard_client_configs_path: "{{ wireguard_config_path }}/clients"

# WireGuard DNS Configuration
# DNS server for VPN clients (points to VPN server IP)
# This ensures internal services are resolved to VPN IPs
wireguard_dns_servers:
  - "{{ wireguard_server_ip_default }}"

# Traefik Configuration
# Disable automatic restarts after config deployment to prevent restart loops
# Set to true only when explicitly needed (e.g., after major config changes)
traefik_auto_restart: false

# Traefik SSL Configuration
# Disable automatic restarts during SSL certificate setup to prevent restart loops
traefik_ssl_restart: false

# Gitea Auto-Restart Configuration
# Set to false to prevent automatic restarts when healthcheck fails
# This prevents restart loops when Gitea is temporarily unavailable (e.g., during Traefik restarts)
# Set to true only when explicitly needed for remediation
gitea_auto_restart: false
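As a quick sanity check, the resolved values of these centralized variables can be inspected ad hoc; a sketch, assuming the production inventory defined further below:

```bash
cd deployment/ansible
# Print the fully templated value of app_image for all production hosts
ansible -i inventory/production.yml production -m debug -a "var=app_image"
```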
@@ -0,0 +1 @@
../../../secrets/production.vault.yml
@@ -0,0 +1,113 @@
---
# Staging Deployment - Centralized Variables
# These variables are used across all staging playbooks

# System Maintenance
system_update_packages: true
system_apt_upgrade: dist
system_enable_unattended_upgrades: true
system_enable_unattended_reboot: false
system_unattended_reboot_time: "02:00"
system_enable_unattended_timer: true
system_enable_docker_prune: false

# Deployment Paths
deploy_user_home: "/home/deploy"
stacks_base_path: "/home/deploy/deployment/stacks"
staging_stack_path: "{{ stacks_base_path }}/staging"
postgresql_staging_stack_path: "{{ stacks_base_path }}/postgresql-staging"
backups_path: "{{ deploy_user_home }}/deployment/backups"

# Docker Registry
docker_registry: "localhost:5000"
docker_registry_url: "localhost:5000"
docker_registry_external: "registry.michaelschiemer.de"
docker_registry_username_default: "admin"
# docker_registry_password_default should be set in vault as vault_docker_registry_password
# If not using vault, override via -e docker_registry_password_default="your-password"
docker_registry_password_default: ""
registry_auth_path: "{{ stacks_base_path }}/registry/auth"

# Application Configuration
app_name: "framework"
app_domain: "staging.michaelschiemer.de"
staging_domain: "{{ app_domain }}"
app_image: "{{ docker_registry }}/{{ app_name }}"
app_image_external: "{{ docker_registry_external }}/{{ app_name }}"

# Domain Configuration
gitea_domain: "git.michaelschiemer.de"

# Email Configuration
mail_from_address: "noreply@{{ app_domain }}"
acme_email: "kontakt@michaelschiemer.de"

# SSL Certificate Domains
ssl_domains:
  - "{{ gitea_domain }}"
  - "{{ app_domain }}"
  - "michaelschiemer.de"

# Health Check Configuration
health_check_url: "https://{{ app_domain }}/health"
health_check_retries: 10
health_check_delay: 10

# Rollback Configuration
max_rollback_versions: 3
rollback_timeout: 300

# Wait Timeouts
wait_timeout: 60

# Git Configuration (for sync-code.yml)
git_repository_url_default: "https://{{ gitea_domain }}/michael/michaelschiemer.git"
git_branch_default: "staging"
git_token: "{{ vault_git_token | default('') }}"
git_username: "{{ vault_git_username | default('') }}"
git_password: "{{ vault_git_password | default('') }}"

# Database Configuration
db_user_default: "postgres"
db_name_default: "michaelschiemer_staging"
db_host_default: "postgres-staging"

# MinIO Object Storage Configuration
minio_root_user: "{{ vault_minio_root_user | default('minioadmin') }}"
minio_root_password: "{{ vault_minio_root_password | default('') }}"
minio_api_domain: "minio-api.michaelschiemer.de"
minio_console_domain: "minio.michaelschiemer.de"

# WireGuard Configuration
wireguard_interface: "wg0"
wireguard_config_path: "/etc/wireguard"
wireguard_port_default: 51820
wireguard_network_default: "10.8.0.0/24"
wireguard_server_ip_default: "10.8.0.1"
wireguard_enable_ip_forwarding: true
wireguard_config_file: "{{ wireguard_config_path }}/{{ wireguard_interface }}.conf"
wireguard_private_key_file: "{{ wireguard_config_path }}/{{ wireguard_interface }}_private.key"
wireguard_public_key_file: "{{ wireguard_config_path }}/{{ wireguard_interface }}_public.key"
wireguard_client_configs_path: "{{ wireguard_config_path }}/clients"

# WireGuard DNS Configuration
# DNS server for VPN clients (points to VPN server IP)
# This ensures internal services are resolved to VPN IPs
wireguard_dns_servers:
  - "{{ wireguard_server_ip_default }}"

# Traefik Configuration
# Disable automatic restarts after config deployment to prevent restart loops
# Set to true only when explicitly needed (e.g., after major config changes)
traefik_auto_restart: false

# Traefik SSL Configuration
# Disable automatic restarts during SSL certificate setup to prevent restart loops
traefik_ssl_restart: false

# Gitea Auto-Restart Configuration
# Set to false to prevent automatic restarts when healthcheck fails
# This prevents restart loops when Gitea is temporarily unavailable (e.g., during Traefik restarts)
# Set to true only when explicitly needed for remediation
gitea_auto_restart: false
11
deployment/legacy/ansible/ansible/inventory/local.yml
Normal file
@@ -0,0 +1,11 @@
---
# Local inventory for running Ansible playbooks on localhost
all:
  hosts:
    localhost:
      ansible_connection: local
      ansible_python_interpreter: "{{ ansible_playbook_python }}"
  children:
    local:
      hosts:
        localhost:
18
deployment/legacy/ansible/ansible/inventory/production.yml
Normal file
@@ -0,0 +1,18 @@
---
all:
  children:
    production:
      hosts:
        server:
          ansible_host: 94.16.110.151
          ansible_user: deploy
          ansible_python_interpreter: /usr/bin/python3
          ansible_ssh_private_key_file: ~/.ssh/production
      vars:
        # Note: Centralized variables are defined in group_vars/production.yml
        # Only override-specific variables should be here
        # Override system_* defaults here when maintenance windows differ

        # Legacy compose_file reference (deprecated)
        # Production now uses docker-compose.base.yml + docker-compose.production.yml from repository root
        # compose_file: "{{ stacks_base_path }}/application/docker-compose.yml"
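A quick connectivity check against this inventory (standard ad-hoc usage, makes no changes):

```bash
cd deployment/ansible
# Verify SSH reachability and Python discovery for the production host
ansible -i inventory/production.yml production -m ping
```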
@@ -0,0 +1,81 @@
# Build Initial Image - Guide

## Overview

This playbook builds the initial Docker image for the framework and pushes it to the local registry (`localhost:5000`).

## Prerequisites

1. **Registry must be running**: The registry must already be deployed (via `setup-infrastructure.yml`)
2. **Vault password**: `vault_docker_registry_password` must be set in the vault file
3. **Git access**: The server must have access to the Git repository

## Usage

### Default (main branch)

```bash
cd deployment/ansible

ansible-playbook -i inventory/production.yml \
  playbooks/build-initial-image.yml \
  --vault-password-file secrets/.vault_pass
```

### With a specific branch

```bash
ansible-playbook -i inventory/production.yml \
  playbooks/build-initial-image.yml \
  --vault-password-file secrets/.vault_pass \
  -e "build_repo_branch=staging"
```

### With a specific image tag

```bash
ansible-playbook -i inventory/production.yml \
  playbooks/build-initial-image.yml \
  --vault-password-file secrets/.vault_pass \
  -e "build_image_tag=v1.0.0"
```

## What the playbook does

1. ✅ Loads vault secrets (registry credentials)
2. ✅ Clones/updates the Git repository
3. ✅ Checks that `Dockerfile.production` exists
4. ✅ Logs in to the registry
5. ✅ Builds the Docker image
6. ✅ Pushes the image to the registry
7. ✅ Verifies that the image exists (see the manual sketch after this list)
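For orientation, the build/push steps correspond roughly to these manual commands; a sketch, assuming the repository is already checked out on the server and the image is tagged `latest`:

```bash
# Log in to the local registry (step 4)
docker login localhost:5000 -u admin

# Build the image from the production Dockerfile (step 5)
docker build -f Dockerfile.production -t localhost:5000/framework:latest .

# Push it to the registry (step 6)
docker push localhost:5000/framework:latest

# Verify the tag is listed by the registry API (step 7)
curl -s http://localhost:5000/v2/framework/tags/list
```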
## After the build

After a successful build you can deploy the application stack:

```bash
ansible-playbook -i inventory/production.yml \
  playbooks/setup-infrastructure.yml \
  --vault-password-file secrets/.vault_pass
```

## Troubleshooting

### Registry login fails

- Check that `vault_docker_registry_password` is set in the vault file
- Check that the registry is running: `docker ps | grep registry`
- Check that the registry is reachable: `curl http://localhost:5000/v2/`

### Dockerfile.production not found

- Check that the branch exists: `git ls-remote --heads <repo-url>`
- Check that `Dockerfile.production` exists in the repository

### Build fails

- Check the Docker logs on the server
- Check that there is enough disk space: `df -h`
- Check that Docker Buildx is installed: `docker buildx version`

213
deployment/legacy/ansible/ansible/playbooks/CLEANUP_SUMMARY.md
Normal file
@@ -0,0 +1,213 @@
# Playbook Cleanup & Server Redeploy - Summary

## Completed Tasks

### Phase 1: Playbook Cleanup ✅

#### 1.1 Consolidated redundant diagnose playbooks
- ✅ Created `diagnose/gitea.yml` - Consolidates:
  - `diagnose-gitea-timeouts.yml`
  - `diagnose-gitea-timeout-deep.yml`
  - `diagnose-gitea-timeout-live.yml`
  - `diagnose-gitea-timeouts-complete.yml`
  - `comprehensive-gitea-diagnosis.yml`
- ✅ Uses tags `deep` and `complete` for selective execution
- ✅ Removed redundant playbooks

#### 1.2 Consolidated redundant fix playbooks
- ✅ Created `manage/gitea.yml` - Consolidates:
  - `fix-gitea-timeouts.yml`
  - `fix-gitea-traefik-connection.yml`
  - `fix-gitea-ssl-routing.yml`
  - `fix-gitea-servers-transport.yml`
  - `fix-gitea-complete.yml`
  - `restart-gitea-complete.yml`
  - `restart-gitea-with-cache.yml`
- ✅ Uses tags: `restart`, `fix-timeouts`, `fix-ssl`, `fix-servers-transport`, `complete`
- ✅ Removed redundant playbooks

#### 1.3 Consolidated Traefik diagnose/fix playbooks
- ✅ Created `diagnose/traefik.yml` - Consolidates:
  - `diagnose-traefik-restarts.yml`
  - `find-traefik-restart-source.yml`
  - `monitor-traefik-restarts.yml`
  - `monitor-traefik-continuously.yml`
  - `verify-traefik-fix.yml`
- ✅ Created `manage/traefik.yml` - Consolidates:
  - `stabilize-traefik.yml`
  - `disable-traefik-auto-restarts.yml`
- ✅ Uses tags: `restart-source`, `monitor`, `stabilize`, `disable-auto-restart`
- ✅ Removed redundant playbooks

#### 1.4 Removed outdated/redundant playbooks
- ✅ Removed `update-gitea-traefik-service.yml` (deprecated)
- ✅ Removed `ensure-gitea-traefik-discovery.yml` (redundant)
- ✅ Removed `test-gitea-after-fix.yml` (temporary)
- ✅ Removed `find-ansible-automation-source.yml` (temporary)

#### 1.5 Created new directory structure
- ✅ Created `playbooks/diagnose/` directory
- ✅ Created `playbooks/manage/` directory
- ✅ Created `playbooks/setup/` directory
- ✅ Created `playbooks/maintenance/` directory
- ✅ Created `playbooks/deploy/` directory

#### 1.6 Moved playbooks
- ✅ `setup-infrastructure.yml` → `setup/infrastructure.yml`
- ✅ `deploy-complete.yml` → `deploy/complete.yml`
- ✅ `deploy-image.yml` → `deploy/image.yml`
- ✅ `deploy-application-code.yml` → `deploy/code.yml`
- ✅ `setup-ssl-certificates.yml` → `setup/ssl.yml`
- ✅ `setup-gitea-initial-config.yml` → `setup/gitea.yml`
- ✅ `cleanup-all-containers.yml` → `maintenance/cleanup.yml`

#### 1.7 Updated README
- ✅ Updated `playbooks/README.md` with new structure
- ✅ Documented consolidated playbooks
- ✅ Added usage examples with tags
- ✅ Listed removed/consolidated playbooks

### Phase 2: Server Redeploy Preparation ✅

#### 2.1 Created backup playbook
- ✅ Created `maintenance/backup-before-redeploy.yml`
- ✅ Backs up:
  - Gitea data (volumes)
  - SSL certificates (acme.json)
  - Gitea configuration (app.ini)
  - Traefik configuration
  - PostgreSQL data (if applicable)
- ✅ Includes backup verification

#### 2.2 Created redeploy playbook
- ✅ Created `setup/redeploy-traefik-gitea-clean.yml`
- ✅ Features:
  - Automatic backup (optional)
  - Stop and remove containers (preserves volumes/acme.json)
  - Sync configurations
  - Redeploy stacks
  - Restore Gitea configuration
  - Verify service discovery
  - Final tests

#### 2.3 Created redeploy guide
- ✅ Created `setup/REDEPLOY_GUIDE.md`
- ✅ Includes:
  - Step-by-step guide
  - Prerequisites
  - Backup verification
  - Rollback procedure
  - Troubleshooting
  - Common issues

#### 2.4 Created rollback playbook
- ✅ Created `maintenance/rollback-redeploy.yml`
- ✅ Features:
  - Restore from backup
  - Restore volumes, configurations, SSL certificates
  - Restart stacks
  - Verification

## New Playbook Structure

```
playbooks/
├── setup/                    # Initial setup
│   ├── infrastructure.yml
│   ├── gitea.yml
│   ├── ssl.yml
│   ├── redeploy-traefik-gitea-clean.yml
│   └── REDEPLOY_GUIDE.md
├── deploy/                   # Deployment
│   ├── complete.yml
│   ├── image.yml
│   └── code.yml
├── manage/                   # Management (consolidated)
│   ├── traefik.yml
│   └── gitea.yml
├── diagnose/                 # Diagnosis (consolidated)
│   ├── gitea.yml
│   └── traefik.yml
└── maintenance/              # Maintenance
    ├── backup.yml
    ├── backup-before-redeploy.yml
    ├── cleanup.yml
    ├── rollback-redeploy.yml
    └── system.yml
```

## Usage Examples

### Gitea Diagnosis
```bash
# Basic
ansible-playbook -i inventory/production.yml playbooks/diagnose/gitea.yml

# Deep
ansible-playbook -i inventory/production.yml playbooks/diagnose/gitea.yml --tags deep

# Complete
ansible-playbook -i inventory/production.yml playbooks/diagnose/gitea.yml --tags complete
```

### Gitea Management
```bash
# Restart
ansible-playbook -i inventory/production.yml playbooks/manage/gitea.yml --tags restart

# Fix timeouts
ansible-playbook -i inventory/production.yml playbooks/manage/gitea.yml --tags fix-timeouts

# Complete fix
ansible-playbook -i inventory/production.yml playbooks/manage/gitea.yml --tags complete
```

### Redeploy
```bash
# With automatic backup
ansible-playbook -i inventory/production.yml playbooks/setup/redeploy-traefik-gitea-clean.yml \
  --vault-password-file secrets/.vault_pass

# With existing backup
ansible-playbook -i inventory/production.yml playbooks/setup/redeploy-traefik-gitea-clean.yml \
  --vault-password-file secrets/.vault_pass \
  -e "backup_name=redeploy-backup-1234567890" \
  -e "skip_backup=true"
```

### Rollback
```bash
ansible-playbook -i inventory/production.yml playbooks/maintenance/rollback-redeploy.yml \
  --vault-password-file secrets/.vault_pass \
  -e "backup_name=redeploy-backup-1234567890"
```

## Statistics

- **Consolidated playbooks created**: 4 (diagnose/gitea.yml, diagnose/traefik.yml, manage/gitea.yml, manage/traefik.yml)
- **Redeploy playbooks created**: 3 (redeploy-traefik-gitea-clean.yml, backup-before-redeploy.yml, rollback-redeploy.yml)
- **Redundant playbooks removed**: ~20+
- **Playbooks moved to new structure**: 7
- **Documentation created**: 2 (README.md updated, REDEPLOY_GUIDE.md)

## Next Steps

1. ✅ Test consolidated playbooks (dry-run where possible)
2. ✅ Verify redeploy playbook works correctly
3. ✅ Update CI/CD workflows to use new playbook paths if needed
4. ⏳ Perform actual server redeploy when ready

## Notes

- All consolidated playbooks use tags for selective execution
- Old wrapper playbooks (e.g., `restart-traefik.yml`) still exist and work
- Backup playbook preserves all critical data
- Redeploy playbook includes comprehensive verification
- Rollback playbook allows quick recovery if needed

419
deployment/legacy/ansible/ansible/playbooks/README-WIREGUARD.md
Normal file
@@ -0,0 +1,419 @@
# WireGuard VPN Setup

WireGuard VPN server installation and configuration via Ansible.

## Overview

This Ansible setup installs and configures a WireGuard VPN server on the production server to allow secure access to internal services.

## Playbooks

### 1. setup-wireguard.yml

Installs and configures the WireGuard VPN server.

**Features:**
- Installs WireGuard and tools
- Generates server keys (if not present)
- Configures the WireGuard server
- Enables IP forwarding
- Configures NAT (masquerading)
- Opens the firewall port (51820/udp)
- Starts the WireGuard service

**Usage:**
```bash
cd deployment/ansible
ansible-playbook -i inventory/production.yml playbooks/setup-wireguard.yml
```

**Variables:**
- `wireguard_port`: Port for WireGuard (default: 51820)
- `wireguard_network`: VPN network (default: 10.8.0.0/24)
- `wireguard_server_ip`: Server IP inside the VPN (default: 10.8.0.1)

**Example with custom parameters:**
```bash
ansible-playbook -i inventory/production.yml playbooks/setup-wireguard.yml \
  -e "wireguard_port=51820" \
  -e "wireguard_network=10.8.0.0/24" \
  -e "wireguard_server_ip=10.8.0.1"
```

### 2. add-wireguard-client.yml

Adds a new client to the WireGuard server.

**Features:**
- Generates client keys
- Adds the client to the server config
- Creates the client configuration file
- Generates a QR code (if qrencode is installed)
- Restarts the WireGuard service

**Usage:**
```bash
cd deployment/ansible
ansible-playbook -i inventory/production.yml playbooks/add-wireguard-client.yml \
  -e "client_name=myclient"
```

**Optional parameters:**
- `client_ip`: Specific client IP (default: calculated automatically)
- `allowed_ips`: Allowed IP ranges (default: the whole VPN network)

**Example with a specific IP:**
```bash
ansible-playbook -i inventory/production.yml playbooks/add-wireguard-client.yml \
  -e "client_name=myclient" \
  -e "client_ip=10.8.0.2"
```

## Important Security Notes

### SSH access remains available

**IMPORTANT**: The WireGuard configuration does NOT change how SSH is accessed:

- ✅ SSH via the server's regular IP remains fully functional
- ✅ By default WireGuard only routes the VPN network (10.8.0.0/24)
- ✅ Normal internet routes are not changed
- ✅ Firewall rules for SSH (port 22) are NOT removed or blocked

The client configuration uses `AllowedIPs = 10.8.0.0/24` by default, which means only traffic destined for the VPN network is routed through WireGuard. All other connections (including SSH) continue to use the normal internet connection.

**To route SSH entirely through the VPN** (not recommended for the initial installation):
```bash
ansible-playbook ... -e "allowed_ips=0.0.0.0/0"
```

## Directory Structure

After installation:

```
/etc/wireguard/
├── wg0.conf              # Server configuration
├── wg0_private.key       # Server private key (600)
├── wg0_public.key        # Server public key (644)
└── clients/              # Client configurations
    ├── client1.conf      # Client 1 config
    └── client2.conf      # Client 2 config
```

## Using a Client Configuration

### 1. Copy the config file to the client

```bash
# From the Ansible control machine
scp -i ~/.ssh/production \
  deploy@94.16.110.151:/etc/wireguard/clients/myclient.conf \
  ~/myclient.conf
```

### 2. Install WireGuard on the client

**Linux:**
```bash
sudo apt install wireguard wireguard-tools  # Ubuntu/Debian
# or
sudo yum install wireguard-tools            # CentOS/RHEL
```

**macOS:**
```bash
brew install wireguard-tools
```

**Windows:**
Download from https://www.wireguard.com/install/

### 3. Connect to the VPN

**Linux/macOS:**
```bash
sudo wg-quick up ~/myclient.conf
# or
sudo wg-quick up myclient
```

**Windows:**
Import the `.conf` file into the WireGuard app.

### 4. Test the connection

```bash
# Ping the server
ping 10.8.0.1

# Check the status
sudo wg show

# Disconnect the VPN
sudo wg-quick down myclient
```

## QR Code for Mobile Clients

If `qrencode` is installed, a QR code is displayed automatically when a client is added:

```bash
ansible-playbook -i inventory/production.yml playbooks/add-wireguard-client.yml \
  -e "client_name=myphone"
```

The QR code can be scanned with the WireGuard mobile app (iOS/Android).

## Firewall Configuration

The playbook automatically opens the WireGuard port (51820/udp) in UFW, if installed.

**Manual firewall rules:**

```bash
# UFW
sudo ufw allow 51820/udp comment 'WireGuard VPN'

# iptables directly
sudo iptables -A INPUT -p udp --dport 51820 -j ACCEPT
```

## Split-Tunnel Routing & NAT Fix

### A. Quick fix commands (manually on the server)
```bash
WAN_IF=${WAN_IF:-eth0}
WG_IF=${WG_IF:-wg0}
WG_NET=${WG_NET:-10.8.0.0/24}
WG_PORT=${WG_PORT:-51820}
EXTRA_NETS=${EXTRA_NETS:-"192.168.178.0/24 172.20.0.0/16"}

sudo sysctl -w net.ipv4.ip_forward=1
sudo tee /etc/sysctl.d/99-${WG_IF}-forward.conf >/dev/null <<'EOF'
# WireGuard forwarding
net.ipv4.ip_forward=1
EOF
sudo sysctl --system

# iptables variant
sudo iptables -t nat -C POSTROUTING -s ${WG_NET} -o ${WAN_IF} -j MASQUERADE 2>/dev/null \
  || sudo iptables -t nat -A POSTROUTING -s ${WG_NET} -o ${WAN_IF} -j MASQUERADE
sudo iptables -C FORWARD -i ${WG_IF} -s ${WG_NET} -o ${WAN_IF} -j ACCEPT 2>/dev/null \
  || sudo iptables -A FORWARD -i ${WG_IF} -s ${WG_NET} -o ${WAN_IF} -j ACCEPT
sudo iptables -C FORWARD -o ${WG_IF} -d ${WG_NET} -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT 2>/dev/null \
  || sudo iptables -A FORWARD -o ${WG_IF} -d ${WG_NET} -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
for NET in ${EXTRA_NETS}; do
  sudo iptables -C FORWARD -i ${WG_IF} -d ${NET} -j ACCEPT 2>/dev/null || sudo iptables -A FORWARD -i ${WG_IF} -d ${NET} -j ACCEPT
done

# nftables variant
sudo nft list table inet wireguard_${WG_IF} >/dev/null 2>&1 || sudo nft add table inet wireguard_${WG_IF}
sudo nft list chain inet wireguard_${WG_IF} postrouting >/dev/null 2>&1 \
  || sudo nft add chain inet wireguard_${WG_IF} postrouting '{ type nat hook postrouting priority srcnat; }'
sudo nft list chain inet wireguard_${WG_IF} forward >/dev/null 2>&1 \
  || sudo nft add chain inet wireguard_${WG_IF} forward '{ type filter hook forward priority filter; policy accept; }'
sudo nft list chain inet wireguard_${WG_IF} postrouting | grep -q "${WAN_IF}" \
  || sudo nft add rule inet wireguard_${WG_IF} postrouting oifname "${WAN_IF}" ip saddr ${WG_NET} masquerade
sudo nft list chain inet wireguard_${WG_IF} forward | grep -q "iifname \"${WG_IF}\"" \
  || sudo nft add rule inet wireguard_${WG_IF} forward iifname "${WG_IF}" ip saddr ${WG_NET} counter accept
sudo nft list chain inet wireguard_${WG_IF} forward | grep -q "oifname \"${WG_IF}\"" \
  || sudo nft add rule inet wireguard_${WG_IF} forward oifname "${WG_IF}" ip daddr ${WG_NET} ct state established,related counter accept
for NET in ${EXTRA_NETS}; do
  sudo nft list chain inet wireguard_${WG_IF} forward | grep -q "${NET}" \
    || sudo nft add rule inet wireguard_${WG_IF} forward iifname "${WG_IF}" ip daddr ${NET} counter accept
done

# Firewall hooks
if command -v ufw >/dev/null && sudo ufw status | grep -iq "Status: active"; then
  sudo sed -i 's/^DEFAULT_FORWARD_POLICY=.*/DEFAULT_FORWARD_POLICY="ACCEPT"/' /etc/default/ufw
  sudo ufw allow ${WG_PORT}/udp
  sudo ufw route allow in on ${WG_IF} out on ${WAN_IF} to any
fi
if command -v firewall-cmd >/dev/null && sudo firewall-cmd --state >/dev/null 2>&1; then
  sudo firewall-cmd --permanent --zone=${FIREWALLD_ZONE:-public} --add-port=${WG_PORT}/udp
  sudo firewall-cmd --permanent --zone=${FIREWALLD_ZONE:-public} --add-masquerade
  sudo firewall-cmd --reload
fi

sudo systemctl enable --now wg-quick@${WG_IF}
sudo wg show
```

### B. Script: `deployment/ansible/scripts/setup-wireguard-routing.sh`
```bash
cd deployment/ansible
sudo WAN_IF=eth0 WG_IF=wg0 WG_NET=10.8.0.0/24 EXTRA_NETS="192.168.178.0/24 172.20.0.0/16" \
  ./scripts/setup-wireguard-routing.sh
```
*Automatically detects iptables/nftables and optionally configures UFW/firewalld.*

### C. Ansible playbook: `playbooks/wireguard-routing.yml`
```bash
cd deployment/ansible
ansible-playbook -i inventory/production.yml playbooks/wireguard-routing.yml \
  -e "wg_interface=wg0 wg_addr=10.8.0.1/24 wg_net=10.8.0.0/24 wan_interface=eth0" \
  -e '{"extra_nets":["192.168.178.0/24","172.20.0.0/16"],"firewall_backend":"iptables","manage_ufw":true}'
```
*Variables:* `wg_interface`, `wg_addr`, `wg_net`, `wan_interface`, `extra_nets`, `firewall_backend` (`iptables|nftables`), `manage_ufw`, `manage_firewalld`, `firewalld_zone`.

### D. Example `wg0.conf` excerpt
```ini
[Interface]
Address = 10.8.0.1/24
ListenPort = 51820
PrivateKey = <ServerPrivateKey>

# iptables
PostUp = iptables -t nat -C POSTROUTING -s 10.8.0.0/24 -o eth0 -j MASQUERADE 2>/dev/null || iptables -t nat -A POSTROUTING -s 10.8.0.0/24 -o eth0 -j MASQUERADE
PostUp = iptables -C FORWARD -i wg0 -s 10.8.0.0/24 -j ACCEPT 2>/dev/null || iptables -A FORWARD -i wg0 -s 10.8.0.0/24 -j ACCEPT
PostUp = iptables -C FORWARD -o wg0 -d 10.8.0.0/24 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT 2>/dev/null || iptables -A FORWARD -o wg0 -d 10.8.0.0/24 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
PostDown = iptables -t nat -D POSTROUTING -s 10.8.0.0/24 -o eth0 -j MASQUERADE 2>/dev/null || true
PostDown = iptables -D FORWARD -i wg0 -s 10.8.0.0/24 -j ACCEPT 2>/dev/null || true
PostDown = iptables -D FORWARD -o wg0 -d 10.8.0.0/24 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT 2>/dev/null || true

# nftables (instead)
# PostUp = nft -f /etc/nftables.d/wireguard-wg0.nft
# PostDown = nft delete table inet wireguard_wg0 2>/dev/null || true

[Peer]
PublicKey = <ClientPublicKey>
AllowedIPs = 10.8.0.5/32, 192.168.178.0/24, 172.20.0.0/16
PersistentKeepalive = 25
```

### E. Windows client (AllowedIPs & tests)
```ini
[Interface]
Address = 10.8.0.5/32
DNS = 10.8.0.1  # optional

[Peer]
PublicKey = <ServerPublicKey>
Endpoint = vpn.example.com:51820
AllowedIPs = 10.8.0.0/24, 192.168.178.0/24, 172.20.0.0/16
PersistentKeepalive = 25
```
PowerShell:
```powershell
wg show
Test-Connection -Source 10.8.0.5 -ComputerName 10.8.0.1
Test-Connection 192.168.178.1
Test-NetConnection -ComputerName 192.168.178.10 -Port 22
```
Optional: `Set-DnsClientNrptRule -Namespace "internal.lan" -NameServers 10.8.0.1`.

### F. Troubleshooting & rollback
- Checks: `ip r`, `ip route get <target>`, `iptables -t nat -S`, `nft list ruleset`, `sysctl net.ipv4.ip_forward`, `wg show`, `tcpdump -i wg0`, `tcpdump -i eth0 host 10.8.0.5`.
- Common mistakes: wrong WAN interface, missing forwarding/NAT, duplicate firewalls (iptables + nftables), Docker NAT collisions, active policy routing.
- Rollback:
  - `sudo rm /etc/sysctl.d/99-wg0-forward.conf && sudo sysctl -w net.ipv4.ip_forward=0`
  - iptables: remove the rules with `iptables -D` (see above).
  - nftables: `sudo nft delete table inet wireguard_wg0`.
  - UFW: `sudo ufw delete allow 51820/udp`, remove the route rules, reset `DEFAULT_FORWARD_POLICY`.
  - firewalld: `firewall-cmd --permanent --remove-port=51820/udp`, `--remove-masquerade`, `--reload`.
  - Service: `sudo systemctl disable --now wg-quick@wg0`.

## Troubleshooting

### WireGuard does not start

```bash
# Check the status
sudo systemctl status wg-quick@wg0

# Show logs
sudo journalctl -u wg-quick@wg0 -f

# Start manually
sudo wg-quick up wg0
```

### Client cannot connect

1. **Check the firewall:**
   ```bash
   sudo ufw status
   sudo iptables -L -n | grep 51820
   ```

2. **Check the server logs:**
   ```bash
   sudo journalctl -u wg-quick@wg0 -f
   ```

3. **Check the server status:**
   ```bash
   sudo wg show
   ```

4. **Check routing:**
   ```bash
   sudo ip route show
   ```

### IP forwarding not active

```bash
# Enable manually
echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward

# Make it permanent
echo 'net.ipv4.ip_forward=1' | sudo tee -a /etc/sysctl.conf
sudo sysctl -p
```

## Removing a Client

To remove a client, edit the server config (an in-place alternative is sketched after this block):

```bash
# On the server
sudo nano /etc/wireguard/wg0.conf
# Remove the [Peer] block for the client

sudo wg-quick down wg0
sudo wg-quick up wg0

# Optional: delete the client config
sudo rm /etc/wireguard/clients/clientname.conf
```
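
Alternatively, a peer can be removed from the running interface without bouncing it; a sketch, assuming the client's public key is at hand:

```bash
# Remove the peer from the live interface (does not edit wg0.conf)
sudo wg set wg0 peer "<ClientPublicKey>" remove

# Persist the current runtime state back into the config file
sudo wg-quick save wg0
```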

## Retrieving the Server Public Key

```bash
# On the server
cat /etc/wireguard/wg0_public.key
# or
sudo cat /etc/wireguard/wg0_private.key | wg pubkey
```

## Best Practices

1. **Back up the keys**: Store the server keys safely:
   ```bash
   sudo tar czf wireguard-backup.tar.gz /etc/wireguard/
   ```

2. **Regular updates:**
   ```bash
   sudo apt update && sudo apt upgrade wireguard wireguard-tools
   ```

3. **Monitoring**: Monitor VPN connections:
   ```bash
   sudo wg show
   ```

4. **Security**:
   - Manage client keys securely
   - Remove unused clients
   - Use strong passwords for server access

## Support

In case of problems:
1. Check the logs: `sudo journalctl -u wg-quick@wg0`
2. Check the status: `sudo wg show`
3. Check the firewall: `sudo ufw status`
4. Test connectivity: `ping 10.8.0.1` (from the client)
301
deployment/legacy/ansible/ansible/playbooks/README.md
Normal file
@@ -0,0 +1,301 @@
# Ansible Playbooks - Overview

## New Structure

The playbooks have been reorganized into a clear directory structure:

```
playbooks/
├── setup/                    # Initial setup
│   ├── infrastructure.yml
│   ├── gitea.yml
│   └── ssl.yml
├── deploy/                   # Deployment
│   ├── complete.yml
│   ├── image.yml
│   └── code.yml
├── manage/                   # Management (consolidated)
│   ├── traefik.yml
│   ├── gitea.yml
│   └── application.yml
├── diagnose/                 # Diagnosis (consolidated)
│   ├── gitea.yml
│   ├── traefik.yml
│   └── application.yml
└── maintenance/              # Maintenance
    ├── backup.yml
    ├── backup-before-redeploy.yml
    ├── cleanup.yml
    ├── rollback-redeploy.yml
    └── system.yml
```

## Available Playbooks

> **Note**: Most playbooks have been refactored into reusable roles. The playbooks are now wrappers that call the corresponding role tasks. This improves reusability and maintainability and follows Ansible best practices.

### Setup (Initial Setup)

- **`setup/infrastructure.yml`** - Deploys all stacks (Traefik, PostgreSQL, Redis, Registry, Gitea, Monitoring, Production)
- **`setup/gitea.yml`** - Gitea initial configuration (wrapper for the `gitea` role, `tasks_from: setup`)
- **`setup/ssl.yml`** - SSL certificate setup (wrapper for the `traefik` role, `tasks_from: ssl`)
- **`setup/redeploy-traefik-gitea-clean.yml`** - Clean redeployment of Traefik and Gitea stacks
- **`setup/REDEPLOY_GUIDE.md`** - Step-by-step guide for redeployment

### Deployment

- **`deploy/complete.yml`** - Complete deployment (code + image + dependencies)
- **`deploy/image.yml`** - Docker image deployment (used by CI/CD workflows)
- **`deploy/code.yml`** - Deploy application code via Git (wrapper for the `application` role, `tasks_from: deploy_code`)

### Management (Consolidated)

#### Traefik Management
- **`manage/traefik.yml`** - Consolidated Traefik management
  - `--tags stabilize`: Fix acme.json, ensure running, monitor stability
  - `--tags disable-auto-restart`: Check and document auto-restart mechanisms
- **`restart-traefik.yml`** - Restart the Traefik container (wrapper for the `traefik` role, `tasks_from: restart`)
- **`recreate-traefik.yml`** - Recreate the Traefik container (wrapper for the `traefik` role, `tasks_from: restart` with `traefik_restart_action: recreate`)
- **`deploy-traefik-config.yml`** - Deploy Traefik configuration files (wrapper for the `traefik` role, `tasks_from: config`)
- **`check-traefik-acme-logs.yml`** - Check Traefik ACME challenge logs (wrapper for the `traefik` role, `tasks_from: logs`)

#### Gitea Management
- **`manage/gitea.yml`** - Consolidated Gitea management
  - `--tags restart`: Restart the Gitea container
  - `--tags fix-timeouts`: Restart Gitea and Traefik to fix timeouts
  - `--tags fix-ssl`: Fix SSL/routing issues
  - `--tags fix-servers-transport`: Update the ServersTransport configuration
  - `--tags complete`: Complete fix (stop runner, restart services, verify)
- **`check-and-restart-gitea.yml`** - Check and restart Gitea if unhealthy (wrapper for the `gitea` role, `tasks_from: restart`)
- **`fix-gitea-runner-config.yml`** - Fix the Gitea runner configuration (wrapper for the `gitea` role, `tasks_from: runner` with `gitea_runner_action: fix`)
- **`register-gitea-runner.yml`** - Register the Gitea runner (wrapper for the `gitea` role, `tasks_from: runner` with `gitea_runner_action: register`)
- **`update-gitea-config.yml`** - Update the Gitea configuration (wrapper for the `gitea` role, `tasks_from: config`)
- **`setup-gitea-repository.yml`** - Set up the Gitea repository (wrapper for the `gitea` role, `tasks_from: repository`)

#### Application Management
- **`manage/application.yml`** - Consolidated application management (to be created)
- **`sync-application-code.yml`** - Synchronize application code via rsync (wrapper for the `application` role, `tasks_from: deploy_code` with `application_deployment_method: rsync`)
- **`install-composer-dependencies.yml`** - Install Composer dependencies (wrapper for the `application` role, `tasks_from: composer`)
- **`check-container-status.yml`** - Check container status (wrapper for the `application` role, `tasks_from: health_check`)
- **`check-container-logs.yml`** - Check container logs (wrapper for the `application` role, `tasks_from: logs`)
- **`check-worker-logs.yml`** - Check worker and scheduler logs (wrapper for the `application` role, `tasks_from: logs` with `application_logs_check_vendor: true`)
- **`check-final-status.yml`** - Check final container status (wrapper for the `application` role, `tasks_from: health_check` with `application_health_check_final: true`)
- **`fix-container-issues.yml`** - Fix container issues (wrapper for the `application` role, `tasks_from: containers` with `application_container_action: fix`)
- **`fix-web-container.yml`** - Fix web container permissions (wrapper for the `application` role, `tasks_from: containers` with `application_container_action: fix-web`)
- **`recreate-containers-with-env.yml`** - Recreate containers with environment variables (wrapper for the `application` role, `tasks_from: containers` with `application_container_action: recreate-with-env`)
- **`sync-and-recreate-containers.yml`** - Sync and recreate containers (wrapper for the `application` role, `tasks_from: containers` with `application_container_action: sync-recreate`)

### Diagnosis (Consolidated)

#### Gitea Diagnosis
- **`diagnose/gitea.yml`** - Consolidated Gitea diagnosis
  - Basic checks (always): Container status, health endpoints, network connectivity, service discovery
  - `--tags deep`: Resource usage, multiple connection tests, log analysis
  - `--tags complete`: All checks including app.ini, ServersTransport, etc.

#### Traefik Diagnosis
- **`diagnose/traefik.yml`** - Consolidated Traefik diagnosis
  - Basic checks (always): Container status, restart count, recent logs
  - `--tags restart-source`: Find the source of restart loops (cronjobs, systemd, scripts)
  - `--tags monitor`: Monitor for restarts over time

### Maintenance

- **`maintenance/backup.yml`** - Creates backups of PostgreSQL, application data, Gitea, Registry
- **`maintenance/backup-before-redeploy.yml`** - Backup before redeploy (Gitea data, SSL certificates, configurations)
- **`maintenance/rollback-redeploy.yml`** - Rollback from a redeploy backup
- **`maintenance/cleanup.yml`** - Stops and removes all containers, cleans up networks and volumes (for a full server reset)
- **`maintenance/system.yml`** - System updates, unattended upgrades, Docker pruning
- **`rollback.yml`** - Rollback to a previous version

### WireGuard

- **`generate-wireguard-client.yml`** - Generates a WireGuard client config
- **`wireguard-routing.yml`** - Configures WireGuard routing
- **`setup-wireguard-host.yml`** - WireGuard VPN setup

### Initial Deployment

- **`build-initial-image.yml`** - Builds and pushes the initial Docker image (for the first deployment)

### CI/CD & Development

- **`setup-gitea-runner-ci.yml`** - Gitea runner CI setup
- **`install-docker.yml`** - Docker installation on the server

## Removed/Consolidated Playbooks

The following playbooks were consolidated or removed:

### Consolidated into `diagnose/gitea.yml`:
- ~~`diagnose-gitea-timeouts.yml`~~
- ~~`diagnose-gitea-timeout-deep.yml`~~
- ~~`diagnose-gitea-timeout-live.yml`~~
- ~~`diagnose-gitea-timeouts-complete.yml`~~
- ~~`comprehensive-gitea-diagnosis.yml`~~

### Consolidated into `manage/gitea.yml`:
- ~~`fix-gitea-timeouts.yml`~~
- ~~`fix-gitea-traefik-connection.yml`~~
- ~~`fix-gitea-ssl-routing.yml`~~
- ~~`fix-gitea-servers-transport.yml`~~
- ~~`fix-gitea-complete.yml`~~
- ~~`restart-gitea-complete.yml`~~
- ~~`restart-gitea-with-cache.yml`~~

### Consolidated into `diagnose/traefik.yml`:
- ~~`diagnose-traefik-restarts.yml`~~
- ~~`find-traefik-restart-source.yml`~~
- ~~`monitor-traefik-restarts.yml`~~
- ~~`monitor-traefik-continuously.yml`~~
- ~~`verify-traefik-fix.yml`~~

### Consolidated into `manage/traefik.yml`:
- ~~`stabilize-traefik.yml`~~
- ~~`disable-traefik-auto-restarts.yml`~~

### Removed (outdated/redundant):
- ~~`update-gitea-traefik-service.yml`~~ - Deprecated (as documented in the code)
- ~~`ensure-gitea-traefik-discovery.yml`~~ - Redundant
- ~~`test-gitea-after-fix.yml`~~ - Temporary
- ~~`find-ansible-automation-source.yml`~~ - Temporary

### Moved:
- `setup-infrastructure.yml` → `setup/infrastructure.yml`
- `deploy-complete.yml` → `deploy/complete.yml`
- `deploy-image.yml` → `deploy/image.yml`
- `deploy-application-code.yml` → `deploy/code.yml`
- `setup-ssl-certificates.yml` → `setup/ssl.yml`
- `setup-gitea-initial-config.yml` → `setup/gitea.yml`
- `cleanup-all-containers.yml` → `maintenance/cleanup.yml`

## Verwendung
|
||||
|
||||
### Standard-Verwendung
|
||||
|
||||
```bash
|
||||
cd deployment/ansible
|
||||
ansible-playbook -i inventory/production.yml playbooks/<playbook>.yml --vault-password-file secrets/.vault_pass
|
||||
```
|
||||
|
||||
### Konsolidierte Playbooks mit Tags
|
||||
|
||||
**Gitea Diagnose:**
|
||||
```bash
|
||||
# Basic diagnosis (default)
|
||||
ansible-playbook -i inventory/production.yml playbooks/diagnose/gitea.yml --vault-password-file secrets/.vault_pass
|
||||
|
||||
# Deep diagnosis
|
||||
ansible-playbook -i inventory/production.yml playbooks/diagnose/gitea.yml --tags deep --vault-password-file secrets/.vault_pass
|
||||
|
||||
# Complete diagnosis
|
||||
ansible-playbook -i inventory/production.yml playbooks/diagnose/gitea.yml --tags complete --vault-password-file secrets/.vault_pass
|
||||
```
|
||||
|
||||
**Gitea Management:**
|
||||
```bash
|
||||
# Restart Gitea
|
||||
ansible-playbook -i inventory/production.yml playbooks/manage/gitea.yml --tags restart --vault-password-file secrets/.vault_pass
|
||||
|
||||
# Fix timeouts
|
||||
ansible-playbook -i inventory/production.yml playbooks/manage/gitea.yml --tags fix-timeouts --vault-password-file secrets/.vault_pass
|
||||
|
||||
# Complete fix
|
||||
ansible-playbook -i inventory/production.yml playbooks/manage/gitea.yml --tags complete --vault-password-file secrets/.vault_pass
|
||||
```
|
||||
|
||||
**Traefik Diagnose:**
|
||||
```bash
|
||||
# Basic diagnosis
|
||||
ansible-playbook -i inventory/production.yml playbooks/diagnose/traefik.yml --vault-password-file secrets/.vault_pass
|
||||
|
||||
# Find restart source
|
||||
ansible-playbook -i inventory/production.yml playbooks/diagnose/traefik.yml --tags restart-source --vault-password-file secrets/.vault_pass
|
||||
|
||||
# Monitor restarts
|
||||
ansible-playbook -i inventory/production.yml playbooks/diagnose/traefik.yml --tags monitor --vault-password-file secrets/.vault_pass
|
||||
```
|
||||
|
||||
**Traefik Management:**
|
||||
```bash
|
||||
# Stabilize Traefik
|
||||
ansible-playbook -i inventory/production.yml playbooks/manage/traefik.yml --tags stabilize --vault-password-file secrets/.vault_pass
|
||||
```
|
||||
|
||||
**Redeploy:**
|
||||
```bash
|
||||
# With automatic backup
|
||||
ansible-playbook -i inventory/production.yml playbooks/setup/redeploy-traefik-gitea-clean.yml --vault-password-file secrets/.vault_pass
|
||||
|
||||
# With existing backup
|
||||
ansible-playbook -i inventory/production.yml playbooks/setup/redeploy-traefik-gitea-clean.yml \
|
||||
--vault-password-file secrets/.vault_pass \
|
||||
-e "backup_name=redeploy-backup-1234567890" \
|
||||
-e "skip_backup=true"
|
||||
```
|
||||
|
||||
**Rollback:**
|
||||
```bash
|
||||
ansible-playbook -i inventory/production.yml playbooks/maintenance/rollback-redeploy.yml \
|
||||
--vault-password-file secrets/.vault_pass \
|
||||
-e "backup_name=redeploy-backup-1234567890"
|
||||
```
|
||||
|
||||
### Role-basierte Playbooks
|
||||
|
||||
Die meisten Playbooks sind jetzt Wrapper, die Roles verwenden. Die Funktionalität bleibt gleich, aber die Implementierung ist jetzt in wiederverwendbaren Roles organisiert:

**Example: Traefik Restart**

```bash
# Old method (still works, but now calls the role):
ansible-playbook -i inventory/production.yml playbooks/restart-traefik.yml --vault-password-file secrets/.vault_pass

# Direct role usage (alternative method):
ansible-playbook -i inventory/production.yml -e "traefik_restart_action=restart" -e "traefik_show_status=true" playbooks/restart-traefik.yml
```

**Example: Gitea Runner Fix**

```bash
ansible-playbook -i inventory/production.yml playbooks/fix-gitea-runner-config.yml --vault-password-file secrets/.vault_pass
```

**Example: Application Code Deployment**

```bash
# Git-based (default):
ansible-playbook -i inventory/production.yml playbooks/deploy/code.yml \
  -e "deployment_environment=staging" \
  -e "git_branch=staging" \
  --vault-password-file secrets/.vault_pass

# Rsync-based (for initial deployment):
ansible-playbook -i inventory/production.yml playbooks/sync-application-code.yml \
  --vault-password-file secrets/.vault_pass
```

## Role Structure

The playbooks now use the following roles:

### `traefik` Role
- **Tasks**: `restart`, `config`, `logs`, `ssl`
- **Location**: `roles/traefik/tasks/`
- **Defaults**: `roles/traefik/defaults/main.yml`

### `gitea` Role
- **Tasks**: `restart`, `runner`, `config`, `setup`, `repository`
- **Location**: `roles/gitea/tasks/`
- **Defaults**: `roles/gitea/defaults/main.yml`

### `application` Role
- **Tasks**: `deploy_code`, `composer`, `containers`, `health_check`, `logs`, `deploy`
- **Location**: `roles/application/tasks/`
- **Defaults**: `roles/application/defaults/main.yml`
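
All three roles are consumed through the same wrapper pattern; a minimal sketch of such a wrapper (it mirrors the wrapper playbooks included later in this commit):

```yaml
---
# Minimal wrapper: the playbook only selects a role task file,
# all actual logic lives in the role.
- hosts: production
  gather_facts: yes
  become: no
  tasks:
    - name: Include traefik restart tasks
      ansible.builtin.include_role:
        name: traefik
        tasks_from: restart
      tags:
        - traefik
        - restart
```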

## Advantages of the New Structure

1. **Clarity**: Clear directory structure organized by function
2. **Consolidation**: Redundant playbooks merged
3. **Tags**: Selective execution via tags
4. **Reusability**: Tasks can be used in multiple playbooks
5. **Maintainability**: Changes are made centrally in roles
6. **Best Practices**: Follows Ansible recommendations
@@ -0,0 +1,157 @@
# Traefik Restart Loop - Diagnosis & Solution

## Problem

Traefik is regularly stopped with the messages:
- "I have to go..."
- "Stopping server gracefully"

This leads to:
- ACME challenge failures
- External timeouts
- Interrupted SSL certificate renewals

## Diagnosis Performed

### 1. Extended Diagnosis (`diagnose-traefik-restarts.yml`)

**Areas checked:**
- ✅ Systemd timers (none found that stop Traefik)
- ✅ All user crontabs (none found)
- ✅ System-wide cronjobs (none found)
- ✅ Gitea workflows (found: `build-image.yml`, `manual-deploy.yml` - but neither triggers Traefik restarts)
- ✅ Custom systemd services/timers (none found)
- ✅ At jobs (none found)
- ✅ Docker Compose watch mode (not enabled)
- ✅ Ansible `traefik_auto_restart` setting (checkable)
- ✅ Port configuration (ports 80/443 correctly mapped to Traefik)
- ✅ Network configuration (checked)

**Results:**
- ❌ No automatic restart mechanisms found
- ✅ Ports 80/443 are configured correctly
- ✅ Traefik runs stably (no restarts during 2-minute monitoring)

### 2. acme.json Permissions (`fix-traefik-acme-permissions.yml`)

**Results:**
- ✅ acme.json has correct permissions (chmod 600)
- ✅ Owner/group correct (deploy:deploy)
- ✅ Traefik container can write to acme.json

### 3. Auto-Restart Mechanisms (`disable-traefik-auto-restarts.yml`)

**Results:**
- ❌ No cronjobs found that restart Traefik
- ❌ No systemd timers/services found
- ℹ️ Ansible `traefik_auto_restart` can be overridden in group_vars

### 4. Traefik Stabilization (`stabilize-traefik.yml`)

**Results:**
- ✅ Traefik runs stably (41 minutes uptime)
- ✅ No restarts during 2-minute monitoring
- ✅ Traefik is healthy
- ✅ Ports 80/443 configured correctly

## Possible Causes (not found, but worth checking)

1. **Docker service restarts**: On 08.11. at 16:12:58 the Docker service was stopped, which stopped all containers
   - Check: `journalctl -u docker.service` for recurring stops
   - Check: system reboots or kernel updates

2. **Unattended upgrades**: Can lead to reboots
   - Check: `journalctl -u unattended-upgrades`

3. **Manual restarts**: Someone could be restarting Traefik manually
   - Check: Docker events for stop events (see the sketch after this list)
   - Check: SSH login history

4. **Gitea workflows**: Can indirectly affect Traefik
   - `build-image.yml`: Calls `deploy-image.yml` (no Traefik restarts)
   - `manual-deploy.yml`: Calls `deploy-image.yml` (no Traefik restarts)
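
If manual restarts are suspected, the Docker event log can also be queried non-interactively. A minimal sketch as an ad-hoc playbook (the playbook itself and the 24-hour window are assumptions for illustration, not one of the existing playbooks):

```yaml
---
# Sketch: surface recent stop/kill events for the traefik container
# so manual restarts leave a visible trail.
- hosts: production
  gather_facts: no
  tasks:
    - name: Collect Docker stop/kill events for traefik (last 24h)
      ansible.builtin.shell: >-
        docker events --since 24h --until "$(date +%s)"
        --filter container=traefik
        --filter event=stop
        --filter event=kill
      register: traefik_stop_events
      changed_when: false
      failed_when: false

    - name: Show collected events
      ansible.builtin.debug:
        msg: "{{ traefik_stop_events.stdout_lines | default([]) }}"
```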

## Available Playbooks

### Diagnosis
```bash
# Run extended diagnosis
ansible-playbook -i inventory/production.yml \
  playbooks/diagnose-traefik-restarts.yml \
  --vault-password-file secrets/.vault_pass
```

### acme.json Permissions
```bash
# Check and fix acme.json permissions
ansible-playbook -i inventory/production.yml \
  playbooks/fix-traefik-acme-permissions.yml \
  --vault-password-file secrets/.vault_pass
```

### Disable Auto-Restarts
```bash
# Check auto-restart mechanisms
ansible-playbook -i inventory/production.yml \
  playbooks/disable-traefik-auto-restarts.yml \
  --vault-password-file secrets/.vault_pass
```

### Stabilize Traefik
```bash
# Stabilize and monitor Traefik (10 minutes)
ansible-playbook -i inventory/production.yml \
  playbooks/stabilize-traefik.yml \
  -e "traefik_stabilize_wait_minutes=10" \
  --vault-password-file secrets/.vault_pass
```

## Recommended Next Steps

1. **Longer monitoring**: Run `stabilize-traefik.yml` for 10 minutes to see whether restarts occur
   ```bash
   ansible-playbook -i inventory/production.yml \
     playbooks/stabilize-traefik.yml \
     -e "traefik_stabilize_wait_minutes=10" \
     --vault-password-file secrets/.vault_pass
   ```

2. **Monitor Docker events**: Check Docker events for stop events
   ```bash
   docker events --filter container=traefik --format "{{.Time}} {{.Action}}"
   ```

3. **Check Traefik logs**: Search for stop messages
   ```bash
   cd /home/deploy/deployment/stacks/traefik
   docker compose logs traefik | grep -E "I have to go|Stopping server gracefully|SIGTERM|SIGINT"
   ```

4. **Check Docker service logs**: Check whether the Docker service is being stopped regularly
   ```bash
   journalctl -u docker.service --since "7 days ago" | grep -i "stop\|restart"
   ```

5. **Check system reboots**: Check whether reboots occur regularly
   ```bash
   last reboot
   uptime
   ```

## Key Findings

- ✅ **No automatic restart mechanisms found**: No cronjobs, systemd timers, or services that regularly stop Traefik
- ✅ **acme.json is configured correctly**: Permissions (600) and container access are correct
- ✅ **Ports are correct**: Ports 80/443 point to Traefik
- ✅ **Traefik runs stably**: No restarts during the 2-minute monitoring window
- ⚠️ **Docker service was stopped once**: On 08.11. at 16:12:58 - could be the cause

## Conclusion

The diagnosis shows that **no automatic restart mechanisms** are active. The "I have to go..." messages most likely stem from:
1. A one-time Docker service stop (08.11. 16:12:58)
2. System reboots (not visible in the history, but possible)
3. Manual restarts (not provable)

**Recommendation**: Monitor Traefik for 10-30 minutes with `stabilize-traefik.yml` to see whether further restarts occur. If none occur, the problem was most likely the one-time Docker service stop.
268
deployment/legacy/ansible/ansible/playbooks/backup.yml
Normal file

@@ -0,0 +1,268 @@
---
- name: Create Comprehensive Backups
  hosts: production
  gather_facts: yes
  become: no

  vars:
    backup_retention_days: "{{ backup_retention_days | default(7) }}"

  pre_tasks:
    - name: Ensure backup directory exists
      file:
        path: "{{ backups_path }}"
        state: directory
        mode: '0755'
      become: yes

    - name: Create timestamp for backup
      set_fact:
        backup_timestamp: "{{ ansible_date_time.epoch }}"
        backup_date: "{{ ansible_date_time.date }}"
        backup_time: "{{ ansible_date_time.time }}"

    - name: Create backup directory for this run
      file:
        path: "{{ backups_path }}/backup_{{ backup_date }}_{{ backup_time }}"
        state: directory
        mode: '0755'
      register: backup_dir
      become: yes

    - name: Set backup directory path
      set_fact:
        current_backup_dir: "{{ backup_dir.path }}"

  tasks:
    - name: Backup PostgreSQL Database
      when: backup_postgresql | default(true) | bool
      block:
        - name: Check if PostgreSQL stack is running
          shell: docker compose -f {{ stacks_base_path }}/postgresql/docker-compose.yml ps --format json | jq -r '.[] | select(.Service=="postgres") | .State' | grep -q "running"
          register: postgres_running
          changed_when: false
          failed_when: false

        - name: Get PostgreSQL container name
          shell: docker compose -f {{ stacks_base_path }}/postgresql/docker-compose.yml ps --format json | jq -r '.[] | select(.Service=="postgres") | .Name'
          register: postgres_container
          changed_when: false
          when: postgres_running.rc == 0

        - name: Read PostgreSQL environment variables
          shell: |
            cd {{ stacks_base_path }}/postgresql
            grep -E "^POSTGRES_(DB|USER|PASSWORD)=" .env 2>/dev/null || echo ""
          register: postgres_env
          changed_when: false
          failed_when: false
          no_log: true

        - name: Extract PostgreSQL credentials
          set_fact:
            postgres_db: "{{ postgres_env.stdout | regex_search('POSTGRES_DB=([^\\n]+)', '\\1') | first | default('michaelschiemer') }}"
            postgres_user: "{{ postgres_env.stdout | regex_search('POSTGRES_USER=([^\\n]+)', '\\1') | first | default('postgres') }}"
            postgres_password: "{{ postgres_env.stdout | regex_search('POSTGRES_PASSWORD=([^\\n]+)', '\\1') | first | default('') }}"
          when: postgres_running.rc == 0
          no_log: true
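        # regex_search with a capture-group argument returns a list, hence
        # "| first"; default() covers the case where .env had no match, and
        # no_log keeps the extracted password out of the run output.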
        - name: Create PostgreSQL backup
          shell: |
            cd {{ stacks_base_path }}/postgresql
            PGPASSWORD="{{ postgres_password }}" docker compose exec -T postgres pg_dump \
              -U {{ postgres_user }} \
              -d {{ postgres_db }} \
              --clean \
              --if-exists \
              --create \
              --no-owner \
              --no-privileges \
              | gzip > {{ current_backup_dir }}/postgresql_{{ postgres_db }}_{{ backup_date }}_{{ backup_time }}.sql.gz
          when: postgres_running.rc == 0
          no_log: true
        - name: Verify PostgreSQL backup
          stat:
            path: "{{ current_backup_dir }}/postgresql_{{ postgres_db }}_{{ backup_date }}_{{ backup_time }}.sql.gz"
          register: postgres_backup_file
          when: postgres_running.rc == 0

        - name: Display PostgreSQL backup status
          debug:
            msg: "PostgreSQL backup: {{ 'SUCCESS' if (postgres_running.rc == 0 and postgres_backup_file.stat.exists) else 'SKIPPED (PostgreSQL not running)' }}"

    - name: Backup Application Data
      when: backup_application_data | default(true) | bool
      block:
        - name: Check if production stack is running
          shell: docker compose -f {{ stacks_base_path }}/production/docker-compose.base.yml -f {{ stacks_base_path }}/production/docker-compose.production.yml ps --format json | jq -r '.[] | select(.Service=="php") | .State' | grep -q "running"
          register: app_running
          changed_when: false
          failed_when: false

        - name: Backup application storage directory
          archive:
            path: "{{ stacks_base_path }}/production/storage"
            dest: "{{ current_backup_dir }}/application_storage_{{ backup_date }}_{{ backup_time }}.tar.gz"
            format: gz
          when: app_running.rc == 0
          ignore_errors: yes

        - name: Backup application logs
          archive:
            path: "{{ stacks_base_path }}/production/storage/logs"
            dest: "{{ current_backup_dir }}/application_logs_{{ backup_date }}_{{ backup_time }}.tar.gz"
            format: gz
          when: app_running.rc == 0
          ignore_errors: yes

        - name: Backup application .env file
          copy:
            src: "{{ stacks_base_path }}/production/.env"
            dest: "{{ current_backup_dir }}/application_env_{{ backup_date }}_{{ backup_time }}.env"
            remote_src: yes
          when: app_running.rc == 0
          ignore_errors: yes

        - name: Display application backup status
          debug:
            msg: "Application data backup: {{ 'SUCCESS' if app_running.rc == 0 else 'SKIPPED (Application not running)' }}"

    - name: Backup Gitea Data
      when: backup_gitea | default(true) | bool
      block:
        - name: Check if Gitea stack is running
          shell: docker compose -f {{ stacks_base_path }}/gitea/docker-compose.yml ps --format json | jq -r '.[] | select(.Service=="gitea") | .State' | grep -q "running"
          register: gitea_running
          changed_when: false
          failed_when: false

        - name: Get Gitea volume name
          shell: docker compose -f {{ stacks_base_path }}/gitea/docker-compose.yml config --volumes | head -1
          register: gitea_volume
          changed_when: false
          when: gitea_running.rc == 0

        - name: Backup Gitea volume
          shell: |
            docker run --rm \
              -v {{ gitea_volume.stdout }}:/source:ro \
              -v {{ current_backup_dir }}:/backup \
              alpine tar czf /backup/gitea_data_{{ backup_date }}_{{ backup_time }}.tar.gz -C /source .
          when: gitea_running.rc == 0 and gitea_volume.stdout != ""
          ignore_errors: yes

        - name: Display Gitea backup status
          debug:
            msg: "Gitea backup: {{ 'SUCCESS' if (gitea_running.rc == 0 and gitea_volume.stdout != '') else 'SKIPPED (Gitea not running)' }}"

    - name: Backup Docker Registry Images (Optional)
      when: backup_registry | default(false) | bool
      block:
        - name: Check if registry stack is running
          shell: docker compose -f {{ stacks_base_path }}/registry/docker-compose.yml ps --format json | jq -r '.[] | select(.Service=="registry") | .State' | grep -q "running"
          register: registry_running
          changed_when: false
          failed_when: false

        - name: List registry images
          shell: |
            cd {{ stacks_base_path }}/registry
            docker compose exec -T registry registry garbage-collect --dry-run /etc/docker/registry/config.yml 2>&1 | grep -E "repository|tag" || echo "No images found"
          register: registry_images
          changed_when: false
          when: registry_running.rc == 0
          ignore_errors: yes

        - name: Save registry image list
          copy:
            content: "{{ registry_images.stdout }}"
            dest: "{{ current_backup_dir }}/registry_images_{{ backup_date }}_{{ backup_time }}.txt"
          when: registry_running.rc == 0 and registry_images.stdout != ""
          ignore_errors: yes

        - name: Display registry backup status
          debug:
            msg: "Registry backup: {{ 'SUCCESS' if registry_running.rc == 0 else 'SKIPPED (Registry not running)' }}"

    - name: Create backup metadata
      copy:
        content: |
          Backup Date: {{ backup_date }} {{ backup_time }}
          Backup Timestamp: {{ backup_timestamp }}
          Host: {{ inventory_hostname }}

          Components Backed Up:
          - PostgreSQL: {{ 'YES' if ((backup_postgresql | default(true) | bool) and (postgres_running.rc | default(1) == 0)) else 'NO' }}
          - Application Data: {{ 'YES' if ((backup_application_data | default(true) | bool) and (app_running.rc | default(1) == 0)) else 'NO' }}
          - Gitea: {{ 'YES' if ((backup_gitea | default(true) | bool) and (gitea_running.rc | default(1) == 0)) else 'NO' }}
          - Registry: {{ 'YES' if ((backup_registry | default(false) | bool) and (registry_running.rc | default(1) == 0)) else 'NO' }}

          Backup Location: {{ current_backup_dir }}
        dest: "{{ current_backup_dir }}/backup_metadata.txt"
        mode: '0644'

    - name: Verify backup files
      when: verify_backups | default(true) | bool
      block:
        - name: List all backup files
          find:
            paths: "{{ current_backup_dir }}"
            file_type: file
          register: backup_files

        - name: Check backup file sizes
          stat:
            path: "{{ item.path }}"
          register: backup_file_stats
          loop: "{{ backup_files.files }}"

        - name: Display backup summary
          debug:
            msg: |
              Backup Summary:
              - Total files: {{ backup_files.files | length }}
              - Total size: {{ backup_file_stats.results | map(attribute='stat.size') | sum | int / 1024 / 1024 }} MB
              - Location: {{ current_backup_dir }}

        - name: Fail if no backup files created
          fail:
            msg: "No backup files were created in {{ current_backup_dir }}"
          when: backup_files.files | length == 0

    - name: Cleanup old backups
      block:
        - name: Find old backup directories
          find:
            paths: "{{ backups_path }}"
            patterns: "backup_*"
            file_type: directory
          register: backup_dirs

        - name: Calculate cutoff date
          set_fact:
            cutoff_timestamp: "{{ (ansible_date_time.epoch | int) - (backup_retention_days | int * 86400) }}"
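        # Cutoff math: one retention day is 86400 s; find reports mtime as a
        # float epoch, hence the int casts on both sides of the comparison.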
        - name: Remove old backup directories
          file:
            path: "{{ item.path }}"
            state: absent
          loop: "{{ backup_dirs.files }}"
          when: item.mtime | int < cutoff_timestamp | int
          become: yes

        - name: Display cleanup summary
          debug:
            msg: "Cleaned up backups older than {{ backup_retention_days }} days"

  post_tasks:
    - name: Display final backup status
      debug:
        msg: |
          ==========================================
          Backup completed successfully!
          ==========================================
          Backup location: {{ current_backup_dir }}
          Retention: {{ backup_retention_days }} days
          ==========================================
@@ -0,0 +1,432 @@
---
- name: Build and Push Initial Docker Image
  hosts: production
  become: no
  gather_facts: yes

  vars:
    vault_file: "{{ playbook_dir }}/../secrets/production.vault.yml"
    build_repo_path: "/home/deploy/michaelschiemer"
    build_repo_url: "{{ git_repository_url_default | default('https://git.michaelschiemer.de/michael/michaelschiemer.git') }}"
    build_repo_branch_default: "main"
    # Local repository path for cloning (temporary)
    local_repo_path: "/tmp/michaelschiemer-build-{{ ansible_date_time.epoch }}"
    # Check if local repository exists (project root)
    local_repo_check_path: "{{ playbook_dir | default('') | dirname | dirname | dirname }}"
    image_name: "{{ app_name | default('framework') }}"
    image_tag_default: "latest"
    registry_url: "{{ docker_registry | default('localhost:5000') }}"
    registry_username: "{{ vault_docker_registry_username | default(docker_registry_username_default | default('admin')) }}"
    registry_password: "{{ vault_docker_registry_password | default('') }}"

  pre_tasks:
    - name: Verify vault file exists
      ansible.builtin.stat:
        path: "{{ vault_file }}"
      register: vault_stat
      delegate_to: localhost
      become: no

    - name: Load vault secrets
      ansible.builtin.include_vars:
        file: "{{ vault_file }}"
      when: vault_stat.stat.exists
      no_log: yes
      ignore_errors: yes
      delegate_to: localhost
      become: no

    - name: Verify registry password is set
      ansible.builtin.fail:
        msg: |
          Registry password is required!

          Please set vault_docker_registry_password in:
          {{ vault_file }}

          Or pass it via extra vars:
          -e "registry_password=your-password"
      when: registry_password | string | trim == ''

  tasks:
    - name: Set build variables
      ansible.builtin.set_fact:
        build_repo_branch: "{{ build_repo_branch | default(build_repo_branch_default) }}"
        image_tag: "{{ build_image_tag | default(image_tag_default) }}"

    - name: Display build information
      ansible.builtin.debug:
        msg: |
          Building Docker Image:
          - Repository: {{ build_repo_url }}
          - Branch: {{ build_repo_branch }}
          - Build Path: {{ build_repo_path }}
          - Registry: {{ registry_url }}
          - Image: {{ image_name }}:{{ image_tag }}
          - Username: {{ registry_username }}

    - name: Check if local repository exists (project root)
      ansible.builtin.stat:
        path: "{{ local_repo_check_path }}/.git"
      delegate_to: localhost
      register: local_repo_exists
      become: no

    - name: Configure Git to skip SSL verification for git.michaelschiemer.de (local)
      ansible.builtin.command: |
        git config --global http.https://git.michaelschiemer.de.sslVerify false
      delegate_to: localhost
      changed_when: false
      failed_when: false
      become: no
      when: not local_repo_exists.stat.exists

    - name: Determine Git URL with authentication if token is available
      ansible.builtin.set_fact:
        git_repo_url_with_auth: >-
          {%- if vault_git_token is defined and vault_git_token | string | trim != '' -%}
          https://{{ vault_git_token }}@git.michaelschiemer.de/michael/michaelschiemer.git
          {%- elif vault_git_username is defined and vault_git_username | string | trim != '' and vault_git_password is defined and vault_git_password | string | trim != '' -%}
          https://{{ vault_git_username }}:{{ vault_git_password }}@git.michaelschiemer.de/michael/michaelschiemer.git
          {%- else -%}
          {{ build_repo_url }}
          {%- endif -%}
      no_log: yes
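    # The token (or username:password pair) is embedded directly in the clone
    # URL; no_log on this fact and on the clone task below keeps the
    # credentials out of the Ansible output.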
    - name: Debug Git URL (without credentials)
      ansible.builtin.debug:
        msg: |
          Git Repository Configuration:
          - Original URL: {{ build_repo_url }}
          - Local repo exists: {{ 'YES' if local_repo_exists.stat.exists else 'NO' }}
          - Local repo path: {{ local_repo_check_path }}
          - Using authentication: {{ 'YES' if (vault_git_token is defined and vault_git_token | string | trim != '') or (vault_git_username is defined and vault_git_username | string | trim != '') else 'NO' }}
          - Auth method: {{ 'Token' if (vault_git_token is defined and vault_git_token | string | trim != '') else 'Username/Password' if (vault_git_username is defined and vault_git_username | string | trim != '') else 'None' }}
      no_log: yes

    - name: Use existing local repository or clone to temporary location
      block:
        - name: Clone repository to temporary location (local)
          ansible.builtin.git:
            repo: "{{ git_repo_url_with_auth }}"
            dest: "{{ local_repo_path }}"
            version: "{{ build_repo_branch }}"
            force: yes
            update: yes
          delegate_to: localhost
          register: git_result
          changed_when: git_result.changed
          environment:
            GIT_SSL_NO_VERIFY: "1"
          no_log: yes
          when: not local_repo_exists.stat.exists

        - name: Set local repository path to existing project root
          ansible.builtin.set_fact:
            source_repo_path: "{{ local_repo_check_path }}"
          when: local_repo_exists.stat.exists

        - name: Set local repository path to cloned temporary location
          ansible.builtin.set_fact:
            source_repo_path: "{{ local_repo_path }}"
          when: not local_repo_exists.stat.exists

        - name: Display repository source
          ansible.builtin.debug:
            msg: |
              Repository source:
              - Using existing local repo: {{ 'YES' if local_repo_exists.stat.exists else 'NO' }}
              - Source path: {{ source_repo_path }}
              - Branch: {{ build_repo_branch }}

    - name: Ensure build directory exists on server
      ansible.builtin.file:
        path: "{{ build_repo_path }}"
        state: directory
        mode: '0755'
        owner: "{{ ansible_user }}"
        group: "{{ ansible_user }}"

    - name: Copy repository to server (excluding .git and build artifacts)
      ansible.builtin.synchronize:
        src: "{{ source_repo_path }}/"
        dest: "{{ build_repo_path }}/"
        delete: no
        recursive: yes
        rsync_opts:
          - "--chmod=D755,F644"
          - "--exclude=.git"
          - "--exclude=.gitignore"
          - "--exclude=node_modules"
          - "--exclude=vendor"
          - "--exclude=.env"
          - "--exclude=.env.*"
          - "--exclude=*.log"
          - "--exclude=.idea"
          - "--exclude=.vscode"
          - "--exclude=.DS_Store"
          - "--exclude=*.swp"
          - "--exclude=*.swo"
          - "--exclude=*~"
          - "--exclude=.phpunit.result.cache"
          - "--exclude=coverage"
          - "--exclude=.phpunit.cache"
          - "--exclude=public/assets"
          - "--exclude=storage/logs"
          - "--exclude=storage/framework/cache"
          - "--exclude=storage/framework/sessions"
          - "--exclude=storage/framework/views"

    - name: Clean up temporary cloned repository
      ansible.builtin.file:
        path: "{{ local_repo_path }}"
        state: absent
      delegate_to: localhost
      become: no
      when:
        - not local_repo_exists.stat.exists
        - local_repo_path is defined

    - name: Check if Dockerfile.production exists on server
      ansible.builtin.stat:
        path: "{{ build_repo_path }}/Dockerfile.production"
      register: dockerfile_stat

    - name: Fail if Dockerfile.production not found
      ansible.builtin.fail:
        msg: |
          Dockerfile.production not found at {{ build_repo_path }}/Dockerfile.production

          Please verify:
          1. The repository was copied successfully to {{ build_repo_path }}
          2. The Dockerfile.production file exists in the repository
          3. The source repository path is correct: {{ source_repo_path }}
      when: not dockerfile_stat.stat.exists

    - name: Check if entrypoint script exists on server
      ansible.builtin.stat:
        path: "{{ build_repo_path }}/docker/entrypoint.sh"
      register: entrypoint_stat

    - name: Display entrypoint script status
      ansible.builtin.debug:
        msg: |
          Entrypoint Script Check:
          - Path: {{ build_repo_path }}/docker/entrypoint.sh
          - Exists: {{ entrypoint_stat.stat.exists | default(false) }}
          {% if entrypoint_stat.stat.exists %}
          - Mode: {{ entrypoint_stat.stat.mode | default('unknown') }}
          - Size: {{ entrypoint_stat.stat.size | default(0) }} bytes
          {% endif %}
      when: not ansible_check_mode

    - name: Fail if entrypoint script not found
      ansible.builtin.fail:
        msg: |
          Entrypoint script not found at {{ build_repo_path }}/docker/entrypoint.sh

          This file is required for the Docker image build!

          Please verify:
          1. The file exists in the source repository: {{ source_repo_path }}/docker/entrypoint.sh
          2. The rsync operation copied the file successfully
      when: not entrypoint_stat.stat.exists

    - name: Convert entrypoint script to LF line endings
      ansible.builtin.shell: |
        sed -i 's/\r$//' "{{ build_repo_path }}/docker/entrypoint.sh"
      when: entrypoint_stat.stat.exists
      changed_when: false
      failed_when: false

    - name: Verify entrypoint script has LF line endings
      ansible.builtin.shell: |
        if head -1 "{{ build_repo_path }}/docker/entrypoint.sh" | od -c | grep -q '\\r'; then
          echo "CRLF_DETECTED"
        else
          echo "LF_ONLY"
        fi
      register: line_ending_check
      changed_when: false
      when: entrypoint_stat.stat.exists

    - name: Display line ending check result
      ansible.builtin.debug:
        msg: |
          Entrypoint Script Line Endings:
          - Status: {{ line_ending_check.stdout | default('unknown') }}
          {% if 'CRLF_DETECTED' in line_ending_check.stdout %}
          ⚠️ WARNING: Script still has CRLF line endings after conversion attempt!
          {% else %}
          ✅ Script has LF line endings
          {% endif %}
      when:
        - entrypoint_stat.stat.exists
        - not ansible_check_mode

    - name: Login to Docker registry
      community.docker.docker_login:
        registry_url: "{{ registry_url }}"
        username: "{{ registry_username }}"
        password: "{{ registry_password }}"
      no_log: yes
      register: login_result

    - name: Verify registry login
      ansible.builtin.debug:
        msg: "✅ Successfully logged in to {{ registry_url }}"
      when: not login_result.failed | default(false)

    - name: Fail if registry login failed
      ansible.builtin.fail:
        msg: |
          Failed to login to Docker registry!

          Registry: {{ registry_url }}
          Username: {{ registry_username }}

          Please verify:
          1. The registry is running and accessible
          2. The username and password are correct
          3. The registry URL is correct
      when: login_result.failed | default(false)

    - name: Verify Docker Buildx is available
      ansible.builtin.command: docker buildx version
      register: buildx_check
      changed_when: false
      failed_when: false

    - name: Warn if Buildx is not available
      ansible.builtin.debug:
        msg: |
          ⚠️ Docker Buildx not found. BuildKit features may not work.
          Install with: apt-get install docker-buildx-plugin
      when: buildx_check.rc != 0

    - name: Set build cache option
      ansible.builtin.set_fact:
        build_no_cache: "{{ build_no_cache | default('false') | bool }}"

    - name: Build and push Docker image with BuildKit
      ansible.builtin.shell: |
        set -e
        BUILD_ARGS=""
        {% if build_no_cache | bool %}
        BUILD_ARGS="--no-cache"
        {% endif %}
        DOCKER_BUILDKIT=1 docker buildx build \
          --platform linux/amd64 \
          --file {{ build_repo_path }}/Dockerfile.production \
          --tag {{ registry_url }}/{{ image_name }}:{{ image_tag }} \
          --push \
          --progress=plain \
          $BUILD_ARGS \
          {{ build_repo_path }}
      register: build_result
      environment:
        DOCKER_BUILDKIT: "1"
      changed_when: build_result.rc == 0
      failed_when: build_result.rc != 0
      async: 3600
      poll: 10
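    # --push uploads the image to the registry as part of the buildx build,
    # so no separate docker push step is needed; async/poll lets the long
    # build run without the SSH connection timing out mid-task.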
    - name: Display build result
      ansible.builtin.debug:
        msg: |
          Build result:
          - Image: {{ registry_url }}/{{ image_name }}:{{ image_tag }}
          - Return code: {{ build_result.rc | default('unknown') }}
          - Changed: {{ build_result.changed | default(false) }}
          - Failed: {{ build_result.failed | default(false) }}
      when: build_result is defined

    - name: Display build output on failure
      ansible.builtin.debug:
        msg: |
          Build failed! Output:
          {{ build_result.stdout_lines | default([]) | join('\n') }}

          Error output:
          {{ build_result.stderr_lines | default([]) | join('\n') }}
      when:
        - build_result is defined
        - build_result.rc | default(0) != 0

    - name: Verify image exists locally
      ansible.builtin.command: |
        docker image inspect {{ registry_url }}/{{ image_name }}:{{ image_tag }}
      register: image_check
      changed_when: false
      failed_when: false

    - name: Verify entrypoint script in built image
      shell: |
        CONTAINER_ID=$(docker create {{ registry_url }}/{{ image_name }}:{{ image_tag }} 2>/dev/null) && \
        if docker cp $CONTAINER_ID:/usr/local/bin/entrypoint.sh /tmp/entrypoint_verify.sh 2>&1; then \
          echo "FILE_EXISTS"; \
          ls -la /tmp/entrypoint_verify.sh; \
          head -3 /tmp/entrypoint_verify.sh; \
          file /tmp/entrypoint_verify.sh; \
          rm -f /tmp/entrypoint_verify.sh; \
        else \
          echo "FILE_NOT_FOUND"; \
        fi && \
        docker rm $CONTAINER_ID >/dev/null 2>&1 || true
      register: image_entrypoint_check
      changed_when: false
      failed_when: false

    - name: Display entrypoint verification result
      debug:
        msg: |
          ==========================================
          Entrypoint Script Verification in Built Image
          ==========================================
          Image: {{ registry_url }}/{{ image_name }}:{{ image_tag }}

          Verification Result:
          {{ image_entrypoint_check.stdout | default('Check failed') }}

          {% if 'FILE_NOT_FOUND' in image_entrypoint_check.stdout %}
          ⚠️ CRITICAL: Entrypoint script NOT FOUND in built image!

          This means the Docker build did not copy the entrypoint script.
          Possible causes:
          1. The COPY command in Dockerfile.production failed silently
          2. The docker/entrypoint.sh file was not in the build context
          3. There was an issue with the multi-stage build

          Please check:
          1. Build logs above for COPY errors
          2. Verify docker/entrypoint.sh exists: ls -la {{ build_repo_path }}/docker/entrypoint.sh
          3. Rebuild with verbose output to see COPY step
          {% elif 'FILE_EXISTS' in image_entrypoint_check.stdout %}
          ✅ Entrypoint script found in built image
          {% endif %}
          ==========================================

    - name: Display image information
      ansible.builtin.debug:
        msg: |
          ✅ Image built and pushed successfully!

          Registry: {{ registry_url }}
          Image: {{ image_name }}:{{ image_tag }}
          Local: {{ 'Available' if image_check.rc == 0 else 'Not found locally' }}

          Next steps:
          1. Run setup-infrastructure.yml to deploy the application stack
          2. Or manually deploy using docker-compose
      when: image_check.rc == 0

    - name: Warn if image not found locally
      ansible.builtin.debug:
        msg: |
          ⚠️ Image was pushed but not found locally.
          This is normal if the image was pushed to a remote registry.

          Verify the image exists in the registry:
          curl -u {{ registry_username }}:{{ registry_password }} http://{{ registry_url }}/v2/{{ image_name }}/tags/list
      when: image_check.rc != 0
@@ -0,0 +1,14 @@
---
# Check and Restart Gitea if Unhealthy
# Wrapper Playbook for gitea role restart tasks
- hosts: production
  gather_facts: yes
  become: no
  tasks:
    - name: Include gitea restart tasks
      ansible.builtin.include_role:
        name: gitea
        tasks_from: restart
      tags:
        - gitea
        - restart
@@ -0,0 +1,14 @@
---
# Check Container Logs for Troubleshooting
# Wrapper Playbook for application role logs tasks
- hosts: production
  gather_facts: no
  become: no
  tasks:
    - name: Include application logs tasks
      ansible.builtin.include_role:
        name: application
        tasks_from: logs
      tags:
        - application
        - logs
@@ -0,0 +1,15 @@
---
# Check Container Status After Code Sync
# Wrapper Playbook for application role health_check tasks
- hosts: production
  gather_facts: no
  become: no
  tasks:
    - name: Include application health_check tasks
      ansible.builtin.include_role:
        name: application
        tasks_from: health_check
      tags:
        - application
        - health
        - status
@@ -0,0 +1,83 @@
---
- name: Check .env File and Environment Variables
  hosts: production
  gather_facts: no
  become: no

  vars:
    application_stack_dest: "{{ app_stack_path | default(stacks_base_path + '/production') }}"
    application_compose_suffix: "production.yml"

  tasks:
    - name: Check if .env file exists
      stat:
        path: "{{ application_stack_dest }}/.env"
      delegate_to: "{{ inventory_hostname }}"
      register: env_file_exists

    - name: Display .env file status
      debug:
        msg: ".env file exists: {{ env_file_exists.stat.exists }}"

    - name: Read .env file content (first 50 lines)
      shell: |
        head -50 {{ application_stack_dest }}/.env 2>&1 || echo "FILE_NOT_FOUND"
      delegate_to: "{{ inventory_hostname }}"
      register: env_file_content
      changed_when: false
      when: env_file_exists.stat.exists

    - name: Display .env file content
      debug:
        msg: |
          .env file content:
          {{ env_file_content.stdout }}

    - name: Check for DB_DATABASE in .env file
      shell: |
        grep -E "^DB_DATABASE=|^DB_NAME=" {{ application_stack_dest }}/.env 2>&1 || echo "NOT_FOUND"
      delegate_to: "{{ inventory_hostname }}"
      register: db_database_check
      changed_when: false
      when: env_file_exists.stat.exists

    - name: Display DB_DATABASE check
      debug:
        msg: "DB_DATABASE in .env: {{ db_database_check.stdout }}"

    - name: Check environment variables in queue-worker container
      shell: |
        docker compose -f {{ application_stack_dest }}/docker-compose.base.yml -f {{ application_stack_dest }}/docker-compose.{{ application_compose_suffix }} exec -T queue-worker env | grep -E "^DB_" | sort
      register: queue_worker_env
      changed_when: false
      failed_when: false
      ignore_errors: yes

    - name: Display queue-worker environment variables
      debug:
        msg: |
          Queue-Worker DB Environment Variables:
          {{ queue_worker_env.stdout | default('CONTAINER_NOT_RUNNING') }}
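    # docker compose only picks up a .env file from the compose project
    # directory, so the remaining checks confirm the file actually sits next
    # to the compose files.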
    - name: Check docker-compose project directory
      shell: |
        cd {{ application_stack_dest }} && pwd
      delegate_to: "{{ inventory_hostname }}"
      register: project_dir
      changed_when: false

    - name: Display project directory
      debug:
        msg: "Docker Compose project directory: {{ project_dir.stdout }}"

    - name: Check if .env file is in project directory
      shell: |
        test -f {{ application_stack_dest }}/.env && echo "EXISTS" || echo "MISSING"
      delegate_to: "{{ inventory_hostname }}"
      register: env_in_project_dir
      changed_when: false

    - name: Display .env file location check
      debug:
        msg: ".env file in project directory: {{ env_in_project_dir.stdout }}"
@@ -0,0 +1,18 @@
---
# Check Final Container Status
# Wrapper Playbook for application role health_check tasks (final status)
- hosts: production
  gather_facts: no
  become: no
  vars:
    application_health_check_final: true
    application_health_check_logs_tail: 5
  tasks:
    - name: Include application health_check tasks (final)
      ansible.builtin.include_role:
        name: application
        tasks_from: health_check
      tags:
        - application
        - health
        - final
@@ -0,0 +1,15 @@
---
# Check Traefik ACME Challenge Logs
# Wrapper Playbook for traefik role logs tasks
- hosts: production
  gather_facts: yes
  become: no
  tasks:
    - name: Include traefik logs tasks
      ansible.builtin.include_role:
        name: traefik
        tasks_from: logs
      tags:
        - traefik
        - logs
        - acme
@@ -0,0 +1,17 @@
---
# Check Worker and Scheduler Logs
# Wrapper Playbook for application role logs tasks
- hosts: production
  gather_facts: no
  become: no
  vars:
    application_logs_check_vendor: true
  tasks:
    - name: Include application logs tasks (worker)
      ansible.builtin.include_role:
        name: application
        tasks_from: logs
      tags:
        - application
        - logs
        - worker
@@ -0,0 +1,60 @@
---
# Deploy Gitea Stack Configuration
# Updates docker-compose.yml and restarts containers with new settings
- name: Deploy Gitea Stack Configuration
  hosts: production
  gather_facts: yes
  become: no
  vars:
    gitea_stack_path: "{{ stacks_base_path }}/gitea"
    traefik_auto_restart: false
    gitea_auto_restart: false

  tasks:
    - name: Check if Gitea stack directory exists
      ansible.builtin.stat:
        path: "{{ gitea_stack_path }}"
      register: gitea_stack_dir
      failed_when: false

    - name: Fail if Gitea stack directory does not exist
      ansible.builtin.fail:
        msg: "Gitea stack directory does not exist: {{ gitea_stack_path }}"
      when: not gitea_stack_dir.stat.exists

    - name: Sync Gitea docker-compose.yml
      ansible.builtin.synchronize:
        src: "{{ playbook_dir }}/../../stacks/gitea/docker-compose.yml"
        dest: "{{ gitea_stack_path }}/docker-compose.yml"
        mode: push
      register: compose_synced

    - name: Restart Gitea and Postgres containers to apply new configuration
      ansible.builtin.shell: |
        cd {{ gitea_stack_path }}
        docker compose up -d --force-recreate gitea postgres
      register: gitea_restart
      changed_when: gitea_restart.rc == 0
      when: compose_synced.changed | default(false) | bool

    - name: Wait for Gitea to be ready
      ansible.builtin.wait_for:
        timeout: 60
        delay: 5
      when: gitea_restart.changed | default(false) | bool

    - name: Display result
      ansible.builtin.debug:
        msg: |
          ================================================================================
          GITEA STACK CONFIGURATION DEPLOYED
          ================================================================================

          Changes applied:
          - Gitea Connection Pool: MAX_OPEN_CONNS=50, MAX_IDLE_CONNS=30, CONN_MAX_LIFETIME=600, CONN_MAX_IDLE_TIME=300
          - Postgres Timeouts: authentication_timeout=180s, statement_timeout=30s, idle_in_transaction_timeout=30s

          Containers restarted: {{ 'YES' if (gitea_restart.changed | default(false) | bool) else 'NO (no changes)' }}

          ================================================================================
@@ -0,0 +1,14 @@
---
# Deploy Traefik Configuration Files
# Wrapper Playbook for traefik role config tasks
- hosts: production
  gather_facts: yes
  become: no
  tasks:
    - name: Include traefik config tasks
      ansible.builtin.include_role:
        name: traefik
        tasks_from: config
      tags:
        - traefik
        - config
18
deployment/legacy/ansible/ansible/playbooks/deploy/code.yml
Normal file

@@ -0,0 +1,18 @@
---
# Deploy Application Code via Git
# Wrapper Playbook for application role deploy_code tasks
- hosts: "{{ deployment_hosts | default('production') }}"
  gather_facts: yes
  become: no
  vars:
    application_deployment_method: git
    deployment_environment: "{{ deployment_environment | default('production') }}"
  tasks:
    - name: Include application deploy_code tasks
      ansible.builtin.include_role:
        name: application
        tasks_from: deploy_code
      tags:
        - application
        - deploy
        - code
@@ -0,0 +1,30 @@
---
# Complete Deployment Playbook
# Combines all deployment steps into a single Ansible run:
# 1. Deploy Application Code
# 2. Deploy Docker Image
# 3. Install Composer Dependencies
#
# This reduces SSH connections and Ansible overhead compared to running
# three separate playbook calls.
#
# Usage:
#   ansible-playbook -i inventory/production.yml playbooks/deploy-complete.yml \
#     -e "deployment_environment=staging" \
#     -e "deployment_hosts=production" \
#     -e "image_tag=latest" \
#     -e "docker_registry=registry.michaelschiemer.de" \
#     -e "docker_registry_username=admin" \
#     -e "docker_registry_password=password" \
#     -e "git_branch=staging" \
#     --vault-password-file /tmp/vault_pass

# Step 1: Deploy Application Code
- import_playbook: deploy-application-code.yml

# Step 2: Deploy Docker Image
- import_playbook: deploy-image.yml

# Step 3: Install Composer Dependencies
- import_playbook: install-composer-dependencies.yml
513
deployment/legacy/ansible/ansible/playbooks/deploy/image.yml
Normal file

@@ -0,0 +1,513 @@
---
- name: Deploy Docker Image to Application Stack
  hosts: "{{ deployment_hosts | default('production') }}"
  gather_facts: yes
  become: no

  vars:
    # Application code directory (where docker-compose files are located)
    application_code_dest: "/home/deploy/michaelschiemer/current"
    application_compose_suffix: >-
      {%- if deployment_environment == 'staging' -%}
      staging.yml
      {%- else -%}
      production.yml
      {%- endif -%}
    # Image to deploy (can be overridden via -e image_tag=...)
    image_tag: "{{ image_tag | default('latest') }}"
    docker_registry: "{{ docker_registry | default('registry.michaelschiemer.de') }}"
    app_name_default: "framework"
    # Deployment environment (staging or production)
    deployment_environment: "{{ deployment_environment | default('production') }}"

  tasks:
    - name: Check if vault file exists locally
      stat:
        path: "{{ playbook_dir }}/../secrets/{{ deployment_environment }}.vault.yml"
      delegate_to: localhost
      register: vault_file_stat
      become: no

    - name: Load secrets from vault file if exists
      include_vars:
        file: "{{ playbook_dir }}/../secrets/{{ deployment_environment }}.vault.yml"
      when: vault_file_stat.stat.exists
      no_log: yes
      ignore_errors: yes
      delegate_to: localhost
      become: no

    - name: Set app_name from provided value or default
      ansible.builtin.set_fact:
        app_name: "{{ app_name if (app_name is defined and app_name != '') else app_name_default }}"

    - name: Extract registry URL from docker-compose file (for image deployment)
      ansible.builtin.shell: |
        cd {{ application_code_dest }}
        grep -h "image:" docker-compose.base.yml docker-compose.{{ application_compose_suffix }} 2>/dev/null | \
        grep -E "{{ app_name }}" | head -1 | \
        sed -E 's/.*image:\s*([^\/]+).*/\1/' | \
        sed -E 's/\/.*$//' || echo "localhost:5000"
      register: compose_registry_url
      changed_when: false
      failed_when: false

    - name: Set local registry (where containers expect the image)
      ansible.builtin.set_fact:
        local_registry: "{{ (compose_registry_url.stdout | trim) if (compose_registry_url.stdout | trim != '') else 'localhost:5000' }}"

    - name: Set source registry (where workflow pushes the image)
      ansible.builtin.set_fact:
        source_registry: "{{ docker_registry }}"

    - name: Set deploy_image from source registry (for pulling)
      ansible.builtin.set_fact:
        deploy_image: "{{ source_registry }}/{{ app_name }}:{{ image_tag }}"

    - name: Set local_image (where containers expect the image)
      ansible.builtin.set_fact:
        local_image: "{{ local_registry }}/{{ app_name }}:{{ image_tag }}"
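    # Two registries are in play: the image is pulled from the source registry
    # (where the workflow pushed it) and, if that differs from the registry
    # name referenced in the compose files, re-tagged and pushed to the local
    # one so the containers find it under the expected name.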
    - name: Set database and MinIO variables from vault or defaults
      ansible.builtin.set_fact:
        db_username: "{{ db_username | default(vault_db_user | default('postgres')) }}"
        db_password: "{{ db_password | default(vault_db_password | default('')) }}"
        minio_root_user: "{{ minio_root_user | default(vault_minio_root_user | default('minioadmin')) }}"
        minio_root_password: "{{ minio_root_password | default(vault_minio_root_password | default('')) }}"
        secrets_dir: "{{ secrets_dir | default('./secrets') }}"
        git_repository_url: "{{ git_repository_url | default(vault_git_repository_url | default('https://git.michaelschiemer.de/michael/michaelschiemer.git')) }}"
        git_branch: >-
          {%- if deployment_environment == 'staging' -%}
          staging
          {%- else -%}
          main
          {%- endif -%}
        git_token: "{{ git_token | default(vault_git_token | default('')) }}"
        git_username: "{{ git_username | default(vault_git_username | default('')) }}"
        git_password: "{{ git_password | default(vault_git_password | default('')) }}"
      no_log: yes

    - name: Determine Docker registry password from vault or extra vars
      ansible.builtin.set_fact:
        registry_password: >-
          {%- if docker_registry_password is defined and docker_registry_password | string | trim != '' -%}
          {{ docker_registry_password }}
          {%- elif vault_docker_registry_password is defined and vault_docker_registry_password | string | trim != '' -%}
          {{ vault_docker_registry_password }}
          {%- else -%}
          {{ '' }}
          {%- endif -%}
      no_log: yes

    - name: Check if registry is accessible
      ansible.builtin.uri:
        url: "https://{{ docker_registry }}/v2/"
        method: GET
        status_code: [200, 401]
        timeout: 5
        validate_certs: no
      register: registry_check
      ignore_errors: yes
      delegate_to: "{{ inventory_hostname }}"
      become: no

    - name: Set registry accessible flag
      ansible.builtin.set_fact:
        registry_accessible: "{{ 'true' if (registry_check.status is defined and registry_check.status | int in [200, 401]) else 'false' }}"

    - name: Login to Docker registry
      community.docker.docker_login:
        registry_url: "{{ docker_registry }}"
        username: "{{ docker_registry_username | default('admin') }}"
        password: "{{ registry_password }}"
      when:
        - registry_password | string | trim != ''
        - registry_accessible == 'true'
      no_log: yes
      ignore_errors: yes
      register: docker_login_result

    - name: Display image pull information
      ansible.builtin.debug:
        msg:
          - "Attempting to pull image: {{ deploy_image }}"
          - "Source registry: {{ source_registry }}"
          - "Local registry: {{ local_registry }}"
          - "Registry accessible: {{ registry_accessible | default('unknown') }}"
      when: registry_accessible is defined and registry_accessible == 'true'

    - name: Check if image already exists locally
      ansible.builtin.shell: |
        docker images --format "{{ '{{' }}.Repository{{ '}}' }}:{{ '{{' }}.Tag{{ '}}' }}" | grep -E "^{{ deploy_image | regex_escape }}$" || echo "NOT_FOUND"
      register: image_exists_before_pull
      when: registry_accessible is defined and registry_accessible == 'true'
      changed_when: false
      failed_when: false

    - name: Display image existence check
      ansible.builtin.debug:
        msg:
          - "Image exists before pull: {{ image_exists_before_pull.stdout | default('unknown') }}"
          - "Will pull: {{ 'YES' if (image_exists_before_pull.stdout | default('') == 'NOT_FOUND') else 'NO (already exists)' }}"
      when: registry_accessible is defined and registry_accessible == 'true'

    - name: Pull Docker image from registry using shell command
      ansible.builtin.shell: |
        docker pull {{ deploy_image }} 2>&1
      when:
        - registry_accessible is defined and registry_accessible == 'true'
      register: image_pull_result
      ignore_errors: yes
      failed_when: false
      changed_when: image_pull_result.rc == 0

    - name: Display pull result
      ansible.builtin.debug:
        msg:
          - "Pull command exit code: {{ image_pull_result.rc | default('unknown') }}"
          - "Pull stdout: {{ image_pull_result.stdout | default('none') }}"
          - "Pull stderr: {{ image_pull_result.stderr | default('none') }}"
          - "Pull succeeded: {{ 'YES' if (image_pull_result.rc | default(1) == 0) else 'NO' }}"
      when: registry_accessible is defined and registry_accessible == 'true'

    - name: Verify image exists locally after pull
      community.docker.docker_image_info:
        name: "{{ deploy_image }}"
      register: image_info
      when: registry_accessible is defined and registry_accessible == 'true'
      ignore_errors: yes
      failed_when: false

    - name: Check if image exists by inspecting docker images
      ansible.builtin.shell: |
        docker images --format "{{ '{{' }}.Repository{{ '}}' }}:{{ '{{' }}.Tag{{ '}}' }}" | grep -E "^{{ deploy_image | regex_escape }}$" || echo "NOT_FOUND"
      register: image_check
      when: registry_accessible is defined and registry_accessible == 'true'
      changed_when: false
      failed_when: false

    - name: Display image verification results
      ansible.builtin.debug:
        msg:
          - "Image info result: {{ image_info | default('not executed') }}"
          - "Image check result: {{ image_check.stdout | default('not executed') }}"
          - "Image exists: {{ 'YES' if (image_check.stdout | default('') != 'NOT_FOUND' and image_check.stdout | default('') != '') else 'NO' }}"
      when: registry_accessible is defined and registry_accessible == 'true'

    - name: Fail if image was not pulled successfully
      ansible.builtin.fail:
        msg: |
          Failed to pull image {{ deploy_image }} from registry.
          The image does not exist locally after pull attempt.

          Pull command result:
          - Exit code: {{ image_pull_result.rc | default('unknown') }}
          - Stdout: {{ image_pull_result.stdout | default('none') }}
          - Stderr: {{ image_pull_result.stderr | default('none') }}

          Image check result: {{ image_check.stdout | default('unknown') }}

          Please check:
          1. Does the image exist in {{ source_registry }}?
          2. Are registry credentials correct?
          3. Is the registry accessible?
          4. Check the pull command output above for specific error messages.
      when:
        - registry_accessible is defined and registry_accessible == 'true'
        - (image_check.stdout | default('') == 'NOT_FOUND' or image_check.stdout | default('') == '')

    - name: Tag image for local registry (if source and local registry differ)
      community.docker.docker_image:
        name: "{{ deploy_image }}"
        repository: "{{ local_image }}"
        tag: "{{ image_tag }}"
        source: local
      when:
        - source_registry != local_registry
        - image_check.stdout is defined
        - image_check.stdout != 'NOT_FOUND'
        - image_check.stdout != ''
      register: image_tag_result

    - name: Push image to local registry (if source and local registry differ)
      community.docker.docker_image:
        name: "{{ local_image }}"
        push: true
        source: local
      when:
        - source_registry != local_registry
        - image_tag_result.changed | default(false)
      ignore_errors: yes
      failed_when: false

    - name: Update docker-compose file with new image tag
      ansible.builtin.replace:
        path: "{{ application_code_dest }}/docker-compose.{{ application_compose_suffix }}"
        regexp: '^(\s+image:\s+)({{ local_registry }}/{{ app_name }}:)(.*)$'
        replace: '\1\2{{ image_tag }}'
      register: compose_update_result
      failed_when: false
      changed_when: compose_update_result.changed | default(false)
|
||||
|
||||
- name: Update docker-compose file with new image (alternative pattern - any registry)
|
||||
ansible.builtin.replace:
|
||||
path: "{{ application_code_dest }}/docker-compose.{{ application_compose_suffix }}"
|
||||
regexp: '^(\s+image:\s+)([^\/]+\/{{ app_name }}:)(.*)$'
|
||||
replace: '\1{{ local_registry }}/{{ app_name }}:{{ image_tag }}'
|
||||
register: compose_update_alt
|
||||
when: compose_update_result.changed == false
|
||||
failed_when: false
|
||||
changed_when: compose_update_alt.changed | default(false)
|
||||
|
||||
# Ensure PostgreSQL Staging Stack is running (creates the postgres-staging-internal network)
- name: Check if PostgreSQL Staging Stack directory exists
  ansible.builtin.stat:
    path: "{{ stacks_base_path | default('/home/deploy/deployment/stacks') }}/postgresql-staging/docker-compose.yml"
  register: postgres_staging_compose_exists
  changed_when: false
  when: deployment_environment | default('') == 'staging'

- name: Check if PostgreSQL Staging Stack is running
  ansible.builtin.shell: |
    cd {{ stacks_base_path | default('/home/deploy/deployment/stacks') }}/postgresql-staging
    docker compose ps --format json 2>/dev/null | grep -q '"State":"running"' && echo "RUNNING" || echo "NOT_RUNNING"
  register: postgres_staging_status
  changed_when: false
  failed_when: false
  when:
    - deployment_environment | default('') == 'staging'
    - postgres_staging_compose_exists.stat.exists | default(false) | bool

- name: Start PostgreSQL Staging Stack if not running
  ansible.builtin.shell: |
    cd {{ stacks_base_path | default('/home/deploy/deployment/stacks') }}/postgresql-staging
    docker compose up -d
  register: postgres_staging_start
  changed_when: postgres_staging_start.rc == 0
  when:
    - deployment_environment | default('') == 'staging'
    - postgres_staging_compose_exists.stat.exists | default(false) | bool
    - postgres_staging_status.stdout | default('') == 'NOT_RUNNING'

- name: Wait for PostgreSQL Staging Stack to be ready
  ansible.builtin.wait_for:
    timeout: 30
    delay: 2
  when:
    - deployment_environment | default('') == 'staging'
    - postgres_staging_start.changed | default(false) | bool

# Extract and create external networks (fallback if the PostgreSQL Stack doesn't create them)
- name: Extract external networks from docker-compose files
  ansible.builtin.shell: |
    cd {{ application_code_dest }}
    grep -h -B 5 "external: true" docker-compose.base.yml docker-compose.{{ application_compose_suffix }} 2>/dev/null | \
      grep -E "^\s+[a-zA-Z0-9_-]+:\s*$" | \
      sed 's/://' | \
      sed 's/^[[:space:]]*//' | \
      sort -u || echo ""
  register: external_networks_raw
  changed_when: false
  failed_when: false

- name: "Extract external network names (with name field)"
|
||||
ansible.builtin.shell: |
|
||||
cd {{ application_code_dest }}
|
||||
grep -A 2 "external: true" docker-compose.base.yml docker-compose.{{ application_compose_suffix }} 2>/dev/null | \
|
||||
grep "name:" | \
|
||||
sed 's/.*name:[[:space:]]*//' | \
|
||||
sed 's/"//g' | \
|
||||
sort -u || echo ""
|
||||
register: external_network_names
|
||||
changed_when: false
|
||||
failed_when: false
|
||||
|
||||
- name: Set list of external networks to create
  ansible.builtin.set_fact:
    external_networks_to_create: >-
      {%- set networks_from_keys = external_networks_raw.stdout | trim | split('\n') | select('match', '.+') | list -%}
      {%- set networks_from_names = external_network_names.stdout | trim | split('\n') | select('match', '.+') | list -%}
      {%- set all_networks = (networks_from_keys + networks_from_names) | unique | list -%}
      {%- set default_networks = ['traefik-public', 'app-internal'] -%}
      {%- set final_networks = (all_networks + default_networks) | unique | list -%}
      {{ final_networks }}

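# Example: if the compose files declare no external networks, the fact falls
# back to the defaults, i.e. external_networks_to_create == ['traefik-public', 'app-internal'].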
# Create external networks (fallback if they weren't created by their respective stacks)
# Note: This is a fallback - ideally networks are created by their stacks (e.g., postgres-staging-internal by the PostgreSQL Staging Stack)
- name: Ensure Docker networks exist
  community.docker.docker_network:
    name: "{{ item }}"
    state: present
    driver: bridge
  loop: "{{ external_networks_to_create | default(['traefik-public', 'app-internal']) }}"
  ignore_errors: yes

- name: Check if .env file exists
  ansible.builtin.stat:
    path: "{{ application_code_dest }}/.env"
  register: env_file_exists

- name: Create minimal .env file if it doesn't exist
  ansible.builtin.copy:
    dest: "{{ application_code_dest }}/.env"
    content: |
      # Minimal .env file for Docker Compose
      # This file should be properly configured by the application setup playbook
      DB_USERNAME={{ db_username | default('postgres') }}
      DB_PASSWORD={{ db_password | default('') }}
      MINIO_ROOT_USER={{ minio_root_user | default('minioadmin') }}
      MINIO_ROOT_PASSWORD={{ minio_root_password | default('') }}
      SECRETS_DIR={{ secrets_dir | default('./secrets') }}
      GIT_REPOSITORY_URL={{ git_repository_url | default('') }}
      GIT_TOKEN={{ git_token | default('') }}
      GIT_USERNAME={{ git_username | default('') }}
      GIT_PASSWORD={{ git_password | default('') }}
    owner: "{{ ansible_user }}"
    group: "{{ ansible_user }}"
    mode: '0600'
  when: not env_file_exists.stat.exists
  become: yes

- name: Check if Docker daemon.json exists
  ansible.builtin.stat:
    path: /etc/docker/daemon.json
  register: docker_daemon_json
  become: yes

- name: Read existing Docker daemon.json
  ansible.builtin.slurp:
    src: /etc/docker/daemon.json
  register: docker_daemon_config
  when: docker_daemon_json.stat.exists
  become: yes
  changed_when: false

- name: Load existing Docker daemon configuration as a dict
  ansible.builtin.set_fact:
    docker_daemon_config_dict: "{{ docker_daemon_config.content | b64decode | from_json if (docker_daemon_json.stat.exists and docker_daemon_config.content is defined) else {} }}"

- name: Build insecure registries list
  ansible.builtin.set_fact:
    insecure_registries_list: >-
      {%- set existing = docker_daemon_config_dict.get('insecure-registries', []) | list -%}
      {%- set needed_registries = [local_registry] | list -%}
      {%- set all_registries = (existing + needed_registries) | unique | list -%}
      {{ all_registries }}

- name: Merge insecure registry into Docker daemon config
  ansible.builtin.set_fact:
    docker_daemon_config_merged: "{{ docker_daemon_config_dict | combine({'insecure-registries': insecure_registries_list}) }}"

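# Resulting /etc/docker/daemon.json (illustrative, assuming local_registry is
# registry.local:5000 and the file was previously empty):
#   {
#     "insecure-registries": [
#       "registry.local:5000"
#     ]
#   }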
- name: Update Docker daemon.json with insecure registry
  ansible.builtin.copy:
    dest: /etc/docker/daemon.json
    content: "{{ docker_daemon_config_merged | to_json(indent=2) }}"
    mode: '0644'
  when: docker_daemon_config_merged != docker_daemon_config_dict
  become: yes
  register: docker_daemon_updated

- name: Restart Docker daemon if configuration changed
  ansible.builtin.systemd:
    name: docker
    state: restarted
  when: docker_daemon_updated.changed | default(false)
  become: yes
  ignore_errors: yes

- name: Wait for Docker daemon to be ready
  ansible.builtin.shell: docker ps > /dev/null 2>&1
  register: docker_ready
  until: docker_ready.rc == 0
  retries: 30
  delay: 2
  when: docker_daemon_updated.changed | default(false)
  ignore_errors: yes
  changed_when: false

- name: Set list of registries to login to (source registry for pulling, local registry for pushing)
  ansible.builtin.set_fact:
    registries_to_login: >-
      {%- if source_registry is defined and local_registry is defined -%}
      {%- if source_registry != local_registry -%}
      {%- set reg_list = [source_registry, local_registry] -%}
      {%- else -%}
      {%- set reg_list = [local_registry] -%}
      {%- endif -%}
      {%- set final_list = reg_list | unique | list -%}
      {{ final_list }}
      {%- else -%}
      {%- set default_list = [docker_registry | default('localhost:5000')] -%}
      {{ default_list }}
      {%- endif -%}

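# Example (hypothetical values): with source_registry=ghcr.io and
# local_registry=registry.local:5000 the fact becomes
# ['ghcr.io', 'registry.local:5000']; when both are equal it collapses to a
# single entry.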
- name: Login to all Docker registries before compose up
  community.docker.docker_login:
    registry_url: "{{ item }}"
    username: "{{ docker_registry_username | default('admin') }}"
    password: "{{ registry_password }}"
  when:
    - registry_password | string | trim != ''
    - registry_accessible == 'true'
  loop: "{{ registries_to_login | default([docker_registry]) }}"
  no_log: yes
  register: docker_login_results
  failed_when: false

- name: Display login results
  ansible.builtin.debug:
    msg: "Docker login to {{ item.item }}: {% if item.failed | default(false) %}FAILED ({{ item.msg | default('unknown error') }}){% else %}SUCCESS{% endif %}"
  when:
    - registry_password | string | trim != ''
    - registry_accessible == 'true'
  loop: "{{ docker_login_results.results | default([]) }}"
  loop_control:
    label: "{{ item.item }}"

- name: Deploy application stack with new image
  ansible.builtin.shell: |
    cd {{ application_code_dest }}
    docker compose -f docker-compose.base.yml -f docker-compose.{{ application_compose_suffix }} up -d --pull missing --force-recreate --remove-orphans
  register: compose_deploy_result
  changed_when: true
  environment:
    DB_USERNAME: "{{ db_username | default('postgres') }}"
    DB_PASSWORD: "{{ db_password | default('') }}"
    MINIO_ROOT_USER: "{{ minio_root_user | default('minioadmin') }}"
    MINIO_ROOT_PASSWORD: "{{ minio_root_password | default('') }}"
    SECRETS_DIR: "{{ secrets_dir | default('./secrets') }}"
    GIT_REPOSITORY_URL: "{{ git_repository_url | default('') }}"
    GIT_BRANCH: "{{ git_branch | default('main') }}"
    GIT_TOKEN: "{{ git_token | default('') }}"
    GIT_USERNAME: "{{ git_username | default('') }}"
    GIT_PASSWORD: "{{ git_password | default('') }}"

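# For manual debugging, the same deployment can be reproduced on the host
# (sketch; assumes the environment variables above are exported in the shell
# and that application_code_dest is /home/deploy/app, a hypothetical path):
#   cd /home/deploy/app
#   docker compose -f docker-compose.base.yml -f docker-compose.production.yml up -d --pull missing --force-recreate --remove-orphans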
- name: Wait for containers to start
  ansible.builtin.pause:
    seconds: 15

- name: Check container status
  ansible.builtin.shell: |
    cd {{ application_code_dest }}
    docker compose -f docker-compose.base.yml -f docker-compose.{{ application_compose_suffix }} ps
  register: container_status
  changed_when: false

- name: Display deployment summary
  ansible.builtin.debug:
    msg: |
      ========================================
      Image Deployment Summary
      ========================================
      Image: {{ deploy_image }}
      Tag: {{ image_tag }}
      Environment: {{ deployment_environment }}
      Stack: {{ application_code_dest }}
      Status: SUCCESS
      ========================================

      Container Status:
      {{ container_status.stdout | default('Not available') }}
      ========================================

408
deployment/legacy/ansible/ansible/playbooks/diagnose/gitea.yml
Normal file
@@ -0,0 +1,408 @@
---
# Consolidated Gitea Diagnosis Playbook
# Consolidates: diagnose-gitea-timeouts.yml, diagnose-gitea-timeout-deep.yml,
#   diagnose-gitea-timeout-live.yml, diagnose-gitea-timeouts-complete.yml,
#   comprehensive-gitea-diagnosis.yml
#
# Usage:
#   # Basic diagnosis (default)
#   ansible-playbook -i inventory/production.yml playbooks/diagnose/gitea.yml
#
#   # Deep diagnosis (includes resource checks, multiple connection tests)
#   ansible-playbook -i inventory/production.yml playbooks/diagnose/gitea.yml --tags deep
#
#   # Live diagnosis (monitors during request)
#   ansible-playbook -i inventory/production.yml playbooks/diagnose/gitea.yml --tags live
#
#   # Complete diagnosis (all checks)
#   ansible-playbook -i inventory/production.yml playbooks/diagnose/gitea.yml --tags complete

- name: Diagnose Gitea Issues
  hosts: production
  gather_facts: yes
  become: no
  vars:
    gitea_stack_path: "{{ stacks_base_path }}/gitea"
    traefik_stack_path: "{{ stacks_base_path }}/traefik"
    gitea_url: "https://{{ gitea_domain }}"
    gitea_container_name: "gitea"
    traefik_container_name: "traefik"

  tasks:
    # ========================================
    # BASIC DIAGNOSIS (always runs)
    # ========================================
    - name: Display diagnostic plan
      ansible.builtin.debug:
        msg: |
          ================================================================================
          GITEA DIAGNOSIS
          ================================================================================

          Running diagnosis with tags: {{ ansible_run_tags | default(['all']) }}

          Basic checks (always):
          - Container status
          - Health endpoints
          - Network connectivity
          - Service discovery

          Deep checks (--tags deep):
          - Resource usage
          - Multiple connection tests
          - Log analysis

          Live checks (--tags live):
          - Real-time monitoring during request

          Complete checks (--tags complete):
          - All checks including app.ini, ServersTransport, etc.

          ================================================================================

    - name: Check Gitea container status
      ansible.builtin.shell: |
        cd {{ gitea_stack_path }}
        docker compose ps {{ gitea_container_name }}
      register: gitea_status
      changed_when: false

    - name: Check Traefik container status
      ansible.builtin.shell: |
        cd {{ traefik_stack_path }}
        docker compose ps {{ traefik_container_name }}
      register: traefik_status
      changed_when: false

    - name: Check Gitea health endpoint (direct from container)
      ansible.builtin.shell: |
        cd {{ gitea_stack_path }}
        docker compose exec -T {{ gitea_container_name }} curl -f http://localhost:3000/api/healthz 2>&1 || echo "HEALTH_CHECK_FAILED"
      register: gitea_health_direct
      changed_when: false
      failed_when: false

    - name: Check Gitea health endpoint (via Traefik)
      ansible.builtin.uri:
        url: "{{ gitea_url }}/api/healthz"
        method: GET
        status_code: [200]
        validate_certs: false
        timeout: 10
      register: gitea_health_traefik
      failed_when: false
      changed_when: false

    - name: Check if Gitea is in traefik-public network
      ansible.builtin.shell: |
        docker network inspect traefik-public --format '{{ '{{' }}range .Containers{{ '}}' }}{{ '{{' }}.Name{{ '}}' }} {{ '{{' }}end{{ '}}' }}' 2>/dev/null | grep -q {{ gitea_container_name }} && echo "YES" || echo "NO"
      register: gitea_in_traefik_network
      changed_when: false
      failed_when: false

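    # wget is used here rather than curl, presumably because the stock Traefik
    # image ships a busybox wget but no curl binary (assumption).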
    - name: Test connection from Traefik to Gitea
      ansible.builtin.shell: |
        cd {{ traefik_stack_path }}
        docker compose exec -T {{ traefik_container_name }} wget -qO- --timeout=5 http://{{ gitea_container_name }}:3000/api/healthz 2>&1 || echo "CONNECTION_FAILED"
      register: traefik_gitea_connection
      changed_when: false
      failed_when: false

    - name: Check Traefik service discovery for Gitea
      ansible.builtin.shell: |
        cd {{ traefik_stack_path }}
        docker compose exec -T {{ traefik_container_name }} traefik show providers docker 2>/dev/null | grep -i "gitea" || echo "NOT_FOUND"
      register: traefik_gitea_service
      changed_when: false
      failed_when: false

    # ========================================
    # DEEP DIAGNOSIS (--tags deep)
    # ========================================
    - name: Check Gitea container resources (CPU/Memory)
      ansible.builtin.shell: |
        docker stats {{ gitea_container_name }} --no-stream --format 'CPU: {{ '{{' }}.CPUPerc{{ '}}' }} | Memory: {{ '{{' }}.MemUsage{{ '}}' }}' 2>/dev/null || echo "Could not get stats"
      register: gitea_resources
      changed_when: false
      failed_when: false
      tags:
        - deep
        - complete

    - name: Check Traefik container resources (CPU/Memory)
      ansible.builtin.shell: |
        docker stats {{ traefik_container_name }} --no-stream --format 'CPU: {{ '{{' }}.CPUPerc{{ '}}' }} | Memory: {{ '{{' }}.MemUsage{{ '}}' }}' 2>/dev/null || echo "Could not get stats"
      register: traefik_resources
      changed_when: false
      failed_when: false
      tags:
        - deep
        - complete

    - name: Test Gitea direct connection (multiple attempts)
      ansible.builtin.shell: |
        # seq instead of the bash-only {1..5} so this also works under /bin/sh
        for i in $(seq 1 5); do
          echo "=== Attempt $i ==="
          cd {{ gitea_stack_path }}
          timeout 5 docker compose exec -T {{ gitea_container_name }} curl -f http://localhost:3000/api/healthz 2>&1 || echo "FAILED"
          sleep 1
        done
      register: gitea_direct_tests
      changed_when: false
      tags:
        - deep
        - complete

    - name: Test Gitea via Traefik (multiple attempts)
      ansible.builtin.shell: |
        for i in $(seq 1 5); do
          echo "=== Attempt $i ==="
          timeout 10 curl -k -s -o /dev/null -w "%{http_code}" {{ gitea_url }}/api/healthz 2>&1 || echo "TIMEOUT"
          sleep 2
        done
      register: gitea_traefik_tests
      changed_when: false
      tags:
        - deep
        - complete

    - name: Check Gitea logs for errors/timeouts
      ansible.builtin.shell: |
        cd {{ gitea_stack_path }}
        docker compose logs {{ gitea_container_name }} --tail=50 2>&1 | grep -iE "error|timeout|failed|panic|fatal" | tail -20 || echo "No errors in recent logs"
      register: gitea_errors
      changed_when: false
      failed_when: false
      tags:
        - deep
        - complete

    - name: Check Traefik logs for Gitea-related errors
      ansible.builtin.shell: |
        cd {{ traefik_stack_path }}
        docker compose logs {{ traefik_container_name }} --tail=50 2>&1 | grep -iE "gitea|git\.michaelschiemer\.de|timeout|error" | tail -20 || echo "No Gitea-related errors in Traefik logs"
      register: traefik_gitea_errors
      changed_when: false
      failed_when: false
      tags:
        - deep
        - complete

    # ========================================
    # COMPLETE DIAGNOSIS (--tags complete)
    # ========================================
    - name: Test Gitea internal port (127.0.0.1:3000)
      ansible.builtin.shell: |
        docker exec {{ gitea_container_name }} curl -sS -I http://127.0.0.1:3000/ 2>&1 | head -5
      register: gitea_internal_test
      changed_when: false
      failed_when: false
      tags:
        - complete

    - name: Test Traefik to Gitea via Docker DNS (gitea:3000)
      ansible.builtin.shell: |
        docker exec {{ traefik_container_name }} sh -lc 'apk add --no-cache curl >/dev/null 2>&1 || true; curl -sS -I http://gitea:3000/ 2>&1' | head -10
      register: traefik_gitea_dns_test
      changed_when: false
      failed_when: false
      tags:
        - complete

    - name: Check Traefik logs for 504 errors
      ansible.builtin.shell: |
        docker logs {{ traefik_container_name }} --tail=100 2>&1 | grep -i "504\|timeout" | tail -20 || echo "No 504/timeout errors found"
      register: traefik_504_logs
      changed_when: false
      failed_when: false
      tags:
        - complete

    - name: Check Gitea Traefik labels
      ansible.builtin.shell: |
        docker inspect {{ gitea_container_name }} --format '{{ '{{' }}json .Config.Labels{{ '}}' }}' 2>/dev/null | python3 -m json.tool | grep -E "traefik" || echo "No Traefik labels found"
      register: gitea_labels
      changed_when: false
      failed_when: false
      tags:
        - complete

    - name: Verify service port is 3000
      ansible.builtin.shell: |
        docker inspect {{ gitea_container_name }} --format '{{ '{{' }}json .Config.Labels{{ '}}' }}' 2>/dev/null | python3 -c "import sys, json; labels = json.load(sys.stdin); print('server.port:', labels.get('traefik.http.services.gitea.loadbalancer.server.port', 'NOT SET'))"
      register: gitea_service_port
      changed_when: false
      failed_when: false
      tags:
        - complete

    - name: Check ServersTransport configuration
      ansible.builtin.shell: |
        docker inspect {{ gitea_container_name }} --format '{{ '{{' }}json .Config.Labels{{ '}}' }}' 2>/dev/null | python3 -c "
        import sys, json
        labels = json.load(sys.stdin)
        transport = labels.get('traefik.http.services.gitea.loadbalancer.serversTransport', '')
        if transport:
            print('ServersTransport:', transport)
            print('dialtimeout:', labels.get('traefik.http.serverstransports.gitea-transport.forwardingtimeouts.dialtimeout', 'NOT SET'))
            print('responseheadertimeout:', labels.get('traefik.http.serverstransports.gitea-transport.forwardingtimeouts.responseheadertimeout', 'NOT SET'))
            print('idleconntimeout:', labels.get('traefik.http.serverstransports.gitea-transport.forwardingtimeouts.idleconntimeout', 'NOT SET'))
            print('maxidleconnsperhost:', labels.get('traefik.http.serverstransports.gitea-transport.maxidleconnsperhost', 'NOT SET'))
        else:
            print('ServersTransport: NOT CONFIGURED')
        "
      register: gitea_timeout_config
      changed_when: false
      failed_when: false
      tags:
        - complete

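    # Illustrative compose labels that would populate the values checked above
    # (a sketch, assuming the gitea-transport naming these checks expect):
    #   labels:
    #     - "traefik.http.services.gitea.loadbalancer.serversTransport=gitea-transport"
    #     - "traefik.http.serverstransports.gitea-transport.forwardingtimeouts.dialtimeout=30s"
    #     - "traefik.http.serverstransports.gitea-transport.forwardingtimeouts.responseheadertimeout=300s"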
    - name: Check Gitea app.ini proxy settings
      ansible.builtin.shell: |
        cd {{ gitea_stack_path }}
        docker compose exec -T {{ gitea_container_name }} cat /data/gitea/conf/app.ini 2>/dev/null | grep -E "PROXY_TRUSTED_PROXIES|LOCAL_ROOT_URL|COOKIE_SECURE|SAME_SITE" || echo "Proxy settings not found in app.ini"
      register: gitea_proxy_settings
      changed_when: false
      failed_when: false
      tags:
        - complete

    - name: Check if Traefik can resolve Gitea hostname
      ansible.builtin.shell: |
        docker exec {{ traefik_container_name }} getent hosts {{ gitea_container_name }} || echo "DNS resolution failed"
      register: traefik_dns_resolution
      changed_when: false
      failed_when: false
      tags:
        - complete

    - name: Check Docker networks for Gitea and Traefik
      ansible.builtin.shell: |
        docker inspect {{ gitea_container_name }} --format '{{ '{{' }}json .NetworkSettings.Networks{{ '}}' }}' | python3 -c "import sys, json; data=json.load(sys.stdin); print('Gitea networks:', list(data.keys()))"
        docker inspect {{ traefik_container_name }} --format '{{ '{{' }}json .NetworkSettings.Networks{{ '}}' }}' | python3 -c "import sys, json; data=json.load(sys.stdin); print('Traefik networks:', list(data.keys()))"
      register: docker_networks_check
      changed_when: false
      failed_when: false
      tags:
        - complete

    - name: Test long-running endpoint from external
      ansible.builtin.uri:
        url: "{{ gitea_url }}/user/events"
        method: GET
        status_code: [200, 504]
        validate_certs: false
        timeout: 60
      register: long_running_endpoint_test
      changed_when: false
      failed_when: false
      tags:
        - complete

    - name: Check Redis connection from Gitea
      ansible.builtin.shell: |
        cd {{ gitea_stack_path }}
        docker compose exec -T {{ gitea_container_name }} sh -c "redis-cli -h redis -a {{ vault_gitea_redis_password | default('gitea_redis_password') }} ping 2>&1" || echo "REDIS_CONNECTION_FAILED"
      register: gitea_redis_connection
      changed_when: false
      failed_when: false
      tags:
        - complete

    - name: Check PostgreSQL connection from Gitea
      ansible.builtin.shell: |
        cd {{ gitea_stack_path }}
        docker compose exec -T {{ gitea_container_name }} sh -c "pg_isready -h postgres -p 5432 -U gitea 2>&1" || echo "POSTGRES_CONNECTION_FAILED"
      register: gitea_postgres_connection
      changed_when: false
      failed_when: false
      tags:
        - complete

    # ========================================
    # SUMMARY
    # ========================================
    - name: Summary
      ansible.builtin.debug:
        msg: |
          ================================================================================
          GITEA DIAGNOSIS SUMMARY
          ================================================================================

          Container Status:
          - Gitea: {{ gitea_status.stdout | regex_replace('.*(Up|Down|Restarting).*', '\\1') | default('UNKNOWN') }}
          - Traefik: {{ traefik_status.stdout | regex_replace('.*(Up|Down|Restarting).*', '\\1') | default('UNKNOWN') }}

          Health Checks:
          - Gitea (direct): {% if 'HEALTH_CHECK_FAILED' not in gitea_health_direct.stdout %}✅{% else %}❌{% endif %}
          - Gitea (via Traefik): {% if gitea_health_traefik.status == 200 %}✅{% else %}❌ (Status: {{ gitea_health_traefik.status | default('TIMEOUT') }}){% endif %}

          Network:
          - Gitea in traefik-public: {% if gitea_in_traefik_network.stdout == 'YES' %}✅{% else %}❌{% endif %}
          - Traefik → Gitea: {% if 'CONNECTION_FAILED' not in traefik_gitea_connection.stdout %}✅{% else %}❌{% endif %}

          Service Discovery:
          - Traefik finds Gitea: {% if 'NOT_FOUND' not in traefik_gitea_service.stdout %}✅{% else %}❌{% endif %}

          {% if 'deep' in ansible_run_tags or 'complete' in ansible_run_tags %}
          Resources:
          - Gitea: {{ gitea_resources.stdout | default('N/A') }}
          - Traefik: {{ traefik_resources.stdout | default('N/A') }}

          Connection Tests:
          - Direct (5 attempts): {{ gitea_direct_tests.stdout | default('N/A') }}
          - Via Traefik (5 attempts): {{ gitea_traefik_tests.stdout | default('N/A') }}

          Error Logs:
          - Gitea: {{ gitea_errors.stdout | default('No errors') }}
          - Traefik: {{ traefik_gitea_errors.stdout | default('No errors') }}
          {% endif %}

          {% if 'complete' in ansible_run_tags %}
          Configuration:
          - Service Port: {{ gitea_service_port.stdout | default('N/A') }}
          - ServersTransport: {{ gitea_timeout_config.stdout | default('N/A') }}
          - Proxy Settings: {{ gitea_proxy_settings.stdout | default('N/A') }}
          - DNS Resolution: {{ traefik_dns_resolution.stdout | default('N/A') }}
          - Networks: {{ docker_networks_check.stdout | default('N/A') }}

          Long-Running Endpoint:
          - Status: {{ long_running_endpoint_test.status | default('N/A') }}

          Dependencies:
          - Redis: {% if 'REDIS_CONNECTION_FAILED' not in gitea_redis_connection.stdout %}✅{% else %}❌{% endif %}
          - PostgreSQL: {% if 'POSTGRES_CONNECTION_FAILED' not in gitea_postgres_connection.stdout %}✅{% else %}❌{% endif %}
          {% endif %}

          ================================================================================
          RECOMMENDATIONS
          ================================================================================

          {% if gitea_health_traefik.status != 200 %}
          ❌ Gitea is not reachable via Traefik
          → Run: ansible-playbook -i inventory/production.yml playbooks/manage/gitea.yml --tags restart
          {% endif %}

          {% if gitea_in_traefik_network.stdout != 'YES' %}
          ❌ Gitea is not in the traefik-public network
          → Restart the Gitea container to update its network membership
          {% endif %}

          {% if 'CONNECTION_FAILED' in traefik_gitea_connection.stdout %}
          ❌ Traefik cannot reach Gitea
          → Restart both containers
          {% endif %}

          {% if 'NOT_FOUND' in traefik_gitea_service.stdout %}
          ❌ Gitea not found in Traefik service discovery
          → Restart Traefik to refresh service discovery
          {% endif %}

          ================================================================================

234
deployment/legacy/ansible/ansible/playbooks/diagnose/traefik.yml
Normal file
@@ -0,0 +1,234 @@
---
# Consolidated Traefik Diagnosis Playbook
# Consolidates: diagnose-traefik-restarts.yml, find-traefik-restart-source.yml,
#   monitor-traefik-restarts.yml, monitor-traefik-continuously.yml,
#   verify-traefik-fix.yml
#
# Usage:
#   # Basic diagnosis (default)
#   ansible-playbook -i inventory/production.yml playbooks/diagnose/traefik.yml
#
#   # Find restart source
#   ansible-playbook -i inventory/production.yml playbooks/diagnose/traefik.yml --tags restart-source
#
#   # Monitor restarts
#   ansible-playbook -i inventory/production.yml playbooks/diagnose/traefik.yml --tags monitor

- name: Diagnose Traefik Issues
  hosts: production
  gather_facts: yes
  become: yes
  vars:
    traefik_stack_path: "{{ stacks_base_path }}/traefik"
    traefik_container_name: "traefik"
    # Plain defaults; both can still be overridden with -e, which takes
    # precedence over play vars. (Self-referencing defaults like
    # "{{ monitor_lookback_hours | default(24) }}" here would cause a
    # recursive templating error.)
    monitor_duration_seconds: 120
    monitor_lookback_hours: 24

  tasks:
    - name: Display diagnostic plan
      ansible.builtin.debug:
        msg: |
          ================================================================================
          TRAEFIK DIAGNOSIS
          ================================================================================

          Running diagnosis with tags: {{ ansible_run_tags | default(['all']) }}

          Basic checks (always):
          - Container status
          - Restart count
          - Recent logs

          Restart source (--tags restart-source):
          - Find source of restart loops
          - Check cronjobs, systemd, scripts

          Monitor (--tags monitor):
          - Monitor for restarts over time

          ================================================================================

    # ========================================
    # BASIC DIAGNOSIS (always runs)
    # ========================================
    - name: Check Traefik container status
      ansible.builtin.shell: |
        cd {{ traefik_stack_path }}
        docker compose ps {{ traefik_container_name }}
      register: traefik_status
      changed_when: false

    - name: Check Traefik container restart count
      ansible.builtin.shell: |
        docker inspect {{ traefik_container_name }} --format '{{ '{{' }}.RestartCount{{ '}}' }}' 2>/dev/null || echo "0"
      register: traefik_restart_count
      changed_when: false

    - name: Check Traefik container start time
      ansible.builtin.shell: |
        docker inspect {{ traefik_container_name }} --format '{{ '{{' }}.State.StartedAt{{ '}}' }}' 2>/dev/null || echo "UNKNOWN"
      register: traefik_started_at
      changed_when: false

    - name: Check Traefik logs for recent restarts
      ansible.builtin.shell: |
        cd {{ traefik_stack_path }}
        docker compose logs {{ traefik_container_name }} --since 2h 2>&1 | grep -iE "stopping server gracefully|I have to go|restart|shutdown" | tail -20 || echo "No restart messages in last 2 hours"
      register: traefik_restart_logs
      changed_when: false
      failed_when: false

    - name: Check Traefik logs for errors
      ansible.builtin.shell: |
        cd {{ traefik_stack_path }}
        docker compose logs {{ traefik_container_name }} --tail=100 2>&1 | grep -iE "error|warn|fail" | tail -20 || echo "No errors in recent logs"
      register: traefik_error_logs
      changed_when: false
      failed_when: false

    # ========================================
    # RESTART SOURCE DIAGNOSIS (--tags restart-source)
    # ========================================
    - name: Check all user crontabs for Traefik/Docker commands
      ansible.builtin.shell: |
        for user in $(cut -f1 -d: /etc/passwd); do
          crontab -u "$user" -l 2>/dev/null | grep -qE "traefik|docker.*compose.*traefik|docker.*stop.*traefik|docker.*restart.*traefik|docker.*down.*traefik" && echo "=== User: $user ===" && crontab -u "$user" -l 2>/dev/null | grep -E "traefik|docker.*compose.*traefik|docker.*stop.*traefik|docker.*restart.*traefik|docker.*down.*traefik" || true
        done || echo "No user crontabs with Traefik commands found"
      register: all_user_crontabs
      changed_when: false
      tags:
        - restart-source

    - name: Check system-wide cron directories
      ansible.builtin.shell: |
        for dir in /etc/cron.d /etc/cron.daily /etc/cron.hourly /etc/cron.weekly /etc/cron.monthly; do
          if [ -d "$dir" ]; then
            echo "=== $dir ==="
            grep -rE "traefik|docker.*compose.*traefik|docker.*stop.*traefik|docker.*restart.*traefik|docker.*down.*traefik" "$dir" 2>/dev/null || echo "No matches"
          fi
        done
      register: system_cron_dirs
      changed_when: false
      tags:
        - restart-source

    - name: Check systemd timers and services
      ansible.builtin.shell: |
        echo "=== Active Timers ==="
        systemctl list-timers --all --no-pager | grep -E "traefik|docker.*compose" || echo "No Traefik-related timers"
        echo ""
        echo "=== Custom Services ==="
        systemctl list-units --type=service --all | grep -E "traefik|docker.*compose" || echo "No Traefik-related services"
      register: systemd_services
      changed_when: false
      tags:
        - restart-source

    - name: Check for scripts in deployment directory that restart Traefik
      ansible.builtin.shell: |
        find /home/deploy/deployment -type f \( -name "*.sh" -o -name "*.yml" -o -name "*.yaml" \) -exec grep -lE "traefik.*restart|docker.*compose.*traefik.*restart|docker.*compose.*traefik.*down|docker.*compose.*traefik.*stop" {} \; 2>/dev/null | head -30
      register: deployment_scripts
      changed_when: false
      tags:
        - restart-source

    - name: Check Ansible roles for traefik_auto_restart or restart tasks
      ansible.builtin.shell: |
        grep -rE "traefik_auto_restart|traefik.*restart|docker.*compose.*traefik.*restart" /home/deploy/deployment/ansible/roles/ 2>/dev/null | grep -v ".git" | head -20 || echo "No auto-restart settings found"
      register: ansible_auto_restart
      changed_when: false
      tags:
        - restart-source

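    # docker events streams indefinitely, so the 5s timeout below is what ends
    # the command; the || fallback then swallows timeout's non-zero exit code.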
    - name: Check Docker events for Traefik (last 24 hours)
      ansible.builtin.shell: |
        timeout 5 docker events --since 24h --filter container={{ traefik_container_name }} --filter event=die --format "{{ '{{' }}.Time{{ '}}' }} {{ '{{' }}.Action{{ '}}' }}" 2>/dev/null | tail -20 || echo "No Traefik die events found"
      register: docker_events_traefik
      changed_when: false
      failed_when: false
      tags:
        - restart-source

    # ========================================
    # MONITOR (--tags monitor)
    # ========================================
    - name: Check Traefik logs for stop messages (lookback period)
      ansible.builtin.shell: |
        cd {{ traefik_stack_path }}
        docker compose logs {{ traefik_container_name }} --since {{ monitor_lookback_hours }}h 2>&1 | grep -E "I have to go|Stopping server gracefully" | tail -20 || echo "No stop messages found"
      register: traefik_stop_messages
      changed_when: false
      tags:
        - monitor

    - name: Count stop messages
      ansible.builtin.set_fact:
        stop_count: "{{ traefik_stop_messages.stdout | regex_findall('I have to go|Stopping server gracefully') | length }}"
      tags:
        - monitor

    - name: Check system reboot history
      ansible.builtin.shell: |
        last reboot | head -5 || echo "No reboots found"
      register: reboots
      changed_when: false
      tags:
        - monitor

    # ========================================
    # SUMMARY
    # ========================================
    - name: Summary
      ansible.builtin.debug:
        msg: |
          ================================================================================
          TRAEFIK DIAGNOSIS SUMMARY
          ================================================================================

          Container Status:
          - Status: {{ traefik_status.stdout | regex_replace('.*(Up|Down|Restarting).*', '\\1') | default('UNKNOWN') }}
          - Restart Count: {{ traefik_restart_count.stdout }}
          - Started At: {{ traefik_started_at.stdout }}

          Recent Logs:
          - Restart Messages (last 2h): {{ traefik_restart_logs.stdout | default('None') }}
          - Errors (last 100 lines): {{ traefik_error_logs.stdout | default('None') }}

          {% if 'restart-source' in ansible_run_tags %}
          Restart Source Analysis:
          - User Crontabs: {{ all_user_crontabs.stdout | default('None found') }}
          - System Cron: {{ system_cron_dirs.stdout | default('None found') }}
          - Systemd Services/Timers: {{ systemd_services.stdout | default('None found') }}
          - Deployment Scripts: {{ deployment_scripts.stdout | default('None found') }}
          - Ansible Auto-Restart: {{ ansible_auto_restart.stdout | default('None found') }}
          - Docker Events: {{ docker_events_traefik.stdout | default('None found') }}
          {% endif %}

          {% if 'monitor' in ansible_run_tags %}
          Monitoring (last {{ monitor_lookback_hours }} hours):
          - Stop Messages: {{ stop_count | default(0) }}
          - System Reboots: {{ reboots.stdout | default('None') }}
          {% endif %}

          ================================================================================
          RECOMMENDATIONS
          ================================================================================

          {% if 'stopping server gracefully' in traefik_restart_logs.stdout | lower or 'I have to go' in traefik_restart_logs.stdout %}
          ❌ PROBLEM: Traefik is being stopped regularly!
          → Run with --tags restart-source to find the source
          {% endif %}

          {% if (traefik_restart_count.stdout | int) > 5 %}
          ⚠️ WARNING: High restart count ({{ traefik_restart_count.stdout }})
          → Check restart source: ansible-playbook -i inventory/production.yml playbooks/diagnose/traefik.yml --tags restart-source
          {% endif %}

          ================================================================================

@@ -0,0 +1,79 @@
---
- name: Final Status Check - All Containers
  hosts: production
  gather_facts: no
  become: no

  vars:
    application_stack_dest: "{{ app_stack_path | default(stacks_base_path + '/production') }}"
    application_compose_suffix: "production.yml"

  tasks:
    - name: Wait for containers to fully start
      ansible.builtin.pause:
        seconds: 15

    - name: Get all container status
      ansible.builtin.shell: |
        docker compose -f {{ application_stack_dest }}/docker-compose.base.yml -f {{ application_stack_dest }}/docker-compose.{{ application_compose_suffix }} ps
      register: all_containers
      changed_when: false

    - name: Display all container status
      ansible.builtin.debug:
        msg: |
          ========================================
          Final Container Status
          ========================================
          {{ all_containers.stdout }}

    - name: Check web container health
      ansible.builtin.shell: |
        docker compose -f {{ application_stack_dest }}/docker-compose.base.yml -f {{ application_stack_dest }}/docker-compose.{{ application_compose_suffix }} exec -T web curl -f http://localhost/health 2>&1 || echo "HEALTH_CHECK_FAILED"
      register: web_health_check
      changed_when: false
      failed_when: false

    - name: Display web health check
      ansible.builtin.debug:
        msg: |
          Web Container Health Check:
          {{ web_health_check.stdout }}

    - name: Get web container logs (last 10 lines)
      ansible.builtin.shell: |
        docker compose -f {{ application_stack_dest }}/docker-compose.base.yml -f {{ application_stack_dest }}/docker-compose.{{ application_compose_suffix }} logs --tail=10 web 2>&1 | tail -10 || true
      register: web_logs
      changed_when: false

    - name: Display web container logs
      ansible.builtin.debug:
        msg: |
          Web Container Logs (last 10 lines):
          {{ web_logs.stdout }}

    - name: Get queue-worker logs (last 3 lines)
      ansible.builtin.shell: |
        docker compose -f {{ application_stack_dest }}/docker-compose.base.yml -f {{ application_stack_dest }}/docker-compose.{{ application_compose_suffix }} logs --tail=3 queue-worker 2>&1 | tail -3 || true
      register: queue_worker_logs
      changed_when: false

    - name: Display queue-worker logs
      ansible.builtin.debug:
        msg: |
          Queue-Worker (last 3 lines):
          {{ queue_worker_logs.stdout }}

    - name: Get scheduler logs (last 3 lines)
      ansible.builtin.shell: |
        docker compose -f {{ application_stack_dest }}/docker-compose.base.yml -f {{ application_stack_dest }}/docker-compose.{{ application_compose_suffix }} logs --tail=3 scheduler 2>&1 | tail -3 || true
      register: scheduler_logs
      changed_when: false

    - name: Display scheduler logs
      ansible.builtin.debug:
        msg: |
          Scheduler (last 3 lines):
          {{ scheduler_logs.stdout }}

@@ -0,0 +1,17 @@
---
# Fix Container Issues (Composer Dependencies and Permissions)
# Wrapper playbook for the application role's containers tasks (fix action)
- hosts: production
  gather_facts: no
  become: no
  vars:
    application_container_action: fix
  tasks:
    - name: Include application containers tasks (fix)
      ansible.builtin.include_role:
        name: application
        tasks_from: containers
      tags:
        - application
        - containers
        - fix
@@ -0,0 +1,17 @@
---
# Fix Gitea Runner Configuration
# Wrapper playbook for the gitea role's runner tasks (fix action)
- hosts: production
  gather_facts: yes
  become: no
  vars:
    gitea_runner_action: fix
  tasks:
    - name: Include gitea runner tasks (fix)
      ansible.builtin.include_role:
        name: gitea
        tasks_from: runner
      tags:
        - gitea
        - runner
        - fix
@@ -0,0 +1,138 @@
---
# Fix Traefik ACME JSON Permissions
# Checks and corrects the permissions of the acme.json file
- name: Fix Traefik ACME JSON Permissions
  hosts: production
  gather_facts: yes
  become: no

  tasks:
    - name: Check if Traefik stack directory exists
      ansible.builtin.stat:
        path: "{{ traefik_stack_path | default('/home/deploy/deployment/stacks/traefik') }}"
      register: traefik_stack_exists

    - name: Fail if Traefik stack directory does not exist
      ansible.builtin.fail:
        msg: "Traefik stack directory not found at {{ traefik_stack_path | default('/home/deploy/deployment/stacks/traefik') }}"
      when: not traefik_stack_exists.stat.exists

    - name: Check if acme.json exists
      ansible.builtin.stat:
        path: "{{ traefik_stack_path | default('/home/deploy/deployment/stacks/traefik') }}/acme.json"
      register: acme_json_exists

    - name: Create acme.json if it doesn't exist
      ansible.builtin.file:
        path: "{{ traefik_stack_path | default('/home/deploy/deployment/stacks/traefik') }}/acme.json"
        # state: touch actually creates the file; state: file fails on a missing path
        state: touch
        mode: '0600'
        owner: "{{ ansible_user | default('deploy') }}"
        group: "{{ ansible_user | default('deploy') }}"
      when: not acme_json_exists.stat.exists

    - name: Get current acme.json permissions
      ansible.builtin.stat:
        path: "{{ traefik_stack_path | default('/home/deploy/deployment/stacks/traefik') }}/acme.json"
      register: acme_json_stat

    - name: Display current acme.json permissions
      ansible.builtin.debug:
        msg: |
          ================================================================================
          Current acme.json permissions:
          ================================================================================
          Path: {{ acme_json_stat.stat.path }}
          Owner: {{ acme_json_stat.stat.pw_name }} (UID: {{ acme_json_stat.stat.uid }})
          Group: {{ acme_json_stat.stat.gr_name }} (GID: {{ acme_json_stat.stat.gid }})
          Mode: {{ acme_json_stat.stat.mode | string | regex_replace('^0o?', '') }}
          Size: {{ acme_json_stat.stat.size }} bytes
          ================================================================================

    - name: Fix acme.json permissions (chmod 600)
      ansible.builtin.file:
        path: "{{ traefik_stack_path | default('/home/deploy/deployment/stacks/traefik') }}/acme.json"
        mode: '0600'
        owner: "{{ ansible_user | default('deploy') }}"
        group: "{{ ansible_user | default('deploy') }}"
      register: acme_json_permissions_fixed

    - name: Verify acme.json permissions after fix
      ansible.builtin.stat:
        path: "{{ traefik_stack_path | default('/home/deploy/deployment/stacks/traefik') }}/acme.json"
      register: acme_json_stat_after

    - name: Display fixed acme.json permissions
      ansible.builtin.debug:
        msg: |
          ================================================================================
          Corrected acme.json permissions:
          ================================================================================
          Path: {{ acme_json_stat_after.stat.path }}
          Owner: {{ acme_json_stat_after.stat.pw_name }} (UID: {{ acme_json_stat_after.stat.uid }})
          Group: {{ acme_json_stat_after.stat.gr_name }} (GID: {{ acme_json_stat_after.stat.gid }})
          Mode: {{ acme_json_stat_after.stat.mode | string | regex_replace('^0o?', '') }}
          Size: {{ acme_json_stat_after.stat.size }} bytes
          ================================================================================
          ✅ acme.json now has chmod 600 (only the owner can read/write)
          ================================================================================

    - name: Check Traefik container can write to acme.json
      ansible.builtin.shell: |
        cd {{ traefik_stack_path | default('/home/deploy/deployment/stacks/traefik') }}
        docker compose exec -T traefik sh -c "test -w /acme.json && echo 'WRITABLE' || echo 'NOT_WRITABLE'" 2>&1 || echo "CONTAINER_CHECK_FAILED"
      register: acme_json_writable_check
      changed_when: false
      failed_when: false

    - name: Display acme.json writable check
      ansible.builtin.debug:
        msg: |
          ================================================================================
          Traefik container write access to acme.json:
          ================================================================================
          {% if 'NOT_WRITABLE' in acme_json_writable_check.stdout %}
          ⚠️ Traefik container can NOT write to acme.json
          {% elif 'WRITABLE' in acme_json_writable_check.stdout %}
          ✅ Traefik container can write to acme.json
          {% else %}
          ⚠️ Could not check container access: {{ acme_json_writable_check.stdout }}
          {% endif %}
          ================================================================================

    - name: Check Docker volume mount for acme.json
      ansible.builtin.shell: |
        docker inspect traefik --format '{{ '{{' }}json .Mounts{{ '}}' }}' 2>/dev/null | jq '.[] | select(.Destination=="/acme.json")' || echo "Could not check volume mount"
      register: acme_json_mount
      changed_when: false
      failed_when: false

    - name: Display acme.json volume mount
      ansible.builtin.debug:
        msg: |
          ================================================================================
          Docker volume mount for acme.json:
          ================================================================================
          {{ acme_json_mount.stdout }}
          ================================================================================

    - name: Summary
      ansible.builtin.debug:
        msg: |
          ================================================================================
          SUMMARY - acme.json permissions:
          ================================================================================

          ✅ acme.json permissions set to chmod 600
          ✅ Owner/group set to {{ ansible_user | default('deploy') }}

          Important:
          - acme.json must be writable by the Traefik container
          - Ports 80/443 must be forwarded from the host to Traefik
          - Traefik must run stably (no frequent restarts)

          Next steps:
          - Make sure Traefik is running stably
          - Wait 5-10 minutes for the ACME challenge to complete
          - Check the Traefik logs for ACME errors
          ================================================================================
@@ -0,0 +1,18 @@
---
# Fix Web Container Permissions
# Wrapper playbook for the application role's containers tasks (fix-web action)
- hosts: production
  gather_facts: no
  become: no
  vars:
    application_container_action: fix-web
    application_container_stabilize_wait: 10
  tasks:
    - name: Include application containers tasks (fix-web)
      ansible.builtin.include_role:
        name: application
        tasks_from: containers
      tags:
        - application
        - containers
        - web
@@ -0,0 +1,229 @@
---
# WireGuard Client Configuration Generator
# Usage: ansible-playbook playbooks/generate-wireguard-client.yml -e "client_name=michael-laptop"

- name: Generate WireGuard Client Configuration
  hosts: server
  become: true
  gather_facts: true

  vars:
    # Default values (can be overridden with -e)
    wireguard_config_dir: "/etc/wireguard"
    wireguard_interface: "wg0"
    wireguard_server_endpoint: "{{ ansible_default_ipv4.address }}"
    wireguard_server_port: 51820
    wireguard_vpn_network: "10.8.0.0/24"
    wireguard_server_ip: "10.8.0.1"

    # Client output directory (local)
    client_config_dir: "{{ playbook_dir }}/../wireguard/configs"

    # Required variable (must be passed via -e)
    # client_name: "device-name"

  tasks:
    - name: Validate client_name is provided
      ansible.builtin.assert:
        that:
          - client_name is defined
          - client_name | length > 0
        fail_msg: "ERROR: client_name must be provided via -e client_name=<name>"
        success_msg: "Generating config for client: {{ client_name }}"

    - name: Validate client_name format (alphanumeric and hyphens only)
      ansible.builtin.assert:
        that:
          - client_name is match('^[a-zA-Z0-9-]+$')
        fail_msg: "ERROR: client_name must contain only letters, numbers, and hyphens"
        success_msg: "Client name format is valid"

    - name: Check if WireGuard server is configured
      ansible.builtin.stat:
        path: "{{ wireguard_config_dir }}/{{ wireguard_interface }}.conf"
      register: server_config

    - name: Fail if server config doesn't exist
      ansible.builtin.fail:
        msg: "WireGuard server config not found. Run setup-wireguard-host.yml first."
      when: not server_config.stat.exists

    - name: Read server public key
      ansible.builtin.slurp:
        src: "{{ wireguard_config_dir }}/server_public.key"
      register: server_public_key_raw

    - name: Set server public key fact
      ansible.builtin.set_fact:
        server_public_key: "{{ server_public_key_raw.content | b64decode | trim }}"

    - name: Get next available IP address
      ansible.builtin.shell: |
        # Parse existing peer IPs from wg0.conf
        existing_ips=$(grep -oP 'AllowedIPs\s*=\s*\K[0-9.]+' {{ wireguard_config_dir }}/{{ wireguard_interface }}.conf 2>/dev/null || echo "")

        # Start from .2 (server is .1)
        i=2
        while [ $i -le 254 ]; do
          ip="10.8.0.$i"
          # -x matches the whole line, -F treats the IP literally (in a bare
          # "^$ip$" regex the dots would match any character)
          if ! echo "$existing_ips" | grep -qxF "$ip"; then
            echo "$ip"
            exit 0
          fi
          i=$((i + 1))
        done

        echo "ERROR: No free IP addresses" >&2
        exit 1
      register: next_ip_result
      changed_when: false

    - name: Set client IP fact
      ansible.builtin.set_fact:
        client_ip: "{{ next_ip_result.stdout | trim }}"

    - name: Display client IP assignment
      ansible.builtin.debug:
        msg: "Assigned IP for {{ client_name }}: {{ client_ip }}"

    - name: Check if client already exists
      ansible.builtin.shell: |
        grep -q "# Client: {{ client_name }}" {{ wireguard_config_dir }}/{{ wireguard_interface }}.conf
      register: client_exists
      changed_when: false
      failed_when: false

    - name: Warn if client already exists
      ansible.builtin.debug:
        msg: "WARNING: Client '{{ client_name }}' already exists in server config. Creating new keys anyway."
      when: client_exists.rc == 0

    - name: Generate client private key
      ansible.builtin.shell: wg genkey
      register: client_private_key_result
      changed_when: true
      no_log: true

    - name: Generate client public key
      ansible.builtin.shell: echo "{{ client_private_key_result.stdout }}" | wg pubkey
      register: client_public_key_result
      changed_when: false
      no_log: true

    - name: Generate preshared key
      ansible.builtin.shell: wg genpsk
      register: preshared_key_result
      changed_when: true
      no_log: true

    - name: Set client key facts
      ansible.builtin.set_fact:
        client_private_key: "{{ client_private_key_result.stdout | trim }}"
        client_public_key: "{{ client_public_key_result.stdout | trim }}"
        preshared_key: "{{ preshared_key_result.stdout | trim }}"
      no_log: true

    - name: Create client config directory on control node
      delegate_to: localhost
      ansible.builtin.file:
        path: "{{ client_config_dir }}"
        state: directory
        mode: '0755'
      become: false

    - name: Generate client WireGuard configuration
      delegate_to: localhost
      ansible.builtin.copy:
        content: |
          [Interface]
          # Client: {{ client_name }}
          # Generated: {{ ansible_date_time.iso8601 }}
          PrivateKey = {{ client_private_key }}
          Address = {{ client_ip }}/32
          DNS = 1.1.1.1, 8.8.8.8

          [Peer]
          # WireGuard Server
          PublicKey = {{ server_public_key }}
          PresharedKey = {{ preshared_key }}
          Endpoint = {{ wireguard_server_endpoint }}:{{ wireguard_server_port }}
          AllowedIPs = {{ wireguard_vpn_network }}
          PersistentKeepalive = 25
        dest: "{{ client_config_dir }}/{{ client_name }}.conf"
        mode: '0600'
      become: false
      no_log: true

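    # Setting AllowedIPs to the VPN subnet (10.8.0.0/24) in the client config
    # above gives the client a split tunnel: only VPN-subnet traffic is routed
    # through WireGuard. Using 0.0.0.0/0 instead would route all client traffic
    # via the server.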
    - name: Add client peer to server configuration
      ansible.builtin.blockinfile:
        path: "{{ wireguard_config_dir }}/{{ wireguard_interface }}.conf"
        marker: "# {mark} ANSIBLE MANAGED BLOCK - Client: {{ client_name }}"
        block: |

          [Peer]
          # Client: {{ client_name }}
          PublicKey = {{ client_public_key }}
          PresharedKey = {{ preshared_key }}
          AllowedIPs = {{ client_ip }}/32
      no_log: true

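    # wg syncconf applies the updated peer list to the running interface
    # without tearing the tunnel down (unlike a wg-quick down/up cycle); bash
    # is required for the <(...) process substitution.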
- name: Reload WireGuard configuration
|
||||
shell: wg syncconf {{ wireguard_interface }} <(wg-quick strip {{ wireguard_interface }})
|
||||
args:
|
||||
executable: /bin/bash
|
||||
|
||||
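A note on the reload step above: `wg syncconf` applies the updated peer list to the live interface without tearing it down, so existing VPN sessions survive adding a client. A minimal standalone sketch of the same technique, assuming the interface is named `wg0`:

```bash
# wg-quick strip removes wg-quick-only settings (Address, DNS, PostUp, ...)
# that `wg syncconf` would reject; process substitution feeds the stripped
# config in as a file. Existing peer handshakes are left untouched.
sudo bash -c 'wg syncconf wg0 <(wg-quick strip wg0)'
```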
    - name: Generate QR code (ASCII)
      delegate_to: localhost
      shell: |
        qrencode -t ansiutf8 < {{ client_config_dir }}/{{ client_name }}.conf > {{ client_config_dir }}/{{ client_name }}.qr.txt
      become: false
      changed_when: true

    - name: Generate QR code (PNG)
      delegate_to: localhost
      shell: |
        qrencode -t png -o {{ client_config_dir }}/{{ client_name }}.qr.png < {{ client_config_dir }}/{{ client_name }}.conf
      become: false
      changed_when: true

    - name: Display QR code for mobile devices
      delegate_to: localhost
      shell: cat {{ client_config_dir }}/{{ client_name }}.qr.txt
      register: qr_code_output
      become: false
      changed_when: false

    - name: Client configuration summary
      debug:
        msg:
          - "========================================="
          - "WireGuard Client Configuration Created!"
          - "========================================="
          - ""
          - "Client: {{ client_name }}"
          - "IP Address: {{ client_ip }}/32"
          - "Public Key: {{ client_public_key }}"
          - ""
          - "Configuration Files:"
          - "  Config: {{ client_config_dir }}/{{ client_name }}.conf"
          - "  QR Code (ASCII): {{ client_config_dir }}/{{ client_name }}.qr.txt"
          - "  QR Code (PNG): {{ client_config_dir }}/{{ client_name }}.qr.png"
          - ""
          - "Server Configuration:"
          - "  Endpoint: {{ wireguard_server_endpoint }}:{{ wireguard_server_port }}"
          - "  Allowed IPs: {{ wireguard_vpn_network }}"
          - ""
          - "Next Steps:"
          - "  Linux/macOS: sudo cp {{ client_config_dir }}/{{ client_name }}.conf /etc/wireguard/ && sudo wg-quick up {{ client_name }}"
          - "  Windows: Import {{ client_name }}.conf in WireGuard GUI"
          - "  iOS/Android: Scan QR code with WireGuard app"
          - ""
          - "Test Connection:"
          - "  ping {{ wireguard_server_ip }}"
          - "  curl -k https://{{ wireguard_server_ip }}:8080  # Traefik Dashboard"
          - ""
          - "========================================="

    - name: Display QR code
      debug:
        msg: "{{ qr_code_output.stdout_lines }}"
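Once the client connects, the handshake can be verified from the server side; a quick check, again assuming the interface is `wg0`:

```bash
# Lists each peer's public key with the unix time of its last handshake;
# 0 means that peer has never completed a handshake.
sudo wg show wg0 latest-handshakes
```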
@@ -0,0 +1,350 @@
---
- name: Initial Server Setup - Debian 13 (Trixie)
  hosts: production
  become: yes
  gather_facts: yes

  vars:
    # User configuration
    deploy_user: "{{ ansible_user | default('deploy') }}"
    deploy_user_groups: ['sudo']  # docker group added after Docker installation

    # SSH configuration
    ssh_key_only_auth: false  # Set to true AFTER SSH keys are properly configured
    ssh_disable_root_login: false  # Set to true after deploy user is configured

    # Firewall configuration
    firewall_enable: false  # Set to true after initial setup is complete
    firewall_ports:
      - { port: 22, proto: 'tcp', comment: 'SSH' }
      - { port: 80, proto: 'tcp', comment: 'HTTP' }
      - { port: 443, proto: 'tcp', comment: 'HTTPS' }
      - { port: 51820, proto: 'udp', comment: 'WireGuard' }

    # System packages
    system_base_packages:
      - curl
      - wget
      - git
      - vim
      - sudo
      - ufw
      - fail2ban
      - rsync

  tasks:
    - name: Display system information
      ansible.builtin.debug:
        msg:
          - "Distribution: {{ ansible_distribution }} {{ ansible_distribution_version }}"
          - "Hostname: {{ ansible_hostname }}"
          - "Deploy User: {{ deploy_user }}"

    # ========================================
    # 1. System Updates
    # ========================================

    - name: Check and wait for apt locks to be released
      ansible.builtin.shell:
        cmd: |
          # The lock files exist permanently on Debian; a lock is only
          # "held" while a process has the file open, so test with fuser.
          for lock in /var/lib/dpkg/lock /var/lib/apt/lists/lock /var/cache/apt/archives/lock; do
            if fuser "$lock" >/dev/null 2>&1; then
              echo "Waiting for lock: $lock"
              count=0
              while fuser "$lock" >/dev/null 2>&1 && [ $count -lt 60 ]; do
                sleep 1
                count=$((count + 1))
              done
              if fuser "$lock" >/dev/null 2>&1; then
                echo "Warning: Lock still held after 60s: $lock"
              else
                echo "Lock released: $lock"
              fi
            fi
          done
      changed_when: false
      failed_when: false
      timeout: 70

    - name: Update apt cache
      ansible.builtin.shell:
        cmd: timeout 300 apt-get update -qq
      environment:
        DEBIAN_FRONTEND: noninteractive
        APT_LISTCHANGES_FRONTEND: none
      register: apt_update_result
      changed_when: apt_update_result.rc == 0
      failed_when: apt_update_result.rc != 0
      timeout: 300

    - name: Display apt update result
      ansible.builtin.debug:
        msg: "apt update completed successfully"
      when: apt_update_result.rc == 0

    - name: Show packages to be upgraded
      ansible.builtin.shell:
        cmd: apt list --upgradable 2>/dev/null | tail -n +2 | wc -l
      register: packages_to_upgrade
      changed_when: false
      failed_when: false

    - name: Display upgrade information
      ansible.builtin.debug:
        msg: "Packages to upgrade: {{ packages_to_upgrade.stdout | default('0') | trim }}"

    - name: Upgrade system packages
      ansible.builtin.shell:
        cmd: timeout 600 apt-get upgrade -y -qq && apt-get autoremove -y -qq
      environment:
        DEBIAN_FRONTEND: noninteractive
        APT_LISTCHANGES_FRONTEND: none
      register: apt_upgrade_result
      changed_when: apt_upgrade_result.rc == 0
      failed_when: apt_upgrade_result.rc != 0
      timeout: 600

    - name: Display apt upgrade result
      ansible.builtin.debug:
        msg: "apt upgrade completed: {{ 'Packages upgraded' if apt_upgrade_result.rc == 0 else 'Failed' }}"
      when: apt_upgrade_result.rc is defined

    # ========================================
    # 2. Install Base Packages
    # ========================================

    - name: Install base packages
      ansible.builtin.shell:
        cmd: timeout 300 apt-get install -y -qq {{ system_base_packages | join(' ') }}
      environment:
        DEBIAN_FRONTEND: noninteractive
        APT_LISTCHANGES_FRONTEND: none
      register: apt_install_result
      changed_when: apt_install_result.rc == 0
      failed_when: apt_install_result.rc != 0
      timeout: 300

    - name: Display apt install result
      ansible.builtin.debug:
        msg: "apt install completed: {{ 'Packages installed/updated' if apt_install_result.rc == 0 else 'Failed' }}"
      when: apt_install_result.rc is defined

    # ========================================
    # 3. Create Deploy User
    # ========================================

    - name: Check if deploy user exists
      ansible.builtin.shell:
        cmd: timeout 5 getent passwd {{ deploy_user }} >/dev/null 2>&1 && echo "exists" || echo "not_found"
      register: deploy_user_check
      changed_when: false
      failed_when: false
      timeout: 10

    - name: Create deploy user
      ansible.builtin.user:
        name: "{{ deploy_user }}"
        groups: "{{ deploy_user_groups }}"
        append: yes
        shell: /bin/bash
        create_home: yes
      when:
        - "'not_found' in deploy_user_check.stdout"
        - deploy_user != 'root'

    - name: Ensure deploy user has sudo access
      ansible.builtin.lineinfile:
        path: /etc/sudoers.d/deploy
        line: "{{ deploy_user }} ALL=(ALL) NOPASSWD: ALL"
        create: yes
        validate: 'visudo -cf %s'
        mode: '0440'
      when: deploy_user != 'root'
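The `validate: 'visudo -cf %s'` option above is what makes the sudoers edit safe: lineinfile writes the change to a temporary file first and only installs it if the validation command exits 0. The equivalent manual check:

```bash
# visudo -c -f parses the fragment without installing it; a non-zero
# exit (syntax error) would have aborted the lineinfile change.
visudo -cf /etc/sudoers.d/deploy && echo "sudoers fragment OK"
```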
    # ========================================
    # 4. SSH Configuration
    # ========================================

    - name: Get deploy user home directory
      ansible.builtin.getent:
        database: passwd
        key: "{{ deploy_user }}"
      register: deploy_user_info
      when: deploy_user != 'root'
      ignore_errors: yes

    - name: Set deploy user home directory (root)
      ansible.builtin.set_fact:
        deploy_user_home: "/root"
      when: deploy_user == 'root'

    - name: Set deploy user home directory (from getent)
      ansible.builtin.set_fact:
        deploy_user_home: "{{ deploy_user_info.ansible_facts.getent_passwd[deploy_user][4] }}"
      when:
        - deploy_user != 'root'
        - deploy_user_info.ansible_facts.getent_passwd[deploy_user] is defined

    - name: Set deploy user home directory (fallback)
      ansible.builtin.set_fact:
        deploy_user_home: "/home/{{ deploy_user }}"
      when: deploy_user_home is not defined

    - name: Ensure .ssh directory exists
      ansible.builtin.file:
        path: "{{ deploy_user_home }}/.ssh"
        state: directory
        owner: "{{ deploy_user }}"
        group: "{{ deploy_user }}"
        mode: '0700'

    - name: Add SSH public key from control node
      ansible.builtin.authorized_key:
        user: "{{ deploy_user }}"
        state: present
        key: "{{ lookup('file', ansible_ssh_private_key_file | default('~/.ssh/production') + '.pub') }}"
      when: ansible_ssh_private_key_file is defined

    - name: Verify SSH key is configured before disabling password auth
      ansible.builtin.stat:
        path: "{{ deploy_user_home }}/.ssh/authorized_keys"
      register: ssh_key_file
      when: ssh_key_only_auth | bool

    - name: Configure SSH key-only authentication
      ansible.builtin.lineinfile:
        path: /etc/ssh/sshd_config
        regexp: "{{ item.regexp }}"
        line: "{{ item.line }}"
        backup: yes
      loop:
        - { regexp: '^#?PasswordAuthentication', line: 'PasswordAuthentication no' }
        - { regexp: '^#?PubkeyAuthentication', line: 'PubkeyAuthentication yes' }
        - { regexp: '^#?AuthorizedKeysFile', line: 'AuthorizedKeysFile .ssh/authorized_keys' }
      when:
        - ssh_key_only_auth | bool
        - ssh_key_file.stat.exists | default(false)
      notify: restart sshd

    - name: Disable root login (optional)
      ansible.builtin.lineinfile:
        path: /etc/ssh/sshd_config
        regexp: '^#?PermitRootLogin'
        line: 'PermitRootLogin no'
        backup: yes
      when: ssh_disable_root_login | bool
      notify: restart sshd
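Since a bad sshd_config can lock out remote access, it is worth validating the file before the `restart sshd` handler fires. A manual equivalent of that safety check:

```bash
# sshd -t parses the config and exits non-zero on errors, printing the
# offending line; only restart the daemon if the check passes.
sudo sshd -t && sudo systemctl restart sshd
```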
    # ========================================
    # 5. Firewall Configuration
    # ========================================
    # IMPORTANT: The firewall is configured only at the end, so the SSH connection is not interrupted

    - name: Check current UFW status
      ansible.builtin.shell:
        cmd: ufw status | head -1
      register: ufw_current_status
      changed_when: false
      failed_when: false
      when: firewall_enable | bool

    - name: Display current firewall status
      ansible.builtin.debug:
        msg: "Current firewall status: {{ ufw_current_status.stdout | default('Unknown') }}"
      when: firewall_enable | bool and ufw_current_status is defined

    - name: Ensure SSH port is allowed before configuring firewall
      ansible.builtin.command:
        cmd: ufw allow 22/tcp comment 'SSH - Allow before enabling firewall'
      when:
        - firewall_enable | bool
        - "'inactive' in (ufw_current_status.stdout | default(''))"
      ignore_errors: yes

    - name: Reset UFW to defaults (only if inactive)
      ansible.builtin.command:
        cmd: ufw --force reset
      when:
        - firewall_enable | bool
        - "'inactive' in (ufw_current_status.stdout | default(''))"
      changed_when: false

    - name: Set UFW default policies
      ansible.builtin.command:
        cmd: "ufw default {{ item.policy }} {{ item.direction }}"
      loop:
        - { policy: 'deny', direction: 'incoming' }
        - { policy: 'allow', direction: 'outgoing' }
      when:
        - firewall_enable | bool
        - "'inactive' in (ufw_current_status.stdout | default(''))"

    - name: Allow firewall ports (ensure SSH is first)
      ansible.builtin.command:
        cmd: "ufw allow {{ item.port }}/{{ item.proto }} comment '{{ item.comment }}'"
      loop: "{{ firewall_ports }}"
      when:
        - firewall_enable | bool
        - "'inactive' in (ufw_current_status.stdout | default(''))"
      register: ufw_rules
      changed_when: ufw_rules.rc == 0

    - name: Enable UFW (only if inactive)
      ansible.builtin.command:
        cmd: ufw --force enable
      when:
        - firewall_enable | bool
        - "'inactive' in (ufw_current_status.stdout | default(''))"
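The ordering above matters when the playbook runs over SSH: the SSH rule must exist before `ufw --force enable` flips the default-deny policy. Condensed to plain shell, the safe sequence looks like this:

```bash
# Allow SSH first, then set defaults, then enable non-interactively;
# --force skips the "may disrupt existing ssh connections" prompt.
sudo ufw allow 22/tcp
sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw --force enable
```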
    - name: Display UFW status
      ansible.builtin.command:
        cmd: ufw status verbose
      register: ufw_status
      changed_when: false

    - name: Show UFW status
      ansible.builtin.debug:
        msg: "{{ ufw_status.stdout_lines }}"

    # ========================================
    # 6. Fail2ban Configuration
    # ========================================

    - name: Ensure fail2ban is enabled and started
      ansible.builtin.systemd:
        name: fail2ban
        enabled: yes
        state: started
      when: "'fail2ban' in system_base_packages"

    # ========================================
    # 7. System Configuration
    # ========================================

    - name: Configure timezone
      ansible.builtin.timezone:
        name: Europe/Berlin

    - name: Display setup summary
      ansible.builtin.debug:
        msg:
          - "=========================================="
          - "Initial Server Setup Complete"
          - "=========================================="
          - "Deploy User: {{ deploy_user }}"
          - "SSH Key-only Auth: {{ ssh_key_only_auth }}"
          - "Firewall: {{ 'Enabled' if firewall_enable else 'Disabled' }}"
          - "Fail2ban: {{ 'Enabled' if 'fail2ban' in system_base_packages else 'Disabled' }}"
          - "=========================================="
          - "Next Steps:"
          - "1. Test SSH connection: ssh {{ deploy_user }}@{{ ansible_host }}"
          - "2. Install Docker: ansible-playbook playbooks/install-docker.yml"
          - "3. Deploy Infrastructure: ansible-playbook playbooks/setup-infrastructure.yml"
          - "=========================================="

  handlers:
    - name: restart sshd
      ansible.builtin.systemd:
        name: sshd
        state: restarted

@@ -0,0 +1,16 @@
---
# Install Composer Dependencies in Application Container
# Wrapper playbook for the application role's composer tasks
- hosts: "{{ deployment_hosts | default('production') }}"
  gather_facts: no
  become: no
  vars:
    # Plain default; override at runtime with -e deployment_environment=<env>
    deployment_environment: production
  tasks:
    - name: Include application composer tasks
      ansible.builtin.include_role:
        name: application
        tasks_from: composer
      tags:
        - application
        - composer
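A hypothetical invocation of this wrapper (the playbook filename and inventory path are assumed for illustration, not taken from the repo):

```bash
# Run the composer tasks against staging instead of the default
# production group; both variable names are defined in the play above.
ansible-playbook -i inventory/staging.yml playbooks/install-composer.yml \
  -e "deployment_hosts=staging" -e "deployment_environment=staging"
```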
@@ -0,0 +1,92 @@
---
- name: Install Docker on Production Server
  hosts: production
  become: yes
  gather_facts: yes

  tasks:
    - name: Install prerequisites
      ansible.builtin.apt:
        name:
          - ca-certificates
          - curl
        state: present
        update_cache: yes

    - name: Create keyrings directory
      ansible.builtin.file:
        path: /etc/apt/keyrings
        state: directory
        mode: '0755'

    - name: Detect distribution (Debian or Ubuntu)
      ansible.builtin.set_fact:
        docker_distribution: "{{ 'debian' if ansible_distribution == 'Debian' else 'ubuntu' }}"

    - name: Add Docker GPG key
      ansible.builtin.shell:
        cmd: |
          curl -fsSL https://download.docker.com/linux/{{ docker_distribution }}/gpg -o /etc/apt/keyrings/docker.asc
          chmod a+r /etc/apt/keyrings/docker.asc
        creates: /etc/apt/keyrings/docker.asc

    - name: Add Docker repository
      ansible.builtin.shell:
        cmd: |
          echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/{{ docker_distribution }} $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | tee /etc/apt/sources.list.d/docker.list > /dev/null
        creates: /etc/apt/sources.list.d/docker.list

    - name: Update apt cache after adding Docker repo
      ansible.builtin.apt:
        update_cache: yes

    - name: Install Docker packages
      ansible.builtin.apt:
        name:
          - docker-ce
          - docker-ce-cli
          - containerd.io
          - docker-buildx-plugin
          - docker-compose-plugin
        state: present

    - name: Start and enable Docker service
      ansible.builtin.systemd:
        name: docker
        state: started
        enabled: yes

    - name: Add deploy user to docker group
      ansible.builtin.user:
        name: "{{ ansible_user | default('deploy') }}"
        groups: docker
        append: yes
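Note that docker group membership only takes effect in a new login session; until the deploy user reconnects, `docker` commands still require sudo. A quick check after re-login:

```bash
# id -nG prints the current user's groups; "docker" should appear in a
# fresh SSH session, at which point plain `docker ps` works.
id -nG | tr ' ' '\n' | grep -x docker && docker ps
```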
    - name: Verify Docker installation
      ansible.builtin.command: docker --version
      register: docker_version
      changed_when: false

    - name: Display Docker version
      ansible.builtin.debug:
        msg: "Docker installed successfully: {{ docker_version.stdout }}"

    - name: Verify Docker Compose installation
      ansible.builtin.command: docker compose version
      register: compose_version
      changed_when: false

    - name: Display Docker Compose version
      ansible.builtin.debug:
        msg: "Docker Compose installed successfully: {{ compose_version.stdout }}"

    - name: Run Docker hello-world test
      ansible.builtin.command: docker run --rm hello-world
      register: docker_test
      changed_when: false

    - name: Display Docker test result
      ansible.builtin.debug:
        msg: "Docker is working correctly!"
      when: "'Hello from Docker!' in docker_test.stdout"

@@ -0,0 +1,198 @@
---
# Backup Before Redeploy
# Creates a comprehensive backup of Gitea data, SSL certificates, and configurations
# before redeploying the Traefik and Gitea stacks

- name: Backup Before Redeploy
  hosts: production
  gather_facts: yes
  become: no
  vars:
    gitea_stack_path: "{{ stacks_base_path }}/gitea"
    traefik_stack_path: "{{ stacks_base_path }}/traefik"
    backup_base_path: "{{ backups_path | default('/home/deploy/backups') }}"
    backup_name: "redeploy-backup-{{ ansible_date_time.epoch }}"

  tasks:
    - name: Display backup plan
      ansible.builtin.debug:
        msg: |
          ================================================================================
          BACKUP BEFORE REDEPLOY
          ================================================================================

          This playbook will back up:
          1. Gitea data (volumes)
          2. SSL certificates (acme.json)
          3. Gitea configuration (app.ini)
          4. Traefik configuration
          5. PostgreSQL data (if applicable)

          Backup location: {{ backup_base_path }}/{{ backup_name }}

          ================================================================================

    - name: Ensure backup directory exists
      ansible.builtin.file:
        path: "{{ backup_base_path }}/{{ backup_name }}"
        state: directory
        mode: '0755'
      become: yes

    - name: Create backup timestamp file
      ansible.builtin.copy:
        content: |
          Backup created: {{ ansible_date_time.iso8601 }}
          Backup name: {{ backup_name }}
          Purpose: Before Traefik/Gitea redeploy
        dest: "{{ backup_base_path }}/{{ backup_name }}/backup-info.txt"
        mode: '0644'
      become: yes

    # ========================================
    # Backup Gitea Data
    # ========================================
    - name: Check Gitea volumes
      ansible.builtin.shell: |
        docker volume ls --filter name=gitea --format "{{ '{{' }}.Name{{ '}}' }}"
      register: gitea_volumes
      changed_when: false
      failed_when: false

    - name: Backup Gitea volumes
      ansible.builtin.shell: |
        for volume in {{ gitea_volumes.stdout_lines | join(' ') }}; do
          if [ -n "$volume" ]; then
            echo "Backing up volume: $volume"
            docker run --rm \
              -v "$volume:/source:ro" \
              -v "{{ backup_base_path }}/{{ backup_name }}:/backup" \
              alpine tar czf "/backup/gitea-volume-${volume}.tar.gz" -C /source .
          fi
        done
      when: gitea_volumes.stdout_lines | length > 0
      register: gitea_volumes_backup
      changed_when: gitea_volumes_backup.rc == 0
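The volume backup above relies on a throwaway Alpine container that mounts the volume read-only and tars it into a bind-mounted backup directory. Reduced to a single manual command (a volume named `gitea-data` and a local `./backups` directory are assumed for illustration):

```bash
# Archive a named volume without stopping its consumers; :ro guards
# against accidental writes, and -C /source keeps archive paths relative.
docker run --rm \
  -v gitea-data:/source:ro \
  -v "$PWD/backups:/backup" \
  alpine tar czf /backup/gitea-volume-gitea-data.tar.gz -C /source .
```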
    # ========================================
    # Backup SSL Certificates
    # ========================================
    - name: Check if acme.json exists
      ansible.builtin.stat:
        path: "{{ traefik_stack_path }}/acme.json"
      register: acme_json_stat

    - name: Backup acme.json
      ansible.builtin.copy:
        src: "{{ traefik_stack_path }}/acme.json"
        dest: "{{ backup_base_path }}/{{ backup_name }}/acme.json"
        remote_src: yes
        mode: '0600'
      when: acme_json_stat.stat.exists
      register: acme_backup

    # ========================================
    # Backup Gitea Configuration
    # ========================================
    - name: Backup Gitea app.ini
      ansible.builtin.shell: |
        cd {{ gitea_stack_path }}
        docker compose exec -T gitea cat /data/gitea/conf/app.ini > "{{ backup_base_path }}/{{ backup_name }}/gitea-app.ini" 2>/dev/null || echo "Could not read app.ini"
      register: gitea_app_ini_backup
      changed_when: false
      failed_when: false

    - name: Backup Gitea docker-compose.yml
      ansible.builtin.copy:
        src: "{{ gitea_stack_path }}/docker-compose.yml"
        dest: "{{ backup_base_path }}/{{ backup_name }}/gitea-docker-compose.yml"
        remote_src: yes
        mode: '0644'
      register: gitea_compose_backup

    # ========================================
    # Backup Traefik Configuration
    # ========================================
    - name: Backup Traefik configuration files
      ansible.builtin.shell: |
        cd {{ traefik_stack_path }}
        tar czf "{{ backup_base_path }}/{{ backup_name }}/traefik-config.tar.gz" \
          traefik.yml \
          docker-compose.yml \
          dynamic/ 2>/dev/null || echo "Some files may be missing"
      register: traefik_config_backup
      changed_when: traefik_config_backup.rc == 0
      failed_when: false

    # ========================================
    # Backup PostgreSQL Data (if applicable)
    # ========================================
    - name: Check if PostgreSQL stack exists
      ansible.builtin.stat:
        path: "{{ stacks_base_path }}/postgresql/docker-compose.yml"
      register: postgres_compose_exists

    - name: Backup PostgreSQL database (if running)
      ansible.builtin.shell: |
        cd {{ stacks_base_path }}/postgresql
        if docker compose ps postgres | grep -q "Up"; then
          docker compose exec -T postgres pg_dumpall -U postgres | gzip > "{{ backup_base_path }}/{{ backup_name }}/postgresql-all-{{ ansible_date_time.epoch }}.sql.gz"
          echo "PostgreSQL backup created"
        else
          echo "PostgreSQL not running, skipping backup"
        fi
      when: postgres_compose_exists.stat.exists
      register: postgres_backup
      changed_when: false
      failed_when: false

    # ========================================
    # Verify Backup
    # ========================================
    - name: List backup contents
      ansible.builtin.shell: |
        ls -lh "{{ backup_base_path }}/{{ backup_name }}/"
      register: backup_contents
      changed_when: false

    - name: Calculate backup size
      ansible.builtin.shell: |
        du -sh "{{ backup_base_path }}/{{ backup_name }}" | awk '{print $1}'
      register: backup_size
      changed_when: false

    - name: Summary
      ansible.builtin.debug:
        msg: |
          ================================================================================
          BACKUP SUMMARY
          ================================================================================

          Backup location: {{ backup_base_path }}/{{ backup_name }}
          Backup size: {{ backup_size.stdout }}

          Backed up:
          - Gitea volumes: {% if gitea_volumes_backup.changed %}✅{% else %}ℹ️ No volumes found{% endif %}
          - SSL certificates (acme.json): {% if acme_backup.changed | default(false) %}✅{% else %}ℹ️ Not found{% endif %}
          - Gitea app.ini: {% if gitea_app_ini_backup.rc == 0 %}✅{% else %}⚠️ Could not read{% endif %}
          - Gitea docker-compose.yml: {% if gitea_compose_backup.changed | default(false) %}✅{% else %}ℹ️ Not found{% endif %}
          - Traefik configuration: {% if traefik_config_backup.rc == 0 %}✅{% else %}⚠️ Some files may be missing{% endif %}
          - PostgreSQL data: {% if postgres_backup.rc == 0 and 'created' in postgres_backup.stdout %}✅{% else %}ℹ️ Not running or not found{% endif %}

          Backup contents:
          {{ backup_contents.stdout }}

          ================================================================================
          NEXT STEPS
          ================================================================================

          Backup completed successfully. You can now proceed with the redeploy:

            ansible-playbook -i inventory/production.yml playbooks/setup/redeploy-traefik-gitea-clean.yml \
              --vault-password-file secrets/.vault_pass \
              -e "backup_name={{ backup_name }}"

          ================================================================================

@@ -0,0 +1,216 @@
---
- name: Cleanup All Containers and Networks on Production Server
  hosts: production
  become: no
  gather_facts: yes

  vars:
    cleanup_volumes: true  # Set to false to preserve volumes

  tasks:
    - name: Set stacks_base_path if not defined
      set_fact:
        stacks_base_path: "{{ stacks_base_path | default('/home/deploy/deployment/stacks') }}"

    - name: Display cleanup warning
      debug:
        msg:
          - "=== WARNING: This will stop and remove ALL containers ==="
          - "Volumes will be removed: {{ cleanup_volumes }}"
          - "This will cause downtime for all services"
          - "Stacks path: {{ stacks_base_path }}"
          - ""

    - name: List all running containers before cleanup
      command: docker ps --format 'table {{ "{{" }}.Names{{ "}}" }}\t{{ "{{" }}.Status{{ "}}" }}\t{{ "{{" }}.Ports{{ "}}" }}'
      register: containers_before
      changed_when: false

    - name: Display running containers
      debug:
        msg: "{{ containers_before.stdout_lines }}"

    # Stop all Docker Compose stacks
    - name: Stop Traefik stack
      command: docker compose -f {{ stacks_base_path }}/traefik/docker-compose.yml down
      args:
        chdir: "{{ stacks_base_path }}/traefik"
      ignore_errors: yes
      register: traefik_stop

    - name: Stop Gitea stack
      command: docker compose -f {{ stacks_base_path }}/gitea/docker-compose.yml down
      args:
        chdir: "{{ stacks_base_path }}/gitea"
      ignore_errors: yes

    - name: Stop PostgreSQL Production stack
      command: docker compose -f {{ stacks_base_path }}/postgresql-production/docker-compose.yml down
      args:
        chdir: "{{ stacks_base_path }}/postgresql-production"
      ignore_errors: yes

    - name: Stop PostgreSQL Staging stack
      command: docker compose -f {{ stacks_base_path }}/postgresql-staging/docker-compose.yml down
      args:
        chdir: "{{ stacks_base_path }}/postgresql-staging"
      ignore_errors: yes

    - name: Stop Redis stack
      command: docker compose -f {{ stacks_base_path }}/redis/docker-compose.yml down
      args:
        chdir: "{{ stacks_base_path }}/redis"
      ignore_errors: yes

    - name: Stop Docker Registry stack
      command: docker compose -f {{ stacks_base_path }}/registry/docker-compose.yml down
      args:
        chdir: "{{ stacks_base_path }}/registry"
      ignore_errors: yes

    - name: Stop MinIO stack
      command: docker compose -f {{ stacks_base_path }}/minio/docker-compose.yml down
      args:
        chdir: "{{ stacks_base_path }}/minio"
      ignore_errors: yes

    - name: Stop Monitoring stack
      command: docker compose -f {{ stacks_base_path }}/monitoring/docker-compose.yml down
      args:
        chdir: "{{ stacks_base_path }}/monitoring"
      ignore_errors: yes

    - name: Stop Production stack
      command: docker compose -f {{ stacks_base_path }}/production/docker-compose.base.yml -f {{ stacks_base_path }}/production/docker-compose.production.yml down
      args:
        chdir: "{{ stacks_base_path }}/production"
      ignore_errors: yes

    - name: Stop Staging stack
      command: docker compose -f {{ stacks_base_path }}/staging/docker-compose.base.yml -f {{ stacks_base_path }}/staging/docker-compose.staging.yml down
      args:
        chdir: "{{ stacks_base_path }}/staging"
      ignore_errors: yes

    - name: Stop WireGuard stack
      command: docker compose -f {{ stacks_base_path }}/wireguard/docker-compose.yml down
      args:
        chdir: "{{ stacks_base_path }}/wireguard"
      ignore_errors: yes

    # Remove all containers (including stopped ones)
    - name: Get all container IDs
      command: docker ps -a -q
      register: all_containers
      changed_when: false

    - name: Remove all containers
      command: docker rm -f {{ item }}
      loop: "{{ all_containers.stdout_lines }}"
      when: all_containers.stdout_lines | length > 0
      ignore_errors: yes

    # Check for port conflicts
    - name: Check what's using port 80
      command: sudo ss -tlnp 'sport = :80'
      register: port_80_check
      changed_when: false
      ignore_errors: yes

    - name: Display port 80 status
      debug:
        msg: "{{ port_80_check.stdout_lines if port_80_check.rc == 0 else 'Port 80 is free or cannot be checked' }}"

    - name: Check what's using port 443
      command: sudo ss -tlnp 'sport = :443'
      register: port_443_check
      changed_when: false
      ignore_errors: yes

    - name: Display port 443 status
      debug:
        msg: "{{ port_443_check.stdout_lines if port_443_check.rc == 0 else 'Port 443 is free or cannot be checked' }}"

    # Clean up networks
    - name: Remove traefik-public network
      community.docker.docker_network:
        name: traefik-public
        state: absent
      ignore_errors: yes

    - name: Remove app-internal network
      community.docker.docker_network:
        name: app-internal
        state: absent
      ignore_errors: yes

    - name: Get all custom networks
      command: docker network ls --format '{{ "{{" }}.Name{{ "}}" }}'
      register: all_networks
      changed_when: false

    - name: Remove custom networks (except default ones)
      community.docker.docker_network:
        name: "{{ item }}"
        state: absent
      loop: "{{ all_networks.stdout_lines }}"
      when:
        - item not in ['bridge', 'host', 'none']
        - item not in ['traefik-public', 'app-internal']  # Already removed above
      ignore_errors: yes

    # Clean up volumes (if requested)
    - name: Get all volumes
      command: docker volume ls -q
      register: all_volumes
      changed_when: false
      when: cleanup_volumes | bool

    - name: Remove all volumes
      command: docker volume rm {{ item }}
      loop: "{{ all_volumes.stdout_lines }}"
      when:
        - cleanup_volumes | bool
        - all_volumes.stdout_lines | length > 0
      ignore_errors: yes

    # Final verification
    - name: List remaining containers
      command: docker ps -a
      register: containers_after
      changed_when: false

    - name: Display remaining containers
      debug:
        msg: "{{ containers_after.stdout_lines }}"

    - name: List remaining networks
      command: docker network ls
      register: networks_after
      changed_when: false

    - name: Display remaining networks
      debug:
        msg: "{{ networks_after.stdout_lines }}"

    - name: Verify ports 80 and 443 are free
      command: sudo ss -tlnp 'sport = :{{ item }}'
      register: port_check
      changed_when: false
      failed_when: port_check.rc == 0 and port_check.stdout_lines | length > 1
      loop:
        - 80
        - 443
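The free-port check counts output lines because `ss` always prints a header row: a free port yields exactly one line, and anything beyond that lists listening sockets. The same test as a standalone one-liner:

```bash
# Header-only output (1 line) = port free; additional lines show the
# listening sockets and their owning processes.
[ "$(sudo ss -tlnp 'sport = :80' | wc -l)" -le 1 ] \
  && echo "port 80 free" || echo "port 80 in use"
```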
    - name: Display cleanup summary
      debug:
        msg:
          - "=== Cleanup Complete ==="
          - "All containers stopped and removed"
          - "Networks cleaned up"
          - "Volumes removed: {{ cleanup_volumes }}"
          - ""
          - "Next steps:"
          - "1. Run sync-stacks.yml to sync configurations"
          - "2. Run setup-infrastructure.yml to deploy fresh infrastructure"

@@ -0,0 +1,260 @@
---
# Rollback Redeploy
# Restores Traefik and Gitea from the backup created before a redeploy
#
# Usage:
#   ansible-playbook -i inventory/production.yml playbooks/maintenance/rollback-redeploy.yml \
#     --vault-password-file secrets/.vault_pass \
#     -e "backup_name=redeploy-backup-1234567890"

- name: Rollback Redeploy
  hosts: production
  gather_facts: yes
  become: no
  vars:
    traefik_stack_path: "{{ stacks_base_path }}/traefik"
    gitea_stack_path: "{{ stacks_base_path }}/gitea"
    backup_base_path: "{{ backups_path | default('/home/deploy/backups') }}"

  tasks:
    - name: Validate backup name
      ansible.builtin.fail:
        msg: "backup_name is required. Use: -e 'backup_name=redeploy-backup-1234567890'"
      when: (backup_name | default('')) == ''

    - name: Check if backup directory exists
      ansible.builtin.stat:
        path: "{{ backup_base_path }}/{{ backup_name }}"
      register: backup_dir_stat

    - name: Fail if backup not found
      ansible.builtin.fail:
        msg: "Backup directory not found: {{ backup_base_path }}/{{ backup_name }}"
      when: not backup_dir_stat.stat.exists

    - name: Display rollback plan
      ansible.builtin.debug:
        msg: |
          ================================================================================
          ROLLBACK REDEPLOY
          ================================================================================

          This playbook will restore from backup: {{ backup_base_path }}/{{ backup_name }}

          Steps:
          1. Stop Traefik and Gitea stacks
          2. Restore Gitea volumes
          3. Restore SSL certificates (acme.json)
          4. Restore Gitea configuration (app.ini)
          5. Restore Traefik configuration
          6. Restore PostgreSQL data (if applicable)
          7. Restart stacks
          8. Verify

          ⚠️ WARNING: This will overwrite the current state!

          ================================================================================

    # ========================================
    # 1. STOP STACKS
    # ========================================
    - name: Stop Traefik stack
      ansible.builtin.shell: |
        cd {{ traefik_stack_path }}
        docker compose down
      register: traefik_stop
      changed_when: traefik_stop.rc == 0
      failed_when: false

    - name: Stop Gitea stack
      ansible.builtin.shell: |
        cd {{ gitea_stack_path }}
        docker compose down
      register: gitea_stop
      changed_when: gitea_stop.rc == 0
      failed_when: false

    # ========================================
    # 2. RESTORE GITEA VOLUMES
    # ========================================
    - name: List Gitea volume backups
      ansible.builtin.shell: |
        ls -1 "{{ backup_base_path }}/{{ backup_name }}/gitea-volume-"*.tar.gz 2>/dev/null || echo ""
      register: gitea_volume_backups
      changed_when: false

    - name: Restore Gitea volumes
      ansible.builtin.shell: |
        for backup_file in {{ backup_base_path }}/{{ backup_name }}/gitea-volume-*.tar.gz; do
          if [ -f "$backup_file" ]; then
            volume_name=$(basename "$backup_file" .tar.gz | sed 's/gitea-volume-//')
            echo "Restoring volume: $volume_name"
            docker volume create "$volume_name" 2>/dev/null || true
            docker run --rm \
              -v "$volume_name:/target" \
              -v "{{ backup_base_path }}/{{ backup_name }}:/backup:ro" \
              alpine sh -c "cd /target && tar xzf /backup/$(basename $backup_file)"
          fi
        done
      when: gitea_volume_backups.stdout != ""
      register: gitea_volumes_restore
      changed_when: gitea_volumes_restore.rc == 0
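Restoring mirrors the backup step: recreate the volume, then unpack the archive into it from a read-only mount of the backup directory. For a single volume (the `gitea-data` name is again assumed for illustration):

```bash
# The volume is created if missing; tar runs inside the container, so
# ownership and modes recorded in the archive are preserved.
docker volume create gitea-data >/dev/null
docker run --rm \
  -v gitea-data:/target \
  -v "$PWD/backups:/backup:ro" \
  alpine sh -c 'cd /target && tar xzf /backup/gitea-volume-gitea-data.tar.gz'
```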
    # ========================================
    # 3. RESTORE SSL CERTIFICATES
    # ========================================
    - name: Restore acme.json
      ansible.builtin.copy:
        src: "{{ backup_base_path }}/{{ backup_name }}/acme.json"
        dest: "{{ traefik_stack_path }}/acme.json"
        remote_src: yes
        mode: '0600'
      register: acme_restore
      failed_when: false

    # ========================================
    # 4. RESTORE CONFIGURATIONS
    # ========================================
    - name: Restore Gitea docker-compose.yml
      ansible.builtin.copy:
        src: "{{ backup_base_path }}/{{ backup_name }}/gitea-docker-compose.yml"
        dest: "{{ gitea_stack_path }}/docker-compose.yml"
        remote_src: yes
        mode: '0644'
      register: gitea_compose_restore
      failed_when: false

    - name: Restore Traefik configuration
      ansible.builtin.shell: |
        cd {{ traefik_stack_path }}
        tar xzf "{{ backup_base_path }}/{{ backup_name }}/traefik-config.tar.gz" 2>/dev/null || echo "Some files may be missing"
      register: traefik_config_restore
      changed_when: traefik_config_restore.rc == 0
      failed_when: false

    # ========================================
    # 5. RESTORE POSTGRESQL DATA
    # ========================================
    - name: Find PostgreSQL backup
      ansible.builtin.shell: |
        ls -1 "{{ backup_base_path }}/{{ backup_name }}/postgresql-all-"*.sql.gz 2>/dev/null | head -1 || echo ""
      register: postgres_backup_file
      changed_when: false

    - name: Restore PostgreSQL database
      ansible.builtin.shell: |
        cd {{ stacks_base_path }}/postgresql
        if docker compose ps postgres | grep -q "Up"; then
          gunzip -c "{{ postgres_backup_file.stdout }}" | docker compose exec -T postgres psql -U postgres
          echo "PostgreSQL restored"
        else
          echo "PostgreSQL not running, skipping restore"
        fi
      when: postgres_backup_file.stdout != ""
      register: postgres_restore
      changed_when: false
      failed_when: false

    # ========================================
    # 6. RESTART STACKS
    # ========================================
    - name: Deploy Traefik stack
      community.docker.docker_compose_v2:
        project_src: "{{ traefik_stack_path }}"
        state: present
        pull: always
      register: traefik_deploy

    - name: Wait for Traefik to be ready
      ansible.builtin.shell: |
        cd {{ traefik_stack_path }}
        docker compose ps traefik | grep -Eiq "Up|running"
      register: traefik_ready
      changed_when: false
      until: traefik_ready.rc == 0
      retries: 12
      delay: 5
      failed_when: traefik_ready.rc != 0

    - name: Deploy Gitea stack
      community.docker.docker_compose_v2:
        project_src: "{{ gitea_stack_path }}"
        state: present
        pull: always
      register: gitea_deploy

    - name: Restore Gitea app.ini
      ansible.builtin.shell: |
        if [ -f "{{ backup_base_path }}/{{ backup_name }}/gitea-app.ini" ]; then
          cd {{ gitea_stack_path }}
          docker compose exec -T gitea sh -c "cat > /data/gitea/conf/app.ini" < "{{ backup_base_path }}/{{ backup_name }}/gitea-app.ini"
          docker compose restart gitea
          echo "app.ini restored and Gitea restarted"
        else
          echo "No app.ini backup found"
        fi
      register: gitea_app_ini_restore
      changed_when: false
      failed_when: false

    - name: Wait for Gitea to be ready
      ansible.builtin.shell: |
        cd {{ gitea_stack_path }}
        docker compose ps gitea | grep -Eiq "Up|running"
      register: gitea_ready
      changed_when: false
      until: gitea_ready.rc == 0
      retries: 12
      delay: 5
      failed_when: gitea_ready.rc != 0

    # ========================================
    # 7. VERIFY
    # ========================================
    - name: Wait for Gitea to be healthy
      ansible.builtin.shell: |
        cd {{ gitea_stack_path }}
        docker compose exec -T gitea curl -f http://localhost:3000/api/healthz 2>&1 | grep -q "status.*pass" && echo "HEALTHY" || echo "NOT_HEALTHY"
      register: gitea_health
      changed_when: false
      until: gitea_health.stdout == "HEALTHY"
      retries: 30
      delay: 2
      failed_when: false

    - name: Summary
      ansible.builtin.debug:
        msg: |
          ================================================================================
          ROLLBACK SUMMARY
          ================================================================================

          Restored from backup: {{ backup_base_path }}/{{ backup_name }}

          Restored:
          - Gitea volumes: {% if gitea_volumes_restore.changed %}✅{% else %}ℹ️ No volumes to restore{% endif %}
          - SSL certificates: {% if acme_restore.changed %}✅{% else %}ℹ️ Not found{% endif %}
          - Gitea docker-compose.yml: {% if gitea_compose_restore.changed %}✅{% else %}ℹ️ Not found{% endif %}
          - Traefik configuration: {% if traefik_config_restore.rc == 0 %}✅{% else %}⚠️ Some files may be missing{% endif %}
          - PostgreSQL data: {% if postgres_restore.rc == 0 and 'restored' in postgres_restore.stdout %}✅{% else %}ℹ️ Not restored{% endif %}
          - Gitea app.ini: {% if gitea_app_ini_restore.rc == 0 and 'restored' in gitea_app_ini_restore.stdout %}✅{% else %}ℹ️ Not found{% endif %}

          Status:
          - Traefik: {% if traefik_ready.rc == 0 %}✅ Running{% else %}❌ Not running{% endif %}
          - Gitea: {% if gitea_ready.rc == 0 %}✅ Running{% else %}❌ Not running{% endif %}
          - Gitea Health: {% if gitea_health.stdout == 'HEALTHY' %}✅ Healthy{% else %}❌ Not healthy{% endif %}

          Next steps:
          1. Test Gitea: curl -k https://{{ gitea_domain }}/api/healthz
          2. Check logs if issues: cd {{ gitea_stack_path }} && docker compose logs gitea --tail=50

          ================================================================================
299
deployment/legacy/ansible/ansible/playbooks/manage/gitea.yml
Normal file
@@ -0,0 +1,299 @@
---
# Consolidated Gitea Management Playbook
# Consolidates: fix-gitea-timeouts.yml, fix-gitea-traefik-connection.yml,
#               fix-gitea-ssl-routing.yml, fix-gitea-servers-transport.yml,
#               fix-gitea-complete.yml, restart-gitea-complete.yml,
#               restart-gitea-with-cache.yml
#
# Usage:
#   # Restart Gitea
#   ansible-playbook -i inventory/production.yml playbooks/manage/gitea.yml --tags restart
#
#   # Fix timeouts (restart Gitea and Traefik)
#   ansible-playbook -i inventory/production.yml playbooks/manage/gitea.yml --tags fix-timeouts
#
#   # Fix SSL/routing issues
#   ansible-playbook -i inventory/production.yml playbooks/manage/gitea.yml --tags fix-ssl
#
#   # Complete fix (runner stop + restart + service discovery)
#   ansible-playbook -i inventory/production.yml playbooks/manage/gitea.yml --tags complete

- name: Manage Gitea
  hosts: production
  gather_facts: yes
  become: no
  vars:
    gitea_stack_path: "{{ stacks_base_path }}/gitea"
    traefik_stack_path: "{{ stacks_base_path }}/traefik"
    gitea_runner_path: "{{ stacks_base_path }}/../gitea-runner"
    gitea_url: "https://{{ gitea_domain }}"
    gitea_container_name: "gitea"
    traefik_container_name: "traefik"

  tasks:
    - name: Display management plan
      ansible.builtin.debug:
        msg: |
          ================================================================================
          GITEA MANAGEMENT
          ================================================================================

          Running management tasks with tags: {{ ansible_run_tags | default(['all']) }}

          Available actions:
          - restart: Restart Gitea container
          - fix-timeouts: Restart Gitea and Traefik to fix timeouts
          - fix-ssl: Fix SSL/routing issues
          - fix-servers-transport: Update ServersTransport configuration
          - complete: Complete fix (stop runner, restart services, verify)

          ================================================================================

    # ========================================
    # COMPLETE FIX (--tags complete)
    # ========================================
    - name: Check Gitea Runner status
      ansible.builtin.shell: |
        cd {{ gitea_runner_path }}
        docker compose ps gitea-runner 2>/dev/null || echo "Runner not found"
      register: runner_status
      changed_when: false
      failed_when: false
      tags:
        - complete

    - name: Stop Gitea Runner to reduce load
      ansible.builtin.shell: |
        cd {{ gitea_runner_path }}
        docker compose stop gitea-runner
      register: runner_stop
      changed_when: runner_stop.rc == 0
      failed_when: false
      when: runner_status.rc == 0
      tags:
        - complete

    # ========================================
    # RESTART GITEA (--tags restart, fix-timeouts, complete)
    # ========================================
    - name: Check Gitea container status before restart
      ansible.builtin.shell: |
        cd {{ gitea_stack_path }}
        docker compose ps {{ gitea_container_name }}
      register: gitea_status_before
      changed_when: false
      tags:
        - restart
        - fix-timeouts
        - complete

    - name: Restart Gitea container
      ansible.builtin.shell: |
        cd {{ gitea_stack_path }}
        docker compose restart {{ gitea_container_name }}
      register: gitea_restart
      changed_when: gitea_restart.rc == 0
      tags:
        - restart
        - fix-timeouts
        - complete

    - name: Wait for Gitea to be ready (direct check)
      ansible.builtin.shell: |
        cd {{ gitea_stack_path }}
        # seq instead of {1..30}: the shell module runs under /bin/sh,
        # which does not support bash brace expansion
        for i in $(seq 1 30); do
          if docker compose exec -T {{ gitea_container_name }} curl -f http://localhost:3000/api/healthz >/dev/null 2>&1; then
            echo "Gitea is ready"
            exit 0
          fi
          sleep 2
        done
        echo "Gitea not ready after 60 seconds"
        exit 1
      register: gitea_ready
      changed_when: false
      failed_when: false
      tags:
        - restart
        - fix-timeouts
        - complete

    # ========================================
    # RESTART TRAEFIK (--tags fix-timeouts, complete)
    # ========================================
    - name: Check Traefik container status before restart
      ansible.builtin.shell: |
        cd {{ traefik_stack_path }}
        docker compose ps {{ traefik_container_name }}
      register: traefik_status_before
      changed_when: false
      tags:
        - fix-timeouts
        - complete

    - name: Restart Traefik to refresh service discovery
      ansible.builtin.shell: |
        cd {{ traefik_stack_path }}
        docker compose restart {{ traefik_container_name }}
      register: traefik_restart
      changed_when: traefik_restart.rc == 0
      when: traefik_auto_restart | default(false) | bool
      tags:
        - fix-timeouts
        - complete

    - name: Wait for Traefik to be ready
      ansible.builtin.wait_for:
        timeout: 30
        delay: 2
      changed_when: false
      when: traefik_restart.changed | default(false) | bool
      tags:
        - fix-timeouts
        - complete

    # ========================================
    # FIX SERVERS TRANSPORT (--tags fix-servers-transport)
    # ========================================
    - name: Sync Gitea stack configuration
      ansible.builtin.synchronize:
        src: "{{ playbook_dir }}/../../stacks/gitea/"
        dest: "{{ gitea_stack_path }}/"
        delete: no
        recursive: yes
        rsync_opts:
          - "--chmod=D755,F644"
          - "--exclude=.git"
          - "--exclude=*.log"
          - "--exclude=data/"
          - "--exclude=volumes/"
      tags:
        - fix-servers-transport

    - name: Restart Gitea container to apply new labels
      ansible.builtin.shell: |
        cd {{ gitea_stack_path }}
        docker compose up -d --force-recreate {{ gitea_container_name }}
      register: gitea_restart_transport
      changed_when: gitea_restart_transport.rc == 0
      tags:
        - fix-servers-transport

    # ========================================
    # VERIFICATION (--tags fix-timeouts, fix-ssl, complete)
    # ========================================
    - name: Wait for Gitea to be reachable via Traefik (with retries)
      ansible.builtin.uri:
        url: "{{ gitea_url }}/api/healthz"
        method: GET
        status_code: [200]
        validate_certs: false
        timeout: 10
      register: gitea_health_via_traefik
      until: gitea_health_via_traefik.status == 200
      retries: 15
      delay: 2
      changed_when: false
      failed_when: false
      when: (traefik_restart.changed | default(false) | bool) or (gitea_restart.changed | default(false) | bool)
      tags:
        - fix-timeouts
        - fix-ssl
        - complete

    - name: Check if Gitea is in Traefik service discovery
      ansible.builtin.shell: |
        cd {{ traefik_stack_path }}
        docker compose exec -T {{ traefik_container_name }} traefik show providers docker 2>/dev/null | grep -i "gitea" || echo "NOT_FOUND"
      register: traefik_gitea_service_check
      changed_when: false
      failed_when: false
      when: (traefik_restart.changed | default(false) | bool) or (gitea_restart.changed | default(false) | bool)
      tags:
        - fix-timeouts
        - fix-ssl
        - complete

    - name: Final status check
      ansible.builtin.uri:
        url: "{{ gitea_url }}/api/healthz"
        method: GET
        status_code: [200]
        validate_certs: false
        timeout: 10
      register: final_status
      changed_when: false
      failed_when: false
      tags:
        - fix-timeouts
        - fix-ssl
        - complete

    # ========================================
    # SUMMARY
    # ========================================
    - name: Summary
      ansible.builtin.debug:
        msg: |
          ================================================================================
          GITEA MANAGEMENT SUMMARY
          ================================================================================

          Actions performed:
          {% if 'complete' in ansible_run_tags %}
          - Gitea Runner: {% if runner_stop.changed | default(false) %}✅ Stopped{% else %}ℹ️ Not active or not found{% endif %}
          {% endif %}
          {% if 'restart' in ansible_run_tags or 'fix-timeouts' in ansible_run_tags or 'complete' in ansible_run_tags %}
          - Gitea Restart: {% if gitea_restart.changed %}✅ Performed{% else %}ℹ️ Not needed{% endif %}
          - Gitea Ready: {% if gitea_ready.rc == 0 %}✅ Ready{% else %}❌ Not ready{% endif %}
          {% endif %}
          {% if 'fix-timeouts' in ansible_run_tags or 'complete' in ansible_run_tags %}
          - Traefik Restart: {% if traefik_restart.changed %}✅ Performed{% else %}ℹ️ Not needed (traefik_auto_restart=false){% endif %}
          {% endif %}
          {% if 'fix-servers-transport' in ansible_run_tags %}
          - ServersTransport Update: {% if gitea_restart_transport.changed %}✅ Applied{% else %}ℹ️ Not needed{% endif %}
          {% endif %}

          Final Status:
          {% if 'fix-timeouts' in ansible_run_tags or 'fix-ssl' in ansible_run_tags or 'complete' in ansible_run_tags %}
          - Gitea via Traefik: {% if final_status.status == 200 %}✅ Reachable (Status: 200){% else %}❌ Not reachable (Status: {{ final_status.status | default('TIMEOUT') }}){% endif %}
          - Traefik Service Discovery: {% if 'NOT_FOUND' not in traefik_gitea_service_check.stdout %}✅ Gitea found{% else %}❌ Gitea not found{% endif %}
          {% endif %}

          {% if final_status.status == 200 and 'NOT_FOUND' not in traefik_gitea_service_check.stdout %}
          ✅ SUCCESS: Gitea is now reachable via Traefik!
          URL: {{ gitea_url }}

          Next steps:
          1. Test Gitea in browser: {{ gitea_url }}
          {% if 'complete' in ansible_run_tags %}
          2. If everything is stable, you can reactivate the runner:
             cd {{ gitea_runner_path }} && docker compose up -d gitea-runner
          3. Monitor whether the runner overloads Gitea again
          {% endif %}
          {% else %}
          ⚠️ PROBLEM: Gitea is not fully reachable

          Possible causes:
          {% if final_status.status != 200 %}
          - Gitea does not respond via Traefik (Status: {{ final_status.status | default('TIMEOUT') }})
          {% endif %}
          {% if 'NOT_FOUND' in traefik_gitea_service_check.stdout %}
          - Traefik service discovery has not recognized Gitea yet
          {% endif %}

          Next steps:
          1. Wait 1-2 minutes and test again: curl -k {{ gitea_url }}/api/healthz
          2. Check Traefik logs: cd {{ traefik_stack_path }} && docker compose logs {{ traefik_container_name }} --tail=50
          3. Check Gitea logs: cd {{ gitea_stack_path }} && docker compose logs {{ gitea_container_name }} --tail=50
          4. Run diagnosis: ansible-playbook -i inventory/production.yml playbooks/diagnose/gitea.yml
          {% endif %}

          ================================================================================
167
deployment/legacy/ansible/ansible/playbooks/manage/traefik.yml
Normal file
@@ -0,0 +1,167 @@
---
|
||||
# Consolidated Traefik Management Playbook
|
||||
# Consolidates: stabilize-traefik.yml, disable-traefik-auto-restarts.yml
|
||||
#
|
||||
# Usage:
|
||||
# # Stabilize Traefik (fix acme.json, ensure running, monitor)
|
||||
# ansible-playbook -i inventory/production.yml playbooks/manage/traefik.yml --tags stabilize
|
||||
#
|
||||
# # Disable auto-restarts
|
||||
# ansible-playbook -i inventory/production.yml playbooks/manage/traefik.yml --tags disable-auto-restart
|
||||
|
||||
- name: Manage Traefik
|
||||
hosts: production
|
||||
gather_facts: yes
|
||||
become: no
|
||||
vars:
|
||||
traefik_stack_path: "{{ stacks_base_path }}/traefik"
|
||||
traefik_container_name: "traefik"
|
||||
traefik_stabilize_wait_minutes: "{{ traefik_stabilize_wait_minutes | default(10) }}"
|
||||
traefik_stabilize_check_interval: 60
|
||||
|
||||
tasks:
|
||||
- name: Display management plan
|
||||
ansible.builtin.debug:
|
||||
msg: |
|
||||
================================================================================
|
||||
TRAEFIK MANAGEMENT
|
||||
================================================================================
|
||||
|
||||
Running management tasks with tags: {{ ansible_run_tags | default(['all']) }}
|
||||
|
||||
Available actions:
|
||||
- stabilize: Fix acme.json, ensure running, monitor stability
|
||||
- disable-auto-restart: Check and document auto-restart mechanisms
|
||||
|
||||
================================================================================
|
||||
|
||||
# ========================================
|
||||
# STABILIZE (--tags stabilize)
|
||||
# ========================================
|
||||
- name: Fix acme.json permissions
|
||||
ansible.builtin.file:
|
||||
path: "{{ traefik_stack_path }}/acme.json"
|
||||
state: file
|
||||
mode: '0600'
|
||||
owner: "{{ ansible_user | default('deploy') }}"
|
||||
group: "{{ ansible_user | default('deploy') }}"
|
||||
register: acme_permissions_fixed
|
||||
tags:
|
||||
- stabilize
|
||||
|
||||
- name: Ensure Traefik container is running
|
||||
ansible.builtin.shell: |
|
||||
cd {{ traefik_stack_path }}
|
||||
docker compose up -d {{ traefik_container_name }}
|
||||
register: traefik_start
|
||||
changed_when: traefik_start.rc == 0
|
||||
tags:
|
||||
- stabilize
|
||||
|
||||
- name: Wait for Traefik to be ready
|
||||
ansible.builtin.wait_for:
|
||||
timeout: 30
|
||||
delay: 2
|
||||
changed_when: false
|
||||
tags:
|
||||
- stabilize
|
||||
|
||||
- name: Monitor Traefik stability
|
||||
ansible.builtin.shell: |
|
||||
cd {{ traefik_stack_path }}
|
||||
docker compose ps {{ traefik_container_name }} --format "{{ '{{' }}.State{{ '}}' }}" | head -1 || echo "UNKNOWN"
|
||||
register: traefik_state_check
|
||||
changed_when: false
|
||||
until: traefik_state_check.stdout == "running"
|
||||
retries: "{{ (traefik_stabilize_wait_minutes | int * 60 / traefik_stabilize_check_interval) | int }}"
|
||||
delay: "{{ traefik_stabilize_check_interval }}"
|
||||
tags:
|
||||
- stabilize
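# Worked example of the retries expression above: with the default
# traefik_stabilize_wait_minutes=10 and a check interval of 60 seconds,
# (10 * 60 / 60) | int = 10 retries at a 60-second delay, i.e. the task
# polls once per minute for the whole monitoring window.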
|
||||
|
||||
- name: Check Traefik logs for restarts during monitoring
|
||||
ansible.builtin.shell: |
|
||||
cd {{ traefik_stack_path }}
|
||||
docker compose logs {{ traefik_container_name }} --since "{{ traefik_stabilize_wait_minutes }}m" 2>&1 | grep -iE "stopping server gracefully|I have to go" | wc -l
|
||||
register: restarts_during_monitoring
|
||||
changed_when: false
|
||||
tags:
|
||||
- stabilize
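# The same restart count can be checked by hand (illustrative sketch;
# substitute the real traefik_stack_path on the host):
#   cd /home/deploy/deployment/stacks/traefik
#   docker compose logs traefik --since "10m" 2>&1 \
#     | grep -icE "stopping server gracefully|I have to go"
# grep -c counts matching lines directly, equivalent to the grep | wc -l
# pipeline used in the task above.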
|
||||
|
||||
# ========================================
|
||||
# DISABLE AUTO-RESTART (--tags disable-auto-restart)
|
||||
# ========================================
|
||||
- name: Check Ansible traefik_auto_restart setting
|
||||
ansible.builtin.shell: |
|
||||
grep -r "traefik_auto_restart" /home/deploy/deployment/ansible/inventory/group_vars/ 2>/dev/null | head -5 || echo "No traefik_auto_restart setting found"
|
||||
register: ansible_auto_restart_setting
|
||||
changed_when: false
|
||||
tags:
|
||||
- disable-auto-restart
|
||||
|
||||
- name: Check for cronjobs that restart Traefik
|
||||
ansible.builtin.shell: |
|
||||
(crontab -l 2>/dev/null || true) | grep -E "traefik|docker.*compose.*restart.*traefik|docker.*stop.*traefik" || echo "No cronjobs found"
|
||||
register: traefik_cronjobs
|
||||
changed_when: false
|
||||
tags:
|
||||
- disable-auto-restart
|
||||
|
||||
- name: Check systemd timers for Traefik
|
||||
ansible.builtin.shell: |
|
||||
systemctl list-timers --all --no-pager | grep -E "traefik|docker.*compose.*traefik" || echo "No Traefik-related timers"
|
||||
register: traefik_timers
|
||||
changed_when: false
|
||||
tags:
|
||||
- disable-auto-restart
|
||||
|
||||
# ========================================
|
||||
# SUMMARY
|
||||
# ========================================
|
||||
- name: Summary
|
||||
ansible.builtin.debug:
|
||||
msg: |
|
||||
================================================================================
|
||||
TRAEFIK MANAGEMENT SUMMARY
|
||||
================================================================================
|
||||
|
||||
{% if 'stabilize' in ansible_run_tags %}
|
||||
Stabilization:
|
||||
- acme.json permissions: {% if acme_permissions_fixed.changed %}✅ Fixed{% else %}ℹ️ Already correct{% endif %}
|
||||
- Traefik started: {% if traefik_start.changed %}✅ Started{% else %}ℹ️ Already running{% endif %}
|
||||
- Stability monitoring: {{ traefik_stabilize_wait_minutes }} minutes
|
||||
- Restarts during monitoring: {{ restarts_during_monitoring.stdout | default('0') }}
|
||||
|
||||
{% if (restarts_during_monitoring.stdout | default('0') | int) == 0 %}
|
||||
✅ Traefik ran stable during monitoring period!
|
||||
{% else %}
|
||||
⚠️ {{ restarts_during_monitoring.stdout }} restarts detected during monitoring
|
||||
→ Run diagnosis: ansible-playbook -i inventory/production.yml playbooks/diagnose/traefik.yml --tags restart-source
|
||||
{% endif %}
|
||||
{% endif %}
|
||||
|
||||
{% if 'disable-auto-restart' in ansible_run_tags %}
|
||||
Auto-Restart Analysis:
|
||||
- Ansible setting: {{ ansible_auto_restart_setting.stdout | default('Not found') }}
|
||||
- Cronjobs: {{ traefik_cronjobs.stdout | default('None found') }}
|
||||
- Systemd timers: {{ traefik_timers.stdout | default('None found') }}
|
||||
|
||||
Recommendations:
|
||||
{% if ansible_auto_restart_setting.stdout is search('traefik_auto_restart.*true') %}
|
||||
- Set traefik_auto_restart: false in group_vars
|
||||
{% endif %}
|
||||
{% if 'No cronjobs' not in traefik_cronjobs.stdout %}
|
||||
- Remove or disable cronjobs that restart Traefik
|
||||
{% endif %}
|
||||
{% if 'No Traefik-related timers' not in traefik_timers.stdout %}
|
||||
- Disable systemd timers that restart Traefik
|
||||
{% endif %}
|
||||
{% endif %}
|
||||
|
||||
================================================================================
@@ -0,0 +1,192 @@
|
||||
---
|
||||
# Monitor Workflow Performance
|
||||
# Collects comprehensive metrics about workflow execution, Gitea load, and system resources
|
||||
- name: Monitor Workflow Performance
|
||||
hosts: production
|
||||
gather_facts: yes
|
||||
become: no
|
||||
vars:
|
||||
monitoring_output_dir: "/home/deploy/monitoring/workflow-metrics"
|
||||
monitoring_lookback_hours: 24
|
||||
gitea_stack_path: "{{ stacks_base_path }}/gitea"
|
||||
traefik_stack_path: "{{ stacks_base_path }}/traefik"
|
||||
|
||||
tasks:
|
||||
- name: Create monitoring output directory
|
||||
ansible.builtin.file:
|
||||
path: "{{ monitoring_output_dir }}"
|
||||
state: directory
|
||||
mode: '0755'
|
||||
|
||||
- name: Get system load average
|
||||
ansible.builtin.shell: |
|
||||
uptime | awk -F'load average:' '{print $2}' | awk '{print $1}' | tr -d ' '
|
||||
register: system_load
|
||||
changed_when: false
|
||||
|
||||
- name: Get Docker container count
|
||||
ansible.builtin.shell: |
|
||||
docker ps --format '{{ '{{' }}.Names{{ '}}' }}' | wc -l
|
||||
register: docker_container_count
|
||||
changed_when: false
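# The {{ '{{' }} / {{ '}}' }} pairs only escape Jinja2 templating so the Go
# template survives rendering; the command the host actually runs is:
#   docker ps --format '{{.Names}}' | wc -l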
|
||||
|
||||
- name: Get Gitea Runner status
|
||||
ansible.builtin.shell: |
|
||||
if docker ps --format '{{ '{{' }}.Names{{ '}}' }}' | grep -q "gitea-runner"; then
|
||||
echo "running"
|
||||
else
|
||||
echo "stopped"
|
||||
fi
|
||||
register: gitea_runner_status
|
||||
changed_when: false
|
||||
|
||||
- name: Get Gitea container resource usage
|
||||
ansible.builtin.shell: |
|
||||
docker stats gitea --no-stream --format "{{ '{{' }}.CPUPerc{{ '}}' }},{{ '{{' }}.MemUsage{{ '}}' }},{{ '{{' }}.MemPerc{{ '}}' }}" 2>/dev/null || echo "N/A,N/A,N/A"
|
||||
register: gitea_stats
|
||||
changed_when: false
|
||||
failed_when: false
|
||||
|
||||
- name: Get Traefik container resource usage
|
||||
ansible.builtin.shell: |
|
||||
docker stats traefik --no-stream --format "{{ '{{' }}.CPUPerc{{ '}}' }},{{ '{{' }}.MemUsage{{ '}}' }},{{ '{{' }}.MemPerc{{ '}}' }}" 2>/dev/null || echo "N/A,N/A,N/A"
|
||||
register: traefik_stats
|
||||
changed_when: false
|
||||
failed_when: false
|
||||
|
||||
- name: Check Gitea API response time
|
||||
ansible.builtin.uri:
|
||||
url: "https://{{ gitea_domain }}/api/healthz"
|
||||
method: GET
|
||||
status_code: [200]
|
||||
validate_certs: false
|
||||
timeout: 10
|
||||
register: gitea_api_test
|
||||
changed_when: false
|
||||
failed_when: false
|
||||
|
||||
- name: Get Gitea logs for workflow activity (last {{ monitoring_lookback_hours }} hours)
|
||||
ansible.builtin.shell: |
|
||||
cd {{ gitea_stack_path }}
|
||||
docker compose logs gitea --since "{{ monitoring_lookback_hours }}h" 2>&1 | \
|
||||
grep -iE "workflow|action|runner" | \
|
||||
tail -50 || echo "No workflow activity found"
|
||||
register: gitea_workflow_logs
|
||||
changed_when: false
|
||||
failed_when: false
|
||||
|
||||
- name: Count workflow-related log entries
|
||||
ansible.builtin.shell: |
|
||||
cd {{ gitea_stack_path }}
|
||||
docker compose logs gitea --since "{{ monitoring_lookback_hours }}h" 2>&1 | \
|
||||
grep -iE "workflow|action|runner" | \
|
||||
wc -l
|
||||
register: workflow_log_count
|
||||
changed_when: false
|
||||
failed_when: false
|
||||
|
||||
- name: Get disk usage for Gitea data
|
||||
ansible.builtin.shell: |
|
||||
du -sh {{ gitea_stack_path }}/data 2>/dev/null | awk '{print $1}' || echo "N/A"
|
||||
register: gitea_data_size
|
||||
changed_when: false
|
||||
failed_when: false
|
||||
|
||||
- name: Get Docker system disk usage
|
||||
ansible.builtin.shell: |
|
||||
docker system df --format "{{ '{{' }}.Size{{ '}}' }}" 2>/dev/null | head -1 || echo "N/A"
|
||||
register: docker_disk_usage
|
||||
changed_when: false
|
||||
failed_when: false
|
||||
|
||||
- name: Get memory usage
|
||||
ansible.builtin.shell: |
|
||||
free -h | grep Mem | awk '{print $3 "/" $2}'
|
||||
register: memory_usage
|
||||
changed_when: false
|
||||
|
||||
- name: Get CPU usage (1 minute average)
|
||||
ansible.builtin.shell: |
|
||||
top -bn1 | grep "Cpu(s)" | sed "s/.*, *\([0-9.]*\)%* id.*/\1/" | awk '{print 100 - $1}'
|
||||
register: cpu_usage
|
||||
changed_when: false
|
||||
failed_when: false
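# Worked example: for a top line of "%Cpu(s):  1.7 us, ... 96.5 id, ..."
# the sed expression extracts the idle value 96.5 and awk prints
# 100 - 96.5 = 3.5, i.e. total CPU busy percent.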
|
||||
|
||||
- name: Generate metrics JSON
|
||||
ansible.builtin.copy:
|
||||
dest: "{{ monitoring_output_dir }}/workflow_metrics_{{ ansible_date_time.epoch }}.json"
|
||||
content: |
|
||||
{
|
||||
"timestamp": "{{ ansible_date_time.iso8601 }}",
|
||||
"system_metrics": {
|
||||
"load_average": "{{ system_load.stdout }}",
|
||||
"cpu_usage_percent": "{{ cpu_usage.stdout | default('N/A') }}",
|
||||
"memory_usage": "{{ memory_usage.stdout }}",
|
||||
"docker_containers": "{{ docker_container_count.stdout }}",
|
||||
"docker_disk_usage": "{{ docker_disk_usage.stdout }}",
|
||||
"gitea_data_size": "{{ gitea_data_size.stdout }}"
|
||||
},
|
||||
"gitea_metrics": {
|
||||
"runner_status": "{{ gitea_runner_status.stdout }}",
|
||||
"api_response_time_ms": "{{ (gitea_api_test.elapsed * 1000) | default('N/A') | int }}",
|
||||
"workflow_log_entries_last_{{ monitoring_lookback_hours }}h": {{ workflow_log_count.stdout | int }},
|
||||
"container_stats": {
|
||||
"cpu_percent": "{{ gitea_stats.stdout.split(',')[0] if gitea_stats.stdout != 'N/A,N/A,N/A' else 'N/A' }}",
|
||||
"memory_usage": "{{ gitea_stats.stdout.split(',')[1] if gitea_stats.stdout != 'N/A,N/A,N/A' else 'N/A' }}",
|
||||
"memory_percent": "{{ gitea_stats.stdout.split(',')[2] if gitea_stats.stdout != 'N/A,N/A,N/A' else 'N/A' }}"
|
||||
}
|
||||
},
|
||||
"traefik_metrics": {
|
||||
"container_stats": {
|
||||
"cpu_percent": "{{ traefik_stats.stdout.split(',')[0] if traefik_stats.stdout != 'N/A,N/A,N/A' else 'N/A' }}",
|
||||
"memory_usage": "{{ traefik_stats.stdout.split(',')[1] if traefik_stats.stdout != 'N/A,N/A,N/A' else 'N/A' }}",
|
||||
"memory_percent": "{{ traefik_stats.stdout.split(',')[2] if traefik_stats.stdout != 'N/A,N/A,N/A' else 'N/A' }}"
|
||||
}
|
||||
},
|
||||
"optimizations": {
|
||||
"repository_artifact_enabled": true,
|
||||
"helper_script_caching_enabled": true,
|
||||
"combined_deployment_playbook": true,
|
||||
"exponential_backoff_health_checks": true,
|
||||
"concurrency_groups": true
|
||||
}
|
||||
}
|
||||
mode: '0644'
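# One way to inspect the newest snapshot (illustrative; assumes jq is
# available on the host):
#   ls -t /home/deploy/monitoring/workflow-metrics/workflow_metrics_*.json \
#     | head -1 | xargs cat | jq '.gitea_metrics'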
|
||||
|
||||
- name: Display monitoring summary
|
||||
ansible.builtin.debug:
|
||||
msg: |
|
||||
================================================================================
|
||||
WORKFLOW PERFORMANCE MONITORING - SUMMARY
|
||||
================================================================================
|
||||
|
||||
System Metrics:
|
||||
- Load Average: {{ system_load.stdout }}
|
||||
- CPU Usage: {{ cpu_usage.stdout | default('N/A') }}%
|
||||
- Memory Usage: {{ memory_usage.stdout }}
|
||||
- Docker Containers: {{ docker_container_count.stdout }}
|
||||
- Docker Disk Usage: {{ docker_disk_usage.stdout }}
|
||||
- Gitea Data Size: {{ gitea_data_size.stdout }}
|
||||
|
||||
Gitea Metrics:
|
||||
- Runner Status: {{ gitea_runner_status.stdout }}
|
||||
- API Response Time: {{ (gitea_api_test.elapsed * 1000) | default('N/A') | int }}ms
|
||||
- Workflow Log Entries (last {{ monitoring_lookback_hours }}h): {{ workflow_log_count.stdout }}
|
||||
- Container CPU: {{ gitea_stats.stdout.split(',')[0] if gitea_stats.stdout != 'N/A,N/A,N/A' else 'N/A' }}
|
||||
- Container Memory: {{ gitea_stats.stdout.split(',')[1] if gitea_stats.stdout != 'N/A,N/A,N/A' else 'N/A' }}
|
||||
|
||||
Traefik Metrics:
|
||||
- Container CPU: {{ traefik_stats.stdout.split(',')[0] if traefik_stats.stdout != 'N/A,N/A,N/A' else 'N/A' }}
|
||||
- Container Memory: {{ traefik_stats.stdout.split(',')[1] if traefik_stats.stdout != 'N/A,N/A,N/A' else 'N/A' }}
|
||||
|
||||
Optimizations Enabled:
|
||||
✅ Repository Artifact Caching
|
||||
✅ Helper Script Caching
|
||||
✅ Combined Deployment Playbook
|
||||
✅ Exponential Backoff Health Checks
|
||||
✅ Concurrency Groups
|
||||
|
||||
Metrics saved to: {{ monitoring_output_dir }}/workflow_metrics_{{ ansible_date_time.epoch }}.json
|
||||
|
||||
================================================================================
@@ -0,0 +1,17 @@
|
||||
---
|
||||
# Recreate Containers with Environment Variables
|
||||
# Wrapper Playbook for application role containers tasks (recreate-with-env action)
|
||||
- hosts: production
|
||||
gather_facts: no
|
||||
become: no
|
||||
vars:
|
||||
application_container_action: recreate-with-env
|
||||
tasks:
|
||||
- name: Include application containers tasks (recreate-with-env)
|
||||
ansible.builtin.include_role:
|
||||
name: application
|
||||
tasks_from: containers
|
||||
tags:
|
||||
- application
|
||||
- containers
|
||||
- recreate
|
||||
@@ -0,0 +1,16 @@
|
||||
---
|
||||
# Recreate Traefik Container with New Configuration
|
||||
# Wrapper Playbook for traefik role restart tasks (with recreate action)
|
||||
- hosts: production
|
||||
gather_facts: yes
|
||||
become: no
|
||||
vars:
|
||||
traefik_restart_action: recreate
|
||||
tasks:
|
||||
- name: Include traefik restart tasks (recreate)
|
||||
ansible.builtin.include_role:
|
||||
name: traefik
|
||||
tasks_from: restart
|
||||
tags:
|
||||
- traefik
|
||||
- recreate
|
||||
@@ -0,0 +1,363 @@
|
||||
---
|
||||
# Redeploy Traefik and Gitea Stacks
|
||||
# Purpose: Clean redeployment of Traefik and Gitea stacks to fix service discovery issues
|
||||
# This playbook:
|
||||
# - Stops and removes containers (but keeps volumes and acme.json)
|
||||
# - Redeploys both stacks with fresh containers
|
||||
# - Reinitializes service discovery
|
||||
# - Verifies everything works
|
||||
#
|
||||
# Usage:
|
||||
# ansible-playbook -i inventory/production.yml playbooks/redeploy-traefik-gitea.yml \
|
||||
# --vault-password-file secrets/.vault_pass
|
||||
|
||||
- name: Redeploy Traefik and Gitea Stacks
|
||||
hosts: production
|
||||
gather_facts: yes
|
||||
become: no
|
||||
|
||||
vars:
|
||||
traefik_stack_path: "{{ stacks_base_path }}/traefik"
|
||||
gitea_stack_path: "{{ stacks_base_path }}/gitea"
|
||||
gitea_url: "https://{{ gitea_domain }}"
|
||||
traefik_container_name: "traefik"
|
||||
gitea_container_name: "gitea"
|
||||
|
||||
tasks:
|
||||
# ========================================
|
||||
# 1. PREPARATION
|
||||
# ========================================
|
||||
|
||||
- name: Display redeployment plan
|
||||
ansible.builtin.debug:
|
||||
msg: |
|
||||
================================================================================
|
||||
TRAEFIK + GITEA REDEPLOYMENT PLAN
|
||||
================================================================================
|
||||
|
||||
This playbook will:
|
||||
1. ✅ Sync latest stack configurations
|
||||
2. ✅ Stop and remove Traefik containers (keeps acme.json)
|
||||
3. ✅ Stop and remove Gitea containers (keeps volumes/data)
|
||||
4. ✅ Redeploy Traefik stack
|
||||
5. ✅ Redeploy Gitea stack
|
||||
6. ✅ Verify service discovery
|
||||
7. ✅ Test Gitea accessibility
|
||||
|
||||
⚠️ IMPORTANT:
|
||||
- SSL certificates (acme.json) will be preserved
|
||||
- Gitea data (volumes) will be preserved
|
||||
- Only containers will be recreated
|
||||
- Expected downtime: ~2-5 minutes
|
||||
|
||||
================================================================================
|
||||
|
||||
- name: Sync infrastructure stacks to server
|
||||
ansible.builtin.include_role:
|
||||
name: traefik
|
||||
tasks_from: deploy
|
||||
vars:
|
||||
traefik_auto_restart: false # Don't restart during sync
|
||||
when: false # Skip for now, we'll do it manually
|
||||
|
||||
- name: Sync stacks directory to production server
|
||||
ansible.builtin.synchronize:
|
||||
src: "{{ playbook_dir }}/../../stacks/"
|
||||
dest: "{{ stacks_base_path }}/"
|
||||
delete: no
|
||||
recursive: yes
|
||||
rsync_opts:
|
||||
- "--chmod=D755,F644"
|
||||
- "--exclude=.git"
|
||||
- "--exclude=*.log"
|
||||
- "--exclude=data/"
|
||||
- "--exclude=volumes/"
|
||||
- "--exclude=acme.json" # Preserve SSL certificates
|
||||
- "--exclude=*.key"
|
||||
- "--exclude=*.pem"
|
||||
|
||||
# ========================================
|
||||
# 2. TRAEFIK REDEPLOYMENT
|
||||
# ========================================
|
||||
|
||||
- name: Check Traefik container status (before)
|
||||
ansible.builtin.shell: |
|
||||
cd {{ traefik_stack_path }}
|
||||
docker compose ps {{ traefik_container_name }} 2>/dev/null || echo "NOT_RUNNING"
|
||||
register: traefik_status_before
|
||||
changed_when: false
|
||||
|
||||
- name: Display Traefik status (before)
|
||||
ansible.builtin.debug:
|
||||
msg: |
|
||||
Traefik Status (Before):
|
||||
{{ traefik_status_before.stdout }}
|
||||
|
||||
- name: Check if acme.json exists
|
||||
ansible.builtin.stat:
|
||||
path: "{{ traefik_stack_path }}/acme.json"
|
||||
register: acme_json_stat
|
||||
|
||||
- name: Backup acme.json (safety measure)
|
||||
ansible.builtin.copy:
|
||||
src: "{{ traefik_stack_path }}/acme.json"
|
||||
dest: "{{ traefik_stack_path }}/acme.json.backup.{{ ansible_date_time.epoch }}"
|
||||
remote_src: yes
|
||||
mode: '0600'
|
||||
when: acme_json_stat.stat.exists
|
||||
register: acme_backup
|
||||
failed_when: false
|
||||
|
||||
- name: Stop Traefik stack
|
||||
ansible.builtin.shell: |
|
||||
cd {{ traefik_stack_path }}
|
||||
docker compose down
|
||||
register: traefik_stop
|
||||
changed_when: traefik_stop.rc == 0
|
||||
failed_when: false
|
||||
|
||||
- name: Remove Traefik containers (if any remain)
|
||||
ansible.builtin.shell: |
|
||||
docker ps -a --filter "name={{ traefik_container_name }}" --format "{{ '{{' }}.ID{{ '}}' }}" | xargs -r docker rm -f 2>/dev/null || true
|
||||
register: traefik_remove
|
||||
changed_when: traefik_remove.rc == 0
|
||||
failed_when: false
|
||||
|
||||
- name: Ensure acme.json exists and has correct permissions
|
||||
ansible.builtin.file:
|
||||
path: "{{ traefik_stack_path }}/acme.json"
|
||||
state: touch
|
||||
mode: '0600'
|
||||
owner: "{{ ansible_user }}"
|
||||
group: "{{ ansible_user }}"
|
||||
become: yes
|
||||
register: acme_json_ensure
|
||||
|
||||
- name: Check if acme.json exists after ensure
|
||||
ansible.builtin.stat:
|
||||
path: "{{ traefik_stack_path }}/acme.json"
|
||||
register: acme_json_after_ensure
|
||||
|
||||
- name: Restore acme.json from backup if it was deleted
|
||||
ansible.builtin.copy:
|
||||
src: "{{ traefik_stack_path }}/acme.json.backup.{{ ansible_date_time.epoch }}"
|
||||
dest: "{{ traefik_stack_path }}/acme.json"
|
||||
remote_src: yes
|
||||
mode: '0600'
|
||||
when:
|
||||
- acme_backup.changed | default(false)
|
||||
- acme_json_stat.stat.exists
|
||||
- not acme_json_after_ensure.stat.exists
|
||||
failed_when: false
|
||||
|
||||
- name: Deploy Traefik stack
|
||||
community.docker.docker_compose_v2:
|
||||
project_src: "{{ traefik_stack_path }}"
|
||||
state: present
|
||||
pull: always
|
||||
register: traefik_deploy
|
||||
|
||||
- name: Wait for Traefik to be ready
|
||||
ansible.builtin.shell: |
|
||||
cd {{ traefik_stack_path }}
|
||||
docker compose ps {{ traefik_container_name }} | grep -Eiq "Up|running"
|
||||
register: traefik_ready
|
||||
changed_when: false
|
||||
until: traefik_ready.rc == 0
|
||||
retries: 12
|
||||
delay: 5
|
||||
failed_when: traefik_ready.rc != 0
|
||||
|
||||
- name: Check Traefik container status (after)
|
||||
ansible.builtin.shell: |
|
||||
cd {{ traefik_stack_path }}
|
||||
docker compose ps {{ traefik_container_name }}
|
||||
register: traefik_status_after
|
||||
changed_when: false
|
||||
|
||||
- name: Display Traefik status (after)
|
||||
ansible.builtin.debug:
|
||||
msg: |
|
||||
Traefik Status (After):
|
||||
{{ traefik_status_after.stdout }}
|
||||
|
||||
# ========================================
|
||||
# 3. GITEA REDEPLOYMENT
|
||||
# ========================================
|
||||
|
||||
- name: Check Gitea container status (before)
|
||||
ansible.builtin.shell: |
|
||||
cd {{ gitea_stack_path }}
|
||||
docker compose ps {{ gitea_container_name }} 2>/dev/null || echo "NOT_RUNNING"
|
||||
register: gitea_status_before
|
||||
changed_when: false
|
||||
|
||||
- name: Display Gitea status (before)
|
||||
ansible.builtin.debug:
|
||||
msg: |
|
||||
Gitea Status (Before):
|
||||
{{ gitea_status_before.stdout }}
|
||||
|
||||
- name: Stop Gitea stack (preserves volumes)
|
||||
ansible.builtin.shell: |
|
||||
cd {{ gitea_stack_path }}
|
||||
docker compose down
|
||||
register: gitea_stop
|
||||
changed_when: gitea_stop.rc == 0
|
||||
failed_when: false
|
||||
|
||||
- name: Remove Gitea containers (if any remain, volumes are preserved)
|
||||
ansible.builtin.shell: |
|
||||
docker ps -a --filter "name={{ gitea_container_name }}" --format "{{ '{{' }}.ID{{ '}}' }}" | xargs -r docker rm -f 2>/dev/null || true
|
||||
register: gitea_remove
|
||||
changed_when: gitea_remove.rc == 0
|
||||
failed_when: false
|
||||
|
||||
- name: Deploy Gitea stack
|
||||
community.docker.docker_compose_v2:
|
||||
project_src: "{{ gitea_stack_path }}"
|
||||
state: present
|
||||
pull: always
|
||||
register: gitea_deploy
|
||||
|
||||
- name: Wait for Gitea to be ready
|
||||
ansible.builtin.shell: |
|
||||
cd {{ gitea_stack_path }}
|
||||
docker compose ps {{ gitea_container_name }} | grep -Eiq "Up|running"
|
||||
register: gitea_ready
|
||||
changed_when: false
|
||||
until: gitea_ready.rc == 0
|
||||
retries: 12
|
||||
delay: 5
|
||||
failed_when: gitea_ready.rc != 0
|
||||
|
||||
- name: Wait for Gitea to be healthy
|
||||
ansible.builtin.shell: |
|
||||
cd {{ gitea_stack_path }}
|
||||
docker compose exec -T {{ gitea_container_name }} curl -f http://localhost:3000/api/healthz 2>&1 | grep -q "status.*pass" && echo "HEALTHY" || echo "NOT_HEALTHY"
|
||||
register: gitea_health
|
||||
changed_when: false
|
||||
until: gitea_health.stdout == "HEALTHY"
|
||||
retries: 30
|
||||
delay: 2
|
||||
failed_when: false
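# A passing health check returns JSON roughly of the form
#   {"status":"pass","description":"Gitea: Git with a cup of tea",...}
# which is what the grep -q "status.*pass" above matches on.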
|
||||
|
||||
- name: Check Gitea container status (after)
|
||||
ansible.builtin.shell: |
|
||||
cd {{ gitea_stack_path }}
|
||||
docker compose ps {{ gitea_container_name }}
|
||||
register: gitea_status_after
|
||||
changed_when: false
|
||||
|
||||
- name: Display Gitea status (after)
|
||||
ansible.builtin.debug:
|
||||
msg: |
|
||||
Gitea Status (After):
|
||||
{{ gitea_status_after.stdout }}
|
||||
|
||||
# ========================================
|
||||
# 4. SERVICE DISCOVERY VERIFICATION
|
||||
# ========================================
|
||||
|
||||
- name: Wait for Traefik to discover Gitea (service discovery delay)
|
||||
ansible.builtin.pause:
|
||||
seconds: 15
|
||||
|
||||
- name: Check if Gitea is in traefik-public network
|
||||
ansible.builtin.shell: |
|
||||
docker network inspect traefik-public --format '{{ '{{' }}range .Containers{{ '}}' }}{{ '{{' }}.Name{{ '}}' }} {{ '{{' }}end{{ '}}' }}' 2>/dev/null | grep -q {{ gitea_container_name }} && echo "YES" || echo "NO"
|
||||
register: gitea_in_network
|
||||
changed_when: false
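# After Jinja2 escaping is resolved, the host runs the plain Go template:
#   docker network inspect traefik-public \
#     --format '{{range .Containers}}{{.Name}} {{end}}'
# which prints the names of every container attached to the network.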
|
||||
|
||||
- name: Check if Traefik is in traefik-public network
|
||||
ansible.builtin.shell: |
|
||||
docker network inspect traefik-public --format '{{ '{{' }}range .Containers{{ '}}' }}{{ '{{' }}.Name{{ '}}' }} {{ '{{' }}end{{ '}}' }}' 2>/dev/null | grep -q {{ traefik_container_name }} && echo "YES" || echo "NO"
|
||||
register: traefik_in_network
|
||||
changed_when: false
|
||||
|
||||
- name: Test direct connection from Traefik to Gitea
|
||||
ansible.builtin.shell: |
|
||||
cd {{ traefik_stack_path }}
|
||||
docker compose exec -T {{ traefik_container_name }} wget -qO- --timeout=5 http://{{ gitea_container_name }}:3000/api/healthz 2>&1 | head -5 || echo "CONNECTION_FAILED"
|
||||
register: traefik_gitea_direct
|
||||
changed_when: false
|
||||
failed_when: false
|
||||
|
||||
- name: Display network status
|
||||
ansible.builtin.debug:
|
||||
msg: |
|
||||
Network Status:
|
||||
- Gitea in traefik-public: {% if gitea_in_network.stdout == 'YES' %}✅{% else %}❌{% endif %}
|
||||
- Traefik in traefik-public: {% if traefik_in_network.stdout == 'YES' %}✅{% else %}❌{% endif %}
|
||||
- Traefik → Gitea (direct): {% if 'CONNECTION_FAILED' not in traefik_gitea_direct.stdout %}✅{% else %}❌{% endif %}
|
||||
|
||||
# ========================================
|
||||
# 5. FINAL VERIFICATION
|
||||
# ========================================
|
||||
|
||||
- name: Test Gitea via HTTPS (with retries)
|
||||
ansible.builtin.uri:
|
||||
url: "{{ gitea_url }}/api/healthz"
|
||||
method: GET
|
||||
status_code: [200]
|
||||
validate_certs: false
|
||||
timeout: 10
|
||||
register: gitea_https_test
|
||||
until: gitea_https_test.status == 200
|
||||
retries: 20
|
||||
delay: 3
|
||||
changed_when: false
|
||||
failed_when: false
|
||||
|
||||
- name: Check SSL certificate status
|
||||
ansible.builtin.shell: |
|
||||
cd {{ traefik_stack_path }}
|
||||
if [ -f acme.json ] && [ -s acme.json ]; then
|
||||
echo "SSL certificates: PRESENT"
|
||||
else
|
||||
echo "SSL certificates: MISSING or EMPTY"
|
||||
fi
|
||||
register: ssl_status
|
||||
changed_when: false
|
||||
|
||||
- name: Final status summary
|
||||
ansible.builtin.debug:
|
||||
msg: |
|
||||
================================================================================
|
||||
REDEPLOYMENT SUMMARY
|
||||
================================================================================
|
||||
|
||||
Traefik:
|
||||
- Status: {{ traefik_status_after.stdout | regex_replace('.*(Up|Down|Restarting).*', '\\1') | default('UNKNOWN') }}
|
||||
- SSL Certificates: {{ ssl_status.stdout }}
|
||||
|
||||
Gitea:
|
||||
- Status: {{ gitea_status_after.stdout | regex_replace('.*(Up|Down|Restarting).*', '\\1') | default('UNKNOWN') }}
|
||||
- Health: {% if gitea_health.stdout == 'HEALTHY' %}✅ Healthy{% else %}❌ Not Healthy{% endif %}
|
||||
|
||||
Service Discovery:
|
||||
- Gitea in network: {% if gitea_in_network.stdout == 'YES' %}✅{% else %}❌{% endif %}
|
||||
- Traefik in network: {% if traefik_in_network.stdout == 'YES' %}✅{% else %}❌{% endif %}
|
||||
- Direct connection: {% if 'CONNECTION_FAILED' not in traefik_gitea_direct.stdout %}✅{% else %}❌{% endif %}
|
||||
|
||||
Gitea Accessibility:
|
||||
{% if gitea_https_test.status == 200 %}
|
||||
✅ Gitea is reachable via HTTPS (Status: 200)
|
||||
URL: {{ gitea_url }}
|
||||
{% else %}
|
||||
❌ Gitea is NOT reachable via HTTPS (Status: {{ gitea_https_test.status | default('TIMEOUT') }})
|
||||
|
||||
Possible causes:
|
||||
1. SSL certificate is still being generated (wait 2-5 minutes)
|
||||
2. Service discovery needs more time (wait 1-2 minutes)
|
||||
3. Network configuration issue
|
||||
|
||||
Next steps:
|
||||
- Wait 2-5 minutes and test again: curl -k {{ gitea_url }}/api/healthz
|
||||
- Check Traefik logs: cd {{ traefik_stack_path }} && docker compose logs traefik --tail=50
|
||||
- Check Gitea logs: cd {{ gitea_stack_path }} && docker compose logs gitea --tail=50
|
||||
{% endif %}
|
||||
|
||||
================================================================================
@@ -0,0 +1,17 @@
|
||||
---
|
||||
# Register Gitea Runner with Correct Instance URL
|
||||
# Wrapper Playbook for gitea role runner tasks (register action)
|
||||
- hosts: production
|
||||
gather_facts: yes
|
||||
become: no
|
||||
vars:
|
||||
gitea_runner_action: register
|
||||
tasks:
|
||||
- name: Include gitea runner tasks (register)
|
||||
ansible.builtin.include_role:
|
||||
name: gitea
|
||||
tasks_from: runner
|
||||
tags:
|
||||
- gitea
|
||||
- runner
|
||||
- register
|
||||
@@ -0,0 +1,14 @@
|
||||
---
|
||||
# Restart Traefik Container
|
||||
# Wrapper Playbook for traefik role restart tasks
|
||||
- hosts: production
|
||||
gather_facts: yes
|
||||
become: no
|
||||
tasks:
|
||||
- name: Include traefik restart tasks
|
||||
ansible.builtin.include_role:
|
||||
name: traefik
|
||||
tasks_from: restart
|
||||
tags:
|
||||
- traefik
|
||||
- restart
|
||||
166
deployment/legacy/ansible/ansible/playbooks/rollback.yml
Normal file
@@ -0,0 +1,166 @@
|
||||
---
|
||||
- name: Rollback Application Deployment
|
||||
hosts: production
|
||||
gather_facts: yes
|
||||
become: no
|
||||
|
||||
vars:
|
||||
rollback_to_version: "previous"  # or a backup directory name; override with -e rollback_to_version=<name>
|
||||
# app_stack_path is now defined in group_vars/production.yml
|
||||
|
||||
pre_tasks:
|
||||
- name: Optionally load registry credentials from encrypted vault
|
||||
include_vars:
|
||||
file: "{{ playbook_dir }}/../secrets/production.vault.yml"
|
||||
no_log: yes
|
||||
ignore_errors: yes
|
||||
delegate_to: localhost
|
||||
become: no
|
||||
|
||||
- name: Derive docker registry credentials from vault when not provided
|
||||
set_fact:
|
||||
docker_registry_username: "{{ docker_registry_username | default(vault_docker_registry_username | default(docker_registry_username_default)) }}"
|
||||
docker_registry_password: "{{ docker_registry_password | default(vault_docker_registry_password | default(docker_registry_password_default)) }}"
|
||||
|
||||
- name: Check Docker service
|
||||
systemd:
|
||||
name: docker
|
||||
state: started
|
||||
register: docker_service
|
||||
become: yes
|
||||
|
||||
- name: Fail if Docker is not running
|
||||
fail:
|
||||
msg: "Docker service is not running - cannot perform rollback"
|
||||
when: docker_service.status.ActiveState != 'active'
|
||||
|
||||
- name: Get list of available backups
|
||||
find:
|
||||
paths: "{{ backups_path }}"
|
||||
file_type: directory
|
||||
register: available_backups
|
||||
|
||||
- name: Fail if no backups available
|
||||
fail:
|
||||
msg: "No backup versions available for rollback"
|
||||
when: available_backups.matched == 0
|
||||
|
||||
- name: Sort backups by date (newest first)
|
||||
set_fact:
|
||||
sorted_backups: "{{ available_backups.files | sort(attribute='mtime', reverse=true) }}"
|
||||
|
||||
tasks:
|
||||
- name: Determine rollback target
|
||||
set_fact:
|
||||
rollback_backup: "{{ sorted_backups[1] if rollback_to_version == 'previous' else sorted_backups | selectattr('path', 'search', rollback_to_version) | first }}"
|
||||
when: sorted_backups | length > 1
|
||||
|
||||
- name: Fail if rollback target not found
|
||||
fail:
|
||||
msg: "Cannot determine rollback target. Available backups: {{ sorted_backups | map(attribute='path') | list }}"
|
||||
when: rollback_backup is not defined
|
||||
|
||||
- name: Load rollback metadata
|
||||
slurp:
|
||||
src: "{{ rollback_backup.path }}/deployment_metadata.txt"
|
||||
register: rollback_metadata
|
||||
ignore_errors: yes
|
||||
|
||||
- name: Parse rollback image from metadata
|
||||
set_fact:
|
||||
rollback_image: "{{ rollback_metadata.content | b64decode | regex_search('Deployed Image: ([^\\n]+)', '\\1') | first }}"
|
||||
when: rollback_metadata is succeeded
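# Example of the metadata line this regex_search parses (the image
# reference is a hypothetical placeholder):
#   Deployed Image: registry.example.com/app:v1.2.3
# yields rollback_image = "registry.example.com/app:v1.2.3".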
|
||||
|
||||
- name: "Alternative: Parse rollback image from docker-compose config backup"
|
||||
set_fact:
|
||||
rollback_image: "{{ rollback_metadata.content | b64decode | regex_search('image:\\s+([^:]+):([^\\n]+)', '\\1:\\2') | first }}"
|
||||
when:
|
||||
- rollback_metadata is succeeded
|
||||
- rollback_image is not defined
|
||||
|
||||
- name: Fail if cannot determine rollback image
|
||||
fail:
|
||||
msg: "Cannot determine image to rollback to from backup: {{ rollback_backup.path }}"
|
||||
when: rollback_image is not defined or rollback_image == ''
|
||||
|
||||
- name: Display rollback information
|
||||
debug:
|
||||
msg:
|
||||
- "Rolling back to previous version"
|
||||
- "Backup: {{ rollback_backup.path }}"
|
||||
- "Image: {{ rollback_image }}"
|
||||
|
||||
- name: Login to Docker registry (if credentials provided)
|
||||
community.docker.docker_login:
|
||||
registry_url: "{{ docker_registry_url }}"
|
||||
username: "{{ docker_registry_username }}"
|
||||
password: "{{ docker_registry_password }}"
|
||||
no_log: yes
|
||||
when:
|
||||
- docker_registry_username is defined
|
||||
- docker_registry_password is defined
|
||||
|
||||
- name: Pull rollback image
|
||||
community.docker.docker_image:
|
||||
name: "{{ rollback_image.split(':')[0] }}"
|
||||
tag: "{{ rollback_image.split(':')[1] }}"
|
||||
source: pull
|
||||
force_source: yes
|
||||
register: image_pull
|
||||
|
||||
- name: Update docker-compose.yml with rollback image
|
||||
replace:
|
||||
path: "{{ app_stack_path }}/docker-compose.yml"
|
||||
regexp: '^(\s+image:\s+){{ app_image }}:.*$'
|
||||
replace: '\1{{ rollback_image }}'
|
||||
register: compose_updated
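# Illustrative effect on docker-compose.yml, assuming app_image is
# "registry.example.com/app" and the rollback target is v1.2.2:
#   before:   image: registry.example.com/app:v1.2.3
#   after:    image: registry.example.com/app:v1.2.2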
|
||||
|
||||
- name: Restart application stack with rollback image
|
||||
community.docker.docker_compose_v2:
|
||||
project_src: "{{ app_stack_path }}"
|
||||
state: present
|
||||
pull: always
|
||||
recreate: always
|
||||
remove_orphans: yes
|
||||
register: stack_rollback
|
||||
when: compose_updated.changed
|
||||
|
||||
- name: Wait for services to be healthy after rollback
|
||||
wait_for:
|
||||
timeout: 60
|
||||
changed_when: false
|
||||
|
||||
- name: Get current running image
|
||||
shell: |
|
||||
docker compose -f {{ app_stack_path }}/docker-compose.yml config | grep -E "^\s+image:" | head -1 | awk '{print $2}' || echo "unknown"
|
||||
args:
|
||||
executable: /bin/bash
|
||||
register: current_image
|
||||
changed_when: false
|
||||
|
||||
- name: Record rollback event
|
||||
copy:
|
||||
content: |
|
||||
Rollback Timestamp: {{ ansible_date_time.iso8601 }}
|
||||
Rolled back from: {{ sorted_backups[0].path | basename }}
|
||||
Rolled back to: {{ rollback_backup.path | basename }}
|
||||
Rollback Image: {{ rollback_image }}
|
||||
Current Image: {{ current_image.stdout }}
|
||||
dest: "{{ backups_path }}/rollback_{{ ansible_date_time.epoch }}.txt"
|
||||
owner: "{{ ansible_user }}"
|
||||
group: "{{ ansible_user }}"
|
||||
mode: '0644'
|
||||
|
||||
post_tasks:
|
||||
- name: Display rollback summary
|
||||
debug:
|
||||
msg:
|
||||
- "✅ Rollback completed successfully!"
|
||||
- "Rolled back to: {{ rollback_image }}"
|
||||
- "From backup: {{ rollback_backup.path }}"
|
||||
- "Current running image: {{ current_image.stdout }}"
|
||||
- "Health check URL: {{ health_check_url }}"
|
||||
|
||||
- name: Recommend health check
|
||||
debug:
|
||||
msg: "⚠️ Please verify application health at {{ health_check_url }}"
|
||||
@@ -0,0 +1,16 @@
|
||||
---
|
||||
# Setup Gitea Repository
|
||||
# Wrapper Playbook for gitea role repository tasks
|
||||
- hosts: production
|
||||
gather_facts: yes
|
||||
become: no
|
||||
vars:
|
||||
ansible_connection: local
|
||||
tasks:
|
||||
- name: Include gitea repository tasks
|
||||
ansible.builtin.include_role:
|
||||
name: gitea
|
||||
tasks_from: repository
|
||||
tags:
|
||||
- gitea
|
||||
- repository
|
||||
@@ -0,0 +1,156 @@
|
||||
---
|
||||
# Ansible Playbook: Setup Gitea Runner CI Image and Configuration
|
||||
# Purpose: Build CI Docker image, configure runner labels, and update runner registration
|
||||
# Usage:
|
||||
# Local: ansible-playbook -i inventory/local.yml playbooks/setup-gitea-runner-ci.yml
|
||||
# Or: ansible-playbook -i localhost, -c local playbooks/setup-gitea-runner-ci.yml
|
||||
|
||||
- name: Setup Gitea Runner CI Image
|
||||
hosts: localhost
|
||||
connection: local
|
||||
vars:
|
||||
project_root: "{{ lookup('env', 'PWD') | default(playbook_dir + '/../..', true) }}"
|
||||
ci_image_name: "php-ci:latest"
|
||||
ci_image_registry: "{{ ci_registry | default(docker_registry_external) }}"
|
||||
ci_image_registry_path: "{{ ci_image_registry }}/ci/php-ci:latest"
|
||||
gitea_runner_dir: "{{ project_root }}/deployment/gitea-runner"
|
||||
docker_dind_container: "gitea-runner-dind"
|
||||
push_to_registry: false # Set to true to push to registry after build
|
||||
|
||||
tasks:
|
||||
- name: Verify project root exists
|
||||
stat:
|
||||
path: "{{ project_root }}"
|
||||
register: project_root_stat
|
||||
|
||||
- name: Fail if project root not found
|
||||
fail:
|
||||
msg: "Project root not found at {{ project_root }}. Set project_root variable or run from project root."
|
||||
when: not project_root_stat.stat.exists
|
||||
|
||||
- name: Check if CI Dockerfile exists
|
||||
stat:
|
||||
path: "{{ project_root }}/docker/ci/Dockerfile"
|
||||
register: dockerfile_stat
|
||||
|
||||
- name: Fail if Dockerfile not found
|
||||
fail:
|
||||
msg: "CI Dockerfile not found at {{ project_root }}/docker/ci/Dockerfile"
|
||||
when: not dockerfile_stat.stat.exists
|
||||
|
||||
- name: Check if docker-dind container is running
|
||||
docker_container_info:
|
||||
name: "{{ docker_dind_container }}"
|
||||
register: dind_container_info
|
||||
ignore_errors: yes
|
||||
|
||||
- name: Fail if docker-dind not running
|
||||
fail:
|
||||
msg: "docker-dind container '{{ docker_dind_container }}' is not running. Start it with: cd {{ gitea_runner_dir }} && docker-compose up -d docker-dind"
|
||||
when: dind_container_info.exists is not defined or not dind_container_info.exists
|
||||
|
||||
- name: Build CI Docker image
|
||||
community.docker.docker_image:
|
||||
name: "{{ ci_image_name }}"
|
||||
source: build
|
||||
build:
|
||||
path: "{{ project_root }}"
|
||||
dockerfile: docker/ci/Dockerfile
|
||||
platform: linux/amd64
|
||||
tag: "latest"
|
||||
force_source: "{{ force_rebuild | default(false) }}"
|
||||
register: build_result
|
||||
|
||||
- name: Display build result
|
||||
debug:
|
||||
msg: "✅ CI Docker image built successfully: {{ ci_image_name }}"
|
||||
when: build_result.changed or not build_result.failed
|
||||
|
||||
- name: Tag image for registry
|
||||
community.docker.docker_image:
|
||||
name: "{{ ci_image_registry_path }}"
|
||||
source: "{{ ci_image_name }}"
|
||||
force_tag: true
|
||||
when: push_to_registry | bool
|
||||
|
||||
- name: Load image into docker-dind
|
||||
shell: |
|
||||
docker save {{ ci_image_name }} | docker exec -i {{ docker_dind_container }} docker load
|
||||
register: load_result
|
||||
changed_when: "'Loaded image' in load_result.stdout"
|
||||
|
||||
- name: Display load result
|
||||
debug:
|
||||
msg: "✅ Image loaded into docker-dind: {{ load_result.stdout_lines | last }}"
|
||||
when: load_result.changed
|
||||
|
||||
- name: Check if .env file exists
|
||||
stat:
|
||||
path: "{{ gitea_runner_dir }}/.env"
|
||||
register: env_file_stat
|
||||
|
||||
- name: Copy .env.example to .env if not exists
|
||||
copy:
|
||||
src: "{{ gitea_runner_dir }}/.env.example"
|
||||
dest: "{{ gitea_runner_dir }}/.env"
|
||||
mode: '0644'
|
||||
when: not env_file_stat.stat.exists
|
||||
|
||||
- name: Read current .env file
|
||||
slurp:
|
||||
src: "{{ gitea_runner_dir }}/.env"
|
||||
register: env_file_content
|
||||
when: env_file_stat.stat.exists
|
||||
|
||||
- name: Check if php-ci label already exists
|
||||
set_fact:
|
||||
php_ci_label_exists: "{{ 'php-ci:docker://' + ci_image_name in env_file_content.content | b64decode | default('') }}"
|
||||
when: env_file_stat.stat.exists
|
||||
|
||||
- name: Update GITEA_RUNNER_LABELS to include php-ci
|
||||
lineinfile:
|
||||
path: "{{ gitea_runner_dir }}/.env"
|
||||
regexp: '^GITEA_RUNNER_LABELS=(.*)$'
|
||||
line: 'GITEA_RUNNER_LABELS=\1,php-ci:docker://{{ ci_image_name }}'
|
||||
backrefs: yes
|
||||
when:
|
||||
- env_file_stat.stat.exists
|
||||
- not php_ci_label_exists | default(false)
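# Illustrative .env change produced by the backreference (the pre-existing
# labels are hypothetical):
#   before: GITEA_RUNNER_LABELS=ubuntu-latest:docker://node:20
#   after:  GITEA_RUNNER_LABELS=ubuntu-latest:docker://node:20,php-ci:docker://php-ci:latest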
|
||||
|
||||
- name: Add GITEA_RUNNER_LABELS with php-ci if not exists
|
||||
lineinfile:
|
||||
path: "{{ gitea_runner_dir }}/.env"
|
||||
line: 'GITEA_RUNNER_LABELS=php-ci:docker://{{ ci_image_name }}'
|
||||
insertafter: '^# Runner Labels'
|
||||
when:
|
||||
- env_file_stat.stat.exists
|
||||
- "'GITEA_RUNNER_LABELS' not in (env_file_content.content | b64decode | default(''))"
|
||||
|
||||
- name: Display setup summary
|
||||
debug:
|
||||
msg: |
|
||||
✅ Gitea Runner CI Setup Complete!
|
||||
|
||||
Image: {{ ci_image_name }}
|
||||
Loaded into: {{ docker_dind_container }}
|
||||
|
||||
Next steps:
|
||||
1. Verify .env file at {{ gitea_runner_dir }}/.env has php-ci label
|
||||
2. Re-register runner:
|
||||
cd {{ gitea_runner_dir }}
|
||||
./unregister.sh
|
||||
./register.sh
|
||||
|
||||
3. Verify runner in Gitea UI shows php-ci label
|
||||
|
||||
- name: Display push to registry instructions
|
||||
debug:
|
||||
msg: |
|
||||
📤 To push image to registry:
|
||||
|
||||
docker login {{ ci_image_registry }}
|
||||
docker push {{ ci_image_registry_path }}
|
||||
|
||||
Then update .env:
|
||||
GITEA_RUNNER_LABELS=...,php-ci:docker://{{ ci_image_registry_path }}
|
||||
when: not push_to_registry | bool
|
||||
@@ -0,0 +1,80 @@
|
||||
---
|
||||
- name: Setup Local Development Secrets
|
||||
hosts: local
|
||||
gather_facts: yes
|
||||
become: no
|
||||
connection: local
|
||||
|
||||
vars:
|
||||
vault_file: "{{ playbook_dir }}/../secrets/local.vault.yml"
|
||||
|
||||
pre_tasks:
|
||||
- name: Get repository root path
|
||||
shell: |
|
||||
cd "{{ playbook_dir }}/../../.."
|
||||
pwd
|
||||
register: repo_root
|
||||
changed_when: false
|
||||
delegate_to: localhost
|
||||
become: no
|
||||
|
||||
- name: Set repository root as fact
|
||||
set_fact:
|
||||
app_stack_path: "{{ repo_root.stdout }}"
|
||||
|
||||
- name: Verify vault file exists
|
||||
stat:
|
||||
path: "{{ vault_file }}"
|
||||
register: vault_stat
|
||||
delegate_to: localhost
|
||||
become: no
|
||||
|
||||
- name: Fail if vault file missing
|
||||
fail:
|
||||
msg: "Vault file not found at {{ vault_file }}. Please create it from local.vault.yml.example"
|
||||
when: not vault_stat.stat.exists
|
||||
|
||||
tasks:
|
||||
- name: Load encrypted secrets
|
||||
include_vars:
|
||||
file: "{{ vault_file }}"
|
||||
no_log: yes
|
||||
|
||||
- name: Ensure secrets directory exists for Docker Compose secrets
|
||||
file:
|
||||
path: "{{ app_stack_path }}/secrets"
|
||||
state: directory
|
||||
owner: "{{ ansible_user_id | default(ansible_env.USER | default('user')) }}"
|
||||
group: "{{ ansible_user_id | default(ansible_env.USER | default('user')) }}"
|
||||
mode: '0700'
|
||||
|
||||
- name: Create Docker Compose secret files from vault
|
||||
copy:
|
||||
content: "{{ item.value }}"
|
||||
dest: "{{ app_stack_path }}/secrets/{{ item.name }}.txt"
|
||||
owner: "{{ ansible_user_id | default(ansible_env.USER | default('user')) }}"
|
||||
group: "{{ ansible_user_id | default(ansible_env.USER | default('user')) }}"
|
||||
mode: '0600'
|
||||
loop:
|
||||
- name: db_user_password
|
||||
value: "{{ vault_db_password }}"
|
||||
- name: redis_password
|
||||
value: "{{ vault_redis_password }}"
|
||||
- name: app_key
|
||||
value: "{{ vault_app_key }}"
|
||||
- name: vault_encryption_key
|
||||
value: "{{ vault_encryption_key | default(vault_app_key) }}"
|
||||
no_log: yes
|
||||
|
||||
- name: Set secure permissions on secrets directory
|
||||
file:
|
||||
path: "{{ app_stack_path }}/secrets"
|
||||
state: directory
|
||||
owner: "{{ ansible_user_id | default(ansible_env.USER | default('user')) }}"
|
||||
group: "{{ ansible_user_id | default(ansible_env.USER | default('user')) }}"
|
||||
mode: '0700'
|
||||
recurse: yes
|
||||
|
||||
- name: Display secrets setup summary
|
||||
debug:
|
||||
msg: "? Local secrets created in {{ app_stack_path }}/secrets/"
|
||||
@@ -0,0 +1,84 @@
|
||||
---
|
||||
- name: Setup Production Secrets
|
||||
hosts: production
|
||||
gather_facts: yes
|
||||
become: yes
|
||||
|
||||
vars:
|
||||
vault_file: "{{ playbook_dir }}/../secrets/production.vault.yml"
|
||||
|
||||
pre_tasks:
|
||||
- name: Verify vault file exists
|
||||
stat:
|
||||
path: "{{ vault_file }}"
|
||||
register: vault_stat
|
||||
delegate_to: localhost
|
||||
become: no
|
||||
|
||||
- name: Fail if vault file missing
|
||||
fail:
|
||||
msg: "Vault file not found at {{ vault_file }}"
|
||||
when: not vault_stat.stat.exists
|
||||
|
||||
tasks:
|
||||
- name: Detect Docker Swarm mode
|
||||
shell: docker info -f '{{ "{{" }}.Swarm.LocalNodeState{{ "}}" }}'
|
||||
register: swarm_state
|
||||
changed_when: false
|
||||
|
||||
- name: Set fact if swarm is active
|
||||
set_fact:
|
||||
swarm_active: "{{ swarm_state.stdout | lower == 'active' }}"
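# docker info -f '{{.Swarm.LocalNodeState}}' prints one of the Swarm node
# states ("inactive", "pending", "active", "locked", "error" or empty);
# only "active" flips swarm_active to true here.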
|
||||
|
||||
- name: Load encrypted secrets
|
||||
include_vars:
|
||||
file: "{{ vault_file }}"
|
||||
no_log: yes
|
||||
|
||||
- name: Ensure secrets directory exists for Docker Compose secrets
|
||||
file:
|
||||
path: "{{ app_stack_path }}/secrets"
|
||||
state: directory
|
||||
owner: "{{ ansible_user }}"
|
||||
group: "{{ ansible_user }}"
|
||||
mode: '0700'
|
||||
|
||||
- name: Create Docker Compose secret files from vault
|
||||
copy:
|
||||
content: "{{ item.value }}"
|
||||
dest: "{{ app_stack_path }}/secrets/{{ item.name }}.txt"
|
||||
owner: "{{ ansible_user }}"
|
||||
group: "{{ ansible_user }}"
|
||||
mode: '0600'
|
||||
loop:
|
||||
- name: db_user_password
|
||||
value: "{{ vault_db_password }}"
|
||||
- name: redis_password
|
||||
value: "{{ vault_redis_password }}"
|
||||
- name: app_key
|
||||
value: "{{ vault_app_key }}"
|
||||
- name: vault_encryption_key
|
||||
value: "{{ vault_encryption_key | default(vault_app_key) }}"
|
||||
- name: git_token
|
||||
value: "{{ vault_git_token | default('') }}"
|
||||
no_log: yes
|
||||
|
||||
- name: Set secure permissions on secrets directory
|
||||
file:
|
||||
path: "{{ app_stack_path }}/secrets"
|
||||
state: directory
|
||||
owner: "{{ ansible_user }}"
|
||||
group: "{{ ansible_user }}"
|
||||
mode: '0700'
|
||||
recurse: yes
|
||||
|
||||
- name: Verify Docker secrets (skipped)
|
||||
command: docker secret ls --format '{{ "{{" }}.Name{{ "}}" }}'
|
||||
register: docker_secrets
|
||||
changed_when: false
|
||||
when: false
|
||||
|
||||
- name: Display deployed Docker secrets (skipped)
|
||||
debug:
|
||||
msg: "Deployed secrets: {{ docker_secrets.stdout_lines | default([]) }}"
|
||||
when: false
|
||||
113
deployment/legacy/ansible/ansible/playbooks/setup-staging.yml
Normal file
@@ -0,0 +1,113 @@
|
||||
---
|
||||
- name: Deploy Staging Infrastructure Stacks on Production Server
|
||||
hosts: production
|
||||
become: no
|
||||
gather_facts: yes
|
||||
|
||||
vars:
|
||||
# All deployment variables are now defined in group_vars/staging.yml
|
||||
# Variables can be overridden via -e flag if needed
|
||||
|
||||
tasks:
|
||||
- name: Debug - Show variables
|
||||
debug:
|
||||
msg:
|
||||
- "stacks_base_path: {{ stacks_base_path | default('NOT SET') }}"
|
||||
- "staging_stack_path: {{ staging_stack_path | default('NOT SET') }}"
|
||||
- "postgresql_staging_stack_path: {{ postgresql_staging_stack_path | default('NOT SET') }}"
|
||||
when: true # Debugging enabled
|
||||
|
||||
- name: Check if deployment stacks directory exists
|
||||
stat:
|
||||
path: "{{ stacks_base_path }}"
|
||||
register: stacks_dir
|
||||
|
||||
- name: Create deployment stacks directory if it doesn't exist
|
||||
file:
|
||||
path: "{{ stacks_base_path }}"
|
||||
state: directory
|
||||
mode: '0755'
|
||||
owner: "{{ ansible_user }}"
|
||||
group: "{{ ansible_user }}"
|
||||
when: not stacks_dir.stat.exists
|
||||
|
||||
- name: Ensure system packages are up to date
|
||||
include_role:
|
||||
name: system
|
||||
when: system_update_packages | bool
|
||||
|
||||
# Create external networks required by staging stacks
|
||||
- name: Create traefik-public network (shared with production)
|
||||
community.docker.docker_network:
|
||||
name: traefik-public
|
||||
driver: bridge
|
||||
state: present
|
||||
|
||||
- name: Create staging-internal network
|
||||
community.docker.docker_network:
|
||||
name: staging-internal
|
||||
driver: bridge
|
||||
state: present
|
||||
|
||||
- name: Create postgres-staging-internal network
|
||||
community.docker.docker_network:
|
||||
name: postgres-staging-internal
|
||||
driver: bridge
|
||||
state: present
|
||||
|
||||
# 1. Deploy PostgreSQL Staging Stack (separate from production)
|
||||
- name: Deploy PostgreSQL Staging stack
|
||||
import_role:
|
||||
name: postgresql-staging
|
||||
|
||||
# 2. Deploy Staging Application Stack
|
||||
- name: Deploy Staging Application Stack
|
||||
import_role:
|
||||
name: application
|
||||
vars:
|
||||
application_stack_src: "{{ playbook_dir | default(role_path + '/..') }}/../../stacks/staging"
|
||||
application_stack_dest: "{{ staging_stack_path }}"
|
||||
application_compose_suffix: "staging.yml"
|
||||
application_service_name: "staging-app"
|
||||
application_env_template: "{{ role_path }}/../../templates/application.env.j2"
|
||||
app_env: "staging"
|
||||
app_domain: "staging.michaelschiemer.de"
|
||||
app_debug: "true"
|
||||
db_name: "{{ db_name_default }}"
|
||||
db_host: "{{ db_host_default }}"
|
||||
|
||||
# Verification
|
||||
- name: List all running staging containers
|
||||
shell: >
|
||||
docker ps --format 'table {{ "{{" }}.Names{{ "}}" }}\t{{ "{{" }}.Status{{ "}}" }}\t{{ "{{" }}.Ports{{ "}}" }}' | grep -E "(staging|postgres-staging)" || echo "No staging containers found"
|
||||
register: staging_docker_ps_output
|
||||
|
||||
- name: Display running staging containers
|
||||
debug:
|
||||
msg: "{{ staging_docker_ps_output.stdout_lines }}"
|
||||
|
||||
- name: Verify Staging accessibility via HTTPS
|
||||
uri:
|
||||
url: "https://staging.michaelschiemer.de"
|
||||
method: GET
|
||||
validate_certs: no
|
||||
status_code: [200, 404, 502, 503]
|
||||
timeout: 10
|
||||
register: staging_http_check
|
||||
ignore_errors: yes
|
||||
|
||||
- name: Display Staging accessibility status
|
||||
debug:
|
||||
msg: "Staging HTTPS check: {{ 'SUCCESS' if staging_http_check.status == 200 else 'FAILED - Status: ' + (staging_http_check.status|string) }}"
|
||||
|
||||
- name: Display staging deployment summary
|
||||
debug:
|
||||
msg:
|
||||
- "=========================================="
|
||||
- "Staging Deployment Summary"
|
||||
- "=========================================="
|
||||
- "PostgreSQL Staging: {{ 'DEPLOYED' if postgresql_staging_stack_changed | default(false) else 'NO CHANGES' }}"
|
||||
- "Staging Application: {{ 'DEPLOYED' if application_stack_changed | default(false) else 'NO CHANGES' }}"
|
||||
- "Staging URL: https://staging.michaelschiemer.de"
|
||||
- "=========================================="
|
||||
|
||||
@@ -0,0 +1,309 @@
|
||||
---
|
||||
# Ansible Playbook: WireGuard Host-based VPN Setup
|
||||
# Purpose: Deploy minimalistic WireGuard VPN for admin access
|
||||
# Architecture: Host-based (systemd), no Docker, no DNS
|
||||
|
||||
- name: Setup WireGuard VPN (Host-based)
|
||||
hosts: all
|
||||
become: yes
|
||||
|
||||
vars:
|
||||
# WireGuard Configuration
|
||||
wg_interface: wg0
|
||||
wg_network: 10.8.0.0/24
|
||||
wg_server_ip: 10.8.0.1
|
||||
wg_netmask: 24
|
||||
wg_port: 51820
|
||||
|
||||
# Network Configuration
|
||||
wan_interface: eth0 # Change to your WAN interface (eth0, ens3, etc.)
|
||||
|
||||
# Admin Service Ports (VPN-only access)
|
||||
admin_service_ports:
|
||||
- 8080 # Traefik Dashboard
|
||||
- 9090 # Prometheus
|
||||
- 3001 # Grafana
|
||||
- 9000 # Portainer
|
||||
- 8001 # Redis Insight
|
||||
|
||||
# Public Service Ports
|
||||
public_service_ports:
|
||||
- 80 # HTTP
|
||||
- 443 # HTTPS
|
||||
- 22 # SSH
|
||||
|
||||
# Rate Limiting
|
||||
wg_enable_rate_limit: true
|
||||
|
||||
# Paths
|
||||
wg_config_dir: /etc/wireguard
|
||||
wg_backup_dir: /root/wireguard-backup
|
||||
nft_config_file: /etc/nftables.d/wireguard.nft
|
||||
|
||||
tasks:
|
||||
# ========================================
|
||||
# 1. Pre-flight Checks
|
||||
# ========================================
|
||||
|
||||
- name: Check if running as root
|
||||
assert:
|
||||
that: ansible_user_id == 'root'
|
||||
fail_msg: "This playbook must be run as root"
|
||||
|
||||
- name: Detect WAN interface
|
||||
shell: ip route | grep default | awk '{print $5}' | head -n1
|
||||
register: detected_wan_interface
|
||||
changed_when: false
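# Example of the route line being parsed (addresses and interface name are
# placeholders from the documentation range):
#   default via 203.0.113.1 dev ens3 proto dhcp metric 100
# awk '{print $5}' grabs the word after "dev", i.e. "ens3".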
|
||||
|
||||
- name: Set WAN interface if not specified
|
||||
set_fact:
|
||||
wan_interface: "{{ detected_wan_interface.stdout }}"
|
||||
when: wan_interface == 'eth0' and detected_wan_interface.stdout != ''
|
||||
|
||||
- name: Display detected network configuration
|
||||
debug:
|
||||
msg:
|
||||
- "WAN Interface: {{ wan_interface }}"
|
||||
- "VPN Network: {{ wg_network }}"
|
||||
- "VPN Server IP: {{ wg_server_ip }}"
|
||||
|
||||
# ========================================
|
||||
# 2. Backup Existing Configuration
|
||||
# ========================================
|
||||
|
||||
- name: Create backup directory
|
||||
file:
|
||||
path: "{{ wg_backup_dir }}"
|
||||
state: directory
|
||||
mode: '0700'
|
||||
|
||||
- name: Backup existing WireGuard config (if exists)
|
||||
shell: |
|
||||
if [ -d {{ wg_config_dir }} ]; then
|
||||
tar -czf {{ wg_backup_dir }}/wireguard-backup-$(date +%Y%m%d-%H%M%S).tar.gz {{ wg_config_dir }}
|
||||
echo "Backup created"
|
||||
else
|
||||
echo "No existing config"
|
||||
fi
|
||||
register: backup_result
|
||||
changed_when: "'Backup created' in backup_result.stdout"
|
||||
|
||||
# ========================================
|
||||
# 3. Install WireGuard
|
||||
# ========================================
|
||||
|
||||
- name: Update apt cache
|
||||
apt:
|
||||
update_cache: yes
|
||||
cache_valid_time: 3600
|
||||
when: ansible_os_family == 'Debian'
|
||||
|
||||
- name: Install WireGuard and dependencies
|
||||
apt:
|
||||
name:
|
||||
- wireguard
|
||||
- wireguard-tools
|
||||
- qrencode # For QR code generation
|
||||
- nftables
|
||||
state: present
|
||||
when: ansible_os_family == 'Debian'
|
||||
|
||||
- name: Ensure WireGuard kernel module is loaded
|
||||
modprobe:
|
||||
name: wireguard
|
||||
state: present
|
||||
|
||||
- name: Verify WireGuard module is available
|
||||
shell: lsmod | grep -q wireguard
|
||||
register: wg_module_check
|
||||
failed_when: wg_module_check.rc != 0
|
||||
changed_when: false
|
||||
|
||||
# ========================================
|
||||
# 4. Generate Server Keys (if not exist)
|
||||
# ========================================
|
||||
|
||||
- name: Create WireGuard config directory
|
||||
file:
|
||||
path: "{{ wg_config_dir }}"
|
||||
state: directory
|
||||
mode: '0700'
|
||||
|
||||
- name: Check if server private key exists
|
||||
stat:
|
||||
path: "{{ wg_config_dir }}/server_private.key"
|
||||
register: server_private_key_stat
|
||||
|
||||
- name: Generate server private key
|
||||
shell: wg genkey > {{ wg_config_dir }}/server_private.key
|
||||
when: not server_private_key_stat.stat.exists
|
||||
|
||||
- name: Set server private key permissions
|
||||
file:
|
||||
path: "{{ wg_config_dir }}/server_private.key"
|
||||
mode: '0600'
|
||||
|
||||
- name: Generate server public key
|
||||
shell: cat {{ wg_config_dir }}/server_private.key | wg pubkey > {{ wg_config_dir }}/server_public.key
|
||||
when: not server_private_key_stat.stat.exists
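# Equivalent one-shot key generation, as in the upstream WireGuard quick
# start (umask 077 keeps the private key unreadable by other users):
#   umask 077
#   wg genkey | tee server_private.key | wg pubkey > server_public.key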
|
||||
|
||||
- name: Read server private key
|
||||
slurp:
|
||||
src: "{{ wg_config_dir }}/server_private.key"
|
||||
register: server_private_key_content
|
||||
|
||||
- name: Read server public key
|
||||
slurp:
|
||||
src: "{{ wg_config_dir }}/server_public.key"
|
||||
register: server_public_key_content
|
||||
|
||||
- name: Set server key facts
|
||||
set_fact:
|
||||
wg_server_private_key: "{{ server_private_key_content.content | b64decode | trim }}"
|
||||
wg_server_public_key: "{{ server_public_key_content.content | b64decode | trim }}"
|
||||
|
||||
- name: Display server public key
|
||||
debug:
|
||||
msg: "Server Public Key: {{ wg_server_public_key }}"
|
||||
|
||||
# ========================================
|
||||
# 5. Configure WireGuard
|
||||
# ========================================
|
||||
|
||||
- name: Deploy WireGuard server configuration
|
||||
template:
|
||||
src: ../templates/wg0.conf.j2
|
||||
dest: "{{ wg_config_dir }}/wg0.conf"
|
||||
mode: '0600'
|
||||
notify: restart wireguard
|
||||
|
||||
- name: Enable IP forwarding
|
||||
sysctl:
|
||||
name: net.ipv4.ip_forward
|
||||
value: '1'
|
||||
sysctl_set: yes
|
||||
state: present
|
||||
reload: yes
|
||||
|
||||
# ========================================
|
||||
# 6. Configure nftables Firewall
|
||||
# ========================================
|
||||
|
||||
- name: Create nftables config directory
|
||||
file:
|
||||
path: /etc/nftables.d
|
||||
state: directory
|
||||
mode: '0755'
|
||||
|
||||
- name: Deploy WireGuard firewall rules
|
||||
template:
|
||||
src: ../templates/wireguard-host-firewall.nft.j2
|
||||
dest: "{{ nft_config_file }}"
|
||||
mode: '0644'
|
||||
notify: reload nftables
|
||||
|
||||
- name: Include WireGuard rules in main nftables config
|
||||
lineinfile:
|
||||
path: /etc/nftables.conf
|
||||
line: 'include "{{ nft_config_file }}"'
|
||||
create: yes
|
||||
state: present
|
||||
notify: reload nftables
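    # After this task, /etc/nftables.conf is expected to contain a line such
    # as the following (illustration only; the real path comes from the
    # nft_config_file variable):
    #   include "/etc/nftables.d/wireguard.nft"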

    - name: Enable nftables service
      systemd:
        name: nftables
        enabled: yes
        state: started

    # ========================================
    # 7. Enable and Start WireGuard
    # ========================================

    - name: Enable WireGuard interface
      systemd:
        name: wg-quick@wg0
        enabled: yes
        state: started

    - name: Verify WireGuard is running
      command: wg show wg0
      register: wg_status
      changed_when: false

    - name: Display WireGuard status
      debug:
        msg: "{{ wg_status.stdout_lines }}"

    # ========================================
    # 8. Health Checks
    # ========================================

    - name: Check WireGuard interface exists
      command: ip link show wg0
      register: wg_interface_check
      failed_when: wg_interface_check.rc != 0
      changed_when: false

    - name: Check firewall rules applied
      command: nft list ruleset
      register: nft_rules
      failed_when: "'wireguard_firewall' not in nft_rules.stdout"
      changed_when: false

    - name: Verify admin ports are blocked from public
      shell: nft list chain inet wireguard_firewall input | grep -q "admin_service_ports.*drop"
      register: admin_port_block_check
      failed_when: admin_port_block_check.rc != 0
      changed_when: false

    # ========================================
    # 9. Post-Installation Summary
    # ========================================

    - name: Create post-installation summary
      debug:
        msg:
          - "========================================="
          - "WireGuard VPN Setup Complete!"
          - "========================================="
          - ""
          - "Server Configuration:"
          - "  Interface: wg0"
          - "  Server IP: {{ wg_server_ip }}/{{ wg_netmask }}"
          - "  Listen Port: {{ wg_port }}"
          - "  Public Key: {{ wg_server_public_key }}"
          - ""
          - "Network Configuration:"
          - "  VPN Network: {{ wg_network }}"
          - "  WAN Interface: {{ wan_interface }}"
          - ""
          - "Admin Service Access (VPN-only):"
          - "  Traefik Dashboard: https://{{ wg_server_ip }}:8080"
          - "  Prometheus: http://{{ wg_server_ip }}:9090"
          - "  Grafana: https://{{ wg_server_ip }}:3001"
          - "  Portainer: http://{{ wg_server_ip }}:9000"
          - "  Redis Insight: http://{{ wg_server_ip }}:8001"
          - ""
          - "Next Steps:"
          - "  1. Generate client config: ./scripts/generate-client-config.sh <device-name>"
          - "  2. Import config on client device"
          - "  3. Connect and verify access"
          - ""
          - "Firewall Status: ACTIVE (nftables)"
          - "  - Public ports: 80, 443, 22"
          - "  - VPN port: {{ wg_port }}"
          - "  - Admin services: VPN-only access"
          - ""
          - "========================================="

  handlers:
    - name: restart wireguard
      systemd:
        name: wg-quick@wg0
        state: restarted

    - name: reload nftables
      systemd:
        name: nftables
        state: reloaded
@@ -0,0 +1,215 @@
# Traefik/Gitea Redeploy Guide

This guide explains how to perform a clean redeployment of Traefik and Gitea stacks.

## Overview

A clean redeploy:
- Stops and removes containers (preserves volumes and SSL certificates)
- Syncs latest configurations
- Redeploys stacks with fresh containers
- Restores configurations
- Verifies service discovery

**Expected downtime**: ~2-5 minutes
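
At the Docker level, the core of a clean redeploy is roughly the following (a sketch using the stack layout from this guide; `docker compose down` without `-v` keeps named volumes, and `acme.json` lives on the host, so SSL certificates survive):

```bash
# Recreate the Gitea stack without touching its data (sketch)
cd /home/deploy/deployment/stacks/gitea
docker compose down     # stops and removes containers; named volumes remain
docker compose pull     # fetch the latest images
docker compose up -d    # fresh containers reattach to the existing volumes
```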

## Prerequisites

- Ansible installed locally
- SSH access to production server
- Vault password file: `deployment/ansible/secrets/.vault_pass`

## Step-by-Step Guide

### Step 1: Backup

**Automatic backup (recommended):**
```bash
cd deployment/ansible
ansible-playbook -i inventory/production.yml \
  playbooks/maintenance/backup-before-redeploy.yml \
  --vault-password-file secrets/.vault_pass
```

**Manual backup:**
```bash
# On server
cd /home/deploy/deployment/stacks
docker compose -f gitea/docker-compose.yml exec gitea cat /data/gitea/conf/app.ini > /tmp/gitea-app.ini.backup
cp traefik/acme.json /tmp/acme.json.backup
```

### Step 2: Verify Backup

Check backup contents:
```bash
# Backup location will be shown in output
ls -lh /home/deploy/backups/redeploy-backup-*/
```

Verify:
- `acme.json` exists
- `gitea-app.ini` exists
- `gitea-volume-*.tar.gz` exists (if volumes were backed up)
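
To go beyond a pure existence check, you can also confirm that the volume archives are readable (a sketch; the exact directory name comes from the backup playbook's output):

```bash
# Pick the most recent backup and test-list each volume archive;
# a corrupt .tar.gz makes tar exit non-zero.
BACKUP_DIR=$(ls -1dt /home/deploy/backups/redeploy-backup-* | head -n1)
for archive in "$BACKUP_DIR"/gitea-volume-*.tar.gz; do
  tar -tzf "$archive" > /dev/null && echo "OK: $archive"
done
```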

### Step 3: Redeploy

**With automatic backup:**
```bash
cd deployment/ansible
ansible-playbook -i inventory/production.yml \
  playbooks/setup/redeploy-traefik-gitea-clean.yml \
  --vault-password-file secrets/.vault_pass
```

**With existing backup:**
```bash
cd deployment/ansible
ansible-playbook -i inventory/production.yml \
  playbooks/setup/redeploy-traefik-gitea-clean.yml \
  --vault-password-file secrets/.vault_pass \
  -e "backup_name=redeploy-backup-1234567890" \
  -e "skip_backup=true"
```

### Step 4: Verify Deployment

**Check Gitea accessibility:**
```bash
curl -k https://git.michaelschiemer.de/api/healthz
```

**Check Traefik service discovery:**
```bash
# On server
cd /home/deploy/deployment/stacks/traefik
docker compose exec traefik traefik show providers docker | grep -i gitea
```
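
If your Traefik version does not offer a `show` subcommand, the dashboard API exposes the same information (a sketch; it assumes the API is enabled and reachable on port 8080 inside the container):

```bash
# On server: list the routers Traefik has discovered and filter for Gitea
cd /home/deploy/deployment/stacks/traefik
docker compose exec traefik wget -qO- http://localhost:8080/api/http/routers | grep -i gitea
```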

**Check container status:**
```bash
# On server
docker ps | grep -E "traefik|gitea"
```

### Step 5: Troubleshooting

**If Gitea is not reachable:**

1. Check Gitea logs:

   ```bash
   cd /home/deploy/deployment/stacks/gitea
   docker compose logs gitea --tail=50
   ```

2. Check Traefik logs:

   ```bash
   cd /home/deploy/deployment/stacks/traefik
   docker compose logs traefik --tail=50
   ```

3. Check service discovery:

   ```bash
   cd /home/deploy/deployment/stacks/traefik
   docker compose exec traefik traefik show providers docker
   ```

4. Run diagnosis:

   ```bash
   cd deployment/ansible
   ansible-playbook -i inventory/production.yml \
     playbooks/diagnose/gitea.yml \
     --vault-password-file secrets/.vault_pass
   ```

**If you run into SSL certificate issues:**

1. Check acme.json permissions:

   ```bash
   ls -l /home/deploy/deployment/stacks/traefik/acme.json
   # Should be: -rw------- (600)
   ```

2. Check Traefik ACME logs:

   ```bash
   cd /home/deploy/deployment/stacks/traefik
   docker compose logs traefik | grep -i acme
   ```

## Rollback Procedure

If something goes wrong, roll back to the backup:

```bash
cd deployment/ansible
ansible-playbook -i inventory/production.yml \
  playbooks/maintenance/rollback-redeploy.yml \
  --vault-password-file secrets/.vault_pass \
  -e "backup_name=redeploy-backup-1234567890"
```

Replace `redeploy-backup-1234567890` with the actual backup name from Step 1.
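
If you did not note the name, the most recent backup directory can be found like this:

```bash
ls -1dt /home/deploy/backups/redeploy-backup-* | head -n1
```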

## What Gets Preserved

- ✅ Gitea data (volumes)
- ✅ SSL certificates (acme.json)
- ✅ Gitea configuration (app.ini)
- ✅ Traefik configuration
- ✅ PostgreSQL data (if applicable)

## What Gets Recreated

- 🔄 Traefik container
- 🔄 Gitea container
- 🔄 Service discovery

## Common Issues

### Issue: Gitea returns 404 after redeploy

**Solution:**
1. Wait 1-2 minutes for service discovery
2. Restart Traefik: `cd /home/deploy/deployment/stacks/traefik && docker compose restart traefik`
3. Check if Gitea is in traefik-public network: `docker network inspect traefik-public | grep gitea`

### Issue: SSL certificate errors

**Solution:**
1. Verify acme.json permissions: `chmod 600 /home/deploy/deployment/stacks/traefik/acme.json`
2. Check Traefik logs for ACME errors
3. Wait 5-10 minutes for certificate renewal
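
To see which certificates Traefik has actually obtained, you can inspect `acme.json` directly (a sketch; it assumes `jq` is available on the server, and that the file's top-level keys are your certificate resolvers):

```bash
# List the main domain of every stored certificate, across all resolvers
jq -r 'to_entries[] | .value.Certificates[]? | .domain.main' \
  /home/deploy/deployment/stacks/traefik/acme.json
```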

### Issue: Gitea configuration lost

**Solution:**
1. Restore from backup: `playbooks/maintenance/rollback-redeploy.yml`
2. Or manually restore app.ini:

   ```bash
   cd /home/deploy/deployment/stacks/gitea
   docker compose exec gitea sh -c "cat > /data/gitea/conf/app.ini" < /path/to/backup/gitea-app.ini
   docker compose restart gitea
   ```

## Best Practices

1. **Always back up before redeploying** - Use the automatic backup
2. **Test in staging first** - If available
3. **Monitor during deployment** - Watch logs in a separate terminal (see the sketch after this list)
4. **Have a rollback ready** - Know the backup name before starting
5. **Verify after deployment** - Check that all services are accessible
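
One simple monitoring setup, using the paths from this guide:

```bash
# Terminal 1: follow Traefik logs while containers are recreated
cd /home/deploy/deployment/stacks/traefik && docker compose logs -f traefik

# Terminal 2: watch container state every 2 seconds
watch -n 2 'docker ps --format "table {{.Names}}\t{{.Status}}" | grep -E "traefik|gitea"'
```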

## Related Playbooks

- `playbooks/maintenance/backup-before-redeploy.yml` - Create backup
- `playbooks/setup/redeploy-traefik-gitea-clean.yml` - Perform redeploy
- `playbooks/maintenance/rollback-redeploy.yml` - Rollback from backup
- `playbooks/diagnose/gitea.yml` - Diagnose Gitea issues
- `playbooks/diagnose/traefik.yml` - Diagnose Traefik issues

14
deployment/legacy/ansible/ansible/playbooks/setup/gitea.yml
Normal file
@@ -0,0 +1,14 @@
---
# Setup Gitea Initial Configuration
# Wrapper Playbook for gitea role setup tasks
- hosts: production
  gather_facts: yes
  become: no
  tasks:
    - name: Include gitea setup tasks
      ansible.builtin.include_role:
        name: gitea
        tasks_from: setup
      tags:
        - gitea
        - setup
@@ -0,0 +1,242 @@
---
- name: Deploy Infrastructure Stacks on Production Server
  hosts: production
  become: no
  gather_facts: yes

  vars:
    # All deployment variables are now defined in group_vars/production.yml
    # Variables can be overridden via -e flag if needed
    vault_file: "{{ playbook_dir }}/../secrets/production.vault.yml"

  pre_tasks:
    - name: Verify vault file exists
      ansible.builtin.stat:
        path: "{{ vault_file }}"
      register: vault_stat
      delegate_to: localhost
      become: no

    - name: Load encrypted secrets from vault
      ansible.builtin.include_vars:
        file: "{{ vault_file }}"
      when: vault_stat.stat.exists
      no_log: yes
      ignore_errors: yes
      delegate_to: localhost
      become: no

    # no_log is deliberately absent here: it would censor this debug output
    # entirely, and the message only reveals whether values are set and their
    # lengths, never the values themselves.
    - name: Verify vault secrets were loaded
      ansible.builtin.debug:
        msg: |
          Vault secrets loaded:
          - vault_db_password: {{ 'SET (length: ' + (vault_db_password | default('') | string | length | string) + ')' if (vault_db_password | default('') | string | trim) != '' else 'NOT SET or EMPTY' }}
          - vault_redis_password: {{ 'SET' if (vault_redis_password | default('') | string | trim) != '' else 'NOT SET' }}
          - vault_app_key: {{ 'SET' if (vault_app_key | default('') | string | trim) != '' else 'NOT SET' }}
          - vault_docker_registry_password: {{ 'SET (length: ' + (vault_docker_registry_password | default('') | string | length | string) + ')' if (vault_docker_registry_password | default('') | string | trim) != '' else 'NOT SET or EMPTY' }}
      when: vault_stat.stat.exists

    - name: Warn if vault file is missing
      ansible.builtin.debug:
        msg: "WARNING: Vault file not found at {{ vault_file }}. Some roles may fail if they require vault secrets."
      when: not vault_stat.stat.exists

  tasks:
    - name: Debug - Show variables
      debug:
        msg:
          - "stacks_base_path: {{ stacks_base_path | default('NOT SET') }}"
          - "deploy_user_home: {{ deploy_user_home | default('NOT SET') }}"
      when: true  # Debugging enabled

    - name: Check if deployment stacks directory exists
      stat:
        path: "{{ stacks_base_path }}"
      register: stacks_dir

    - name: Create deployment stacks directory if it doesn't exist
      file:
        path: "{{ stacks_base_path }}"
        state: directory
        mode: '0755'
        owner: "{{ ansible_user }}"
        group: "{{ ansible_user }}"
      when: not stacks_dir.stat.exists

    - name: Ensure rsync is installed (required for synchronize)
      ansible.builtin.apt:
        name: rsync
        state: present
        update_cache: no
      become: yes

    - name: Sync infrastructure stacks to server
      synchronize:
        src: "{{ playbook_dir }}/../../stacks/"
        dest: "{{ stacks_base_path }}/"
        delete: no
        recursive: yes
        rsync_opts:
          - "--chmod=D755,F644"
          - "--exclude=.git"
          - "--exclude=*.log"
          - "--exclude=data/"
          - "--exclude=volumes/"
          - "--exclude=acme.json"
          - "--exclude=*.key"
          - "--exclude=*.pem"

    - name: Ensure executable permissions on PostgreSQL backup scripts
      file:
        path: "{{ item }}"
        mode: '0755'
      loop:
        - "{{ stacks_base_path }}/postgresql-production/scripts/backup-entrypoint.sh"
        - "{{ stacks_base_path }}/postgresql-production/scripts/backup.sh"
        - "{{ stacks_base_path }}/postgresql-production/scripts/restore.sh"
        - "{{ stacks_base_path }}/postgresql-staging/scripts/backup-entrypoint.sh"
        - "{{ stacks_base_path }}/postgresql-staging/scripts/backup.sh"
        - "{{ stacks_base_path }}/postgresql-staging/scripts/restore.sh"
      ignore_errors: yes

    - name: Ensure system packages are up to date
      include_role:
        name: system
      when: system_update_packages | bool

    # Create external networks required by all stacks
    - name: Create traefik-public network
      community.docker.docker_network:
        name: traefik-public
        driver: bridge
        state: present

    - name: Create app-internal network
      community.docker.docker_network:
        name: app-internal
        driver: bridge
        state: present

    # 1. Deploy Traefik (Reverse Proxy & SSL)
    - name: Deploy Traefik stack
      import_role:
        name: traefik

    # 2. Deploy PostgreSQL Production (Database)
    - name: Deploy PostgreSQL Production stack
      import_role:
        name: postgresql-production

    # 3. Deploy Redis (Cache & Session Store)
    - name: Deploy Redis stack
      import_role:
        name: redis

    # 4. Deploy Docker Registry (Private Registry)
    - name: Deploy Docker Registry stack
      import_role:
        name: registry

    # 5. Deploy MinIO (Object Storage)
    - name: Deploy MinIO stack
      import_role:
        name: minio

    # 6. Deploy Gitea (CRITICAL - Git Server + MySQL)
    - name: Deploy Gitea stack
      import_role:
        name: gitea

    # 7. Deploy Monitoring (Portainer + Grafana + Prometheus)
    - name: Deploy Monitoring stack
      import_role:
        name: monitoring

    # 8. Deploy Production Stack
    - name: Deploy Production Stack
      import_role:
        name: application
      vars:
        application_stack_src: "{{ playbook_dir | default(role_path + '/..') }}/../../stacks/production"
        application_stack_dest: "{{ app_stack_path | default(stacks_base_path + '/production') }}"
        application_compose_suffix: "production.yml"
        application_service_name: "php"
        application_env_template: "{{ role_path }}/../../templates/application.env.j2"
        app_env: "production"
        # Explicitly pass vault variables to the role
        vault_docker_registry_password: "{{ vault_docker_registry_password | default('') }}"
        app_domain: "michaelschiemer.de"
        app_debug: "false"
        db_name: "{{ db_name_default }}"
        db_host: "{{ db_host_default }}"

    # Verification
    - name: List all running containers
      command: >
        docker ps --format 'table {{ "{{" }}.Names{{ "}}" }}\t{{ "{{" }}.Status{{ "}}" }}\t{{ "{{" }}.Ports{{ "}}" }}'
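      # The {{ "{{" }} / {{ "}}" }} pairs emit literal {{ and }}, so Jinja2
      # leaves Docker's Go-template placeholders alone; the rendered command is:
      #   docker ps --format 'table {{.Names}}\t{{.Status}}\t{{.Ports}}'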
      register: docker_ps_output

    - name: Display running containers
      debug:
        msg: "{{ docker_ps_output.stdout_lines }}"

    - name: Verify Gitea accessibility via HTTPS
      uri:
        url: "https://{{ gitea_domain }}"
        method: GET
        validate_certs: no
        status_code: 200
        timeout: 10
      register: gitea_http_check
      ignore_errors: yes

    - name: Display Gitea accessibility status
      debug:
        msg: "Gitea HTTPS check: {{ 'SUCCESS' if gitea_http_check.status == 200 else 'FAILED - Status: ' + (gitea_http_check.status|string) }}"

    - name: Display application health status
      debug:
        msg: "Application health: {{ application_health_output if application_health_output != '' else 'All services healthy or starting' }}"

    - name: Display migration result
      debug:
        msg: |
          Migration Result:
          {{ application_migration_stdout if application_migration_stdout != '' else 'Migration may have failed - check logs with: docker compose -f ' + application_stack_dest + '/docker-compose.yml logs app' }}
      when: application_stack_changed and application_run_migrations

    - name: Display application accessibility status
      debug:
        msg: >-
          Application health check: {{
            'SUCCESS (HTTP ' + (application_healthcheck_status | string) + ')'
            if application_healthcheck_status == 200 else
            'FAILED or not ready yet (HTTP ' + (application_healthcheck_status | string) + ')'
          }}
      when: application_stack_changed and application_healthcheck_url | length > 0

    - name: Summary
      debug:
        msg:
          - "=== Infrastructure Deployment Complete ==="
          - "Traefik: {{ 'Deployed' if traefik_stack_changed is defined and traefik_stack_changed else 'Already running' }}"
          - "PostgreSQL: {{ 'Deployed' if postgresql_stack_changed is defined and postgresql_stack_changed else 'Already running' }}"
          - "Redis: {{ 'Deployed' if redis_stack_changed is defined and redis_stack_changed else 'Already running' }}"
          - "Docker Registry: {{ 'Deployed' if registry_stack_changed is defined and registry_stack_changed else 'Already running' }}"
          - "MinIO: {{ 'Deployed' if minio_stack_changed is defined and minio_stack_changed else 'Already running' }}"
          - "Gitea: {{ 'Deployed' if gitea_stack_changed is defined and gitea_stack_changed else 'Already running' }}"
          - "Monitoring: {{ 'Deployed' if monitoring_stack_changed is defined and monitoring_stack_changed else 'Already running' }}"
          - "Application: {{ 'Deployed' if application_stack_changed is defined and application_stack_changed else 'Already running' }}"
          - ""
          - "Next Steps:"
          - "1. Access Gitea at: https://{{ gitea_domain }}"
          - "2. Complete Gitea setup wizard if first-time deployment"
          - "3. Navigate to Admin > Actions > Runners to get registration token"
          - "4. Continue with Phase 1 - Gitea Runner Setup"
          - "5. Access Application at: https://{{ app_domain }}"
@@ -0,0 +1,321 @@
---
# Clean Redeploy Traefik and Gitea Stacks
# Complete redeployment with backup, container recreation, and verification
#
# Usage:
#   # With automatic backup
#   ansible-playbook -i inventory/production.yml playbooks/setup/redeploy-traefik-gitea-clean.yml \
#     --vault-password-file secrets/.vault_pass
#
#   # With existing backup
#   ansible-playbook -i inventory/production.yml playbooks/setup/redeploy-traefik-gitea-clean.yml \
#     --vault-password-file secrets/.vault_pass \
#     -e "backup_name=redeploy-backup-1234567890" \
#     -e "skip_backup=true"

- name: Clean Redeploy Traefik and Gitea
  hosts: production
  gather_facts: yes
  become: no
  vars:
    traefik_stack_path: "{{ stacks_base_path }}/traefik"
    gitea_stack_path: "{{ stacks_base_path }}/gitea"
    gitea_url: "https://{{ gitea_domain }}"
    traefik_container_name: "traefik"
    gitea_container_name: "gitea"
    backup_base_path: "{{ backups_path | default('/home/deploy/backups') }}"
    # Normalized copy of the optional -e parameter; defining skip_backup in
    # terms of itself here would be a self-referential template, so a
    # distinct name is used.
    skip_backup_flag: "{{ skip_backup | default(false) | bool }}"

  tasks:
    # ========================================
    # 1. BACKUP (unless skipped)
    # ========================================
    - name: Set backup name fact
      ansible.builtin.set_fact:
        # default(..., true) also covers an explicitly empty backup_name
        actual_backup_name: "{{ backup_name | default('redeploy-backup-' + ansible_date_time.epoch, true) }}"
      when: not skip_backup_flag

    - name: Display backup note
      ansible.builtin.debug:
        msg: |
          ⚠️  NOTE: Backup should be run separately before redeploy:
          ansible-playbook -i inventory/production.yml playbooks/maintenance/backup-before-redeploy.yml \
            --vault-password-file secrets/.vault_pass \
            -e "backup_name={{ actual_backup_name }}"

          Or use an existing backup with: -e "backup_name=redeploy-backup-XXXXX" -e "skip_backup=true"
      when: not skip_backup_flag

    - name: Display redeployment plan
      ansible.builtin.debug:
        msg: |
          ================================================================================
          CLEAN REDEPLOY TRAEFIK AND GITEA
          ================================================================================

          This playbook will:
          1. ✅ Backup ({% if skip_backup_flag %}SKIPPED{% else %}Performed{% endif %})
          2. ✅ Stop and remove Traefik containers (keeps acme.json)
          3. ✅ Stop and remove Gitea containers (keeps volumes/data)
          4. ✅ Sync latest stack configurations
          5. ✅ Redeploy Traefik stack
          6. ✅ Redeploy Gitea stack
          7. ✅ Restore Gitea configuration (app.ini)
          8. ✅ Verify service discovery
          9. ✅ Test Gitea accessibility

          ⚠️  IMPORTANT:
          - SSL certificates (acme.json) will be preserved
          - Gitea data (volumes) will be preserved
          - Only containers will be recreated
          - Expected downtime: ~2-5 minutes
          {% if not skip_backup_flag %}
          - Backup location: {{ backup_base_path }}/{{ actual_backup_name }}
          {% endif %}

          ================================================================================

    # ========================================
    # 2. STOP AND REMOVE CONTAINERS
    # ========================================
    - name: Stop Traefik stack
      ansible.builtin.shell: |
        cd {{ traefik_stack_path }}
        docker compose down
      register: traefik_stop
      changed_when: traefik_stop.rc == 0
      failed_when: false

    - name: Remove Traefik containers (if any remain)
      ansible.builtin.shell: |
        docker ps -a --filter "name={{ traefik_container_name }}" --format "{{ '{{' }}.ID{{ '}}' }}" | xargs -r docker rm -f 2>/dev/null || true
      register: traefik_remove
      changed_when: traefik_remove.rc == 0
      failed_when: false

    - name: Stop Gitea stack (preserves volumes)
      ansible.builtin.shell: |
        cd {{ gitea_stack_path }}
        docker compose down
      register: gitea_stop
      changed_when: gitea_stop.rc == 0
      failed_when: false

    - name: Remove Gitea containers (if any remain, volumes are preserved)
      ansible.builtin.shell: |
        docker ps -a --filter "name={{ gitea_container_name }}" --format "{{ '{{' }}.ID{{ '}}' }}" | xargs -r docker rm -f 2>/dev/null || true
      register: gitea_remove
      changed_when: gitea_remove.rc == 0
      failed_when: false

    # ========================================
    # 3. SYNC CONFIGURATIONS
    # ========================================
    - name: Get stacks directory path
      ansible.builtin.set_fact:
        stacks_source_path: "{{ playbook_dir | dirname | dirname | dirname }}/stacks"
      delegate_to: localhost
      run_once: true

    - name: Sync stacks directory to production server
      ansible.builtin.synchronize:
        src: "{{ stacks_source_path }}/"
        dest: "{{ stacks_base_path }}/"
        delete: no
        recursive: yes
        rsync_opts:
          - "--chmod=D755,F644"
          - "--exclude=.git"
          - "--exclude=*.log"
          - "--exclude=data/"
          - "--exclude=volumes/"
          - "--exclude=acme.json"  # Preserve SSL certificates
          - "--exclude=*.key"
          - "--exclude=*.pem"

    # ========================================
    # 4. ENSURE ACME.JSON EXISTS
    # ========================================
    - name: Check if acme.json exists
      ansible.builtin.stat:
        path: "{{ traefik_stack_path }}/acme.json"
      register: acme_json_stat

    - name: Ensure acme.json exists and has correct permissions
      ansible.builtin.file:
        path: "{{ traefik_stack_path }}/acme.json"
        state: touch
        mode: '0600'
        owner: "{{ ansible_user }}"
        group: "{{ ansible_user }}"
      become: yes
      register: acme_json_ensure

    # ========================================
    # 5. REDEPLOY TRAEFIK
    # ========================================
    - name: Deploy Traefik stack
      community.docker.docker_compose_v2:
        project_src: "{{ traefik_stack_path }}"
        state: present
        pull: always
      register: traefik_deploy

    - name: Wait for Traefik to be ready
      ansible.builtin.shell: |
        cd {{ traefik_stack_path }}
        docker compose ps {{ traefik_container_name }} | grep -Eiq "Up|running"
      register: traefik_ready
      changed_when: false
      until: traefik_ready.rc == 0
      retries: 12
      delay: 5
      failed_when: traefik_ready.rc != 0

    # ========================================
    # 6. REDEPLOY GITEA
    # ========================================
    - name: Deploy Gitea stack
      community.docker.docker_compose_v2:
        project_src: "{{ gitea_stack_path }}"
        state: present
        pull: always
      register: gitea_deploy

    - name: Wait for Gitea to be ready
      ansible.builtin.shell: |
        cd {{ gitea_stack_path }}
        docker compose ps {{ gitea_container_name }} | grep -Eiq "Up|running"
      register: gitea_ready
      changed_when: false
      until: gitea_ready.rc == 0
      retries: 12
      delay: 5
      failed_when: gitea_ready.rc != 0

    - name: Wait for Gitea to be healthy
      ansible.builtin.shell: |
        cd {{ gitea_stack_path }}
        docker compose exec -T {{ gitea_container_name }} curl -f http://localhost:3000/api/healthz 2>&1 | grep -q "status.*pass" && echo "HEALTHY" || echo "NOT_HEALTHY"
      register: gitea_health
      changed_when: false
      until: gitea_health.stdout == "HEALTHY"
      retries: 30
      delay: 2
      failed_when: false

    # ========================================
    # 7. RESTORE GITEA CONFIGURATION
    # ========================================
    - name: Restore Gitea app.ini from backup
      ansible.builtin.shell: |
        if [ -f "{{ backup_base_path }}/{{ actual_backup_name }}/gitea-app.ini" ]; then
          cd {{ gitea_stack_path }}
          docker compose exec -T {{ gitea_container_name }} sh -c "cat > /data/gitea/conf/app.ini" < "{{ backup_base_path }}/{{ actual_backup_name }}/gitea-app.ini"
          docker compose restart {{ gitea_container_name }}
          echo "app.ini restored and Gitea restarted"
        else
          echo "No app.ini backup found, using default configuration"
        fi
      when: not skip_backup_flag
      register: gitea_app_ini_restore
      changed_when: false
      failed_when: false

    # ========================================
    # 8. VERIFY SERVICE DISCOVERY
    # ========================================
    - name: Wait for service discovery (Traefik needs time to discover Gitea)
      ansible.builtin.pause:
        seconds: 15

    - name: Check if Gitea is in traefik-public network
      ansible.builtin.shell: |
        docker network inspect traefik-public --format '{{ '{{' }}range .Containers{{ '}}' }}{{ '{{' }}.Name{{ '}}' }} {{ '{{' }}end{{ '}}' }}' 2>/dev/null | grep -q {{ gitea_container_name }} && echo "YES" || echo "NO"
      register: gitea_in_network
      changed_when: false

    - name: Test direct connection from Traefik to Gitea
      ansible.builtin.shell: |
        cd {{ traefik_stack_path }}
        docker compose exec -T {{ traefik_container_name }} wget -qO- --timeout=5 http://{{ gitea_container_name }}:3000/api/healthz 2>&1 | head -5 || echo "CONNECTION_FAILED"
      register: traefik_gitea_direct
      changed_when: false
      failed_when: false

    # ========================================
    # 9. FINAL VERIFICATION
    # ========================================
    - name: Test Gitea via HTTPS (with retries)
      ansible.builtin.uri:
        url: "{{ gitea_url }}/api/healthz"
        method: GET
        status_code: [200]
        validate_certs: false
        timeout: 10
      register: gitea_https_test
      until: gitea_https_test.status == 200
      retries: 20
      delay: 3
      changed_when: false
      failed_when: false

    - name: Check SSL certificate status
      ansible.builtin.shell: |
        cd {{ traefik_stack_path }}
        if [ -f acme.json ] && [ -s acme.json ]; then
          echo "SSL certificates: PRESENT"
        else
          echo "SSL certificates: MISSING or EMPTY"
        fi
      register: ssl_status
      changed_when: false

    - name: Final status summary
      ansible.builtin.debug:
        msg: |
          ================================================================================
          REDEPLOYMENT SUMMARY
          ================================================================================

          Traefik:
          - Status: {{ (traefik_ready.rc == 0) | ternary('Up', 'Down') }}
          - SSL Certificates: {{ ssl_status.stdout }}

          Gitea:
          - Status: {{ (gitea_ready.rc == 0) | ternary('Up', 'Down') }}
          - Health: {% if gitea_health.stdout == 'HEALTHY' %}✅ Healthy{% else %}❌ Not Healthy{% endif %}

          - Configuration: {% if gitea_app_ini_restore.stdout is defined and 'restored' in gitea_app_ini_restore.stdout %}✅ Restored{% else %}ℹ️  Using default{% endif %}

          Service Discovery:
          - Gitea in network: {% if gitea_in_network.stdout == 'YES' %}✅{% else %}❌{% endif %}

          - Direct connection: {% if 'CONNECTION_FAILED' not in traefik_gitea_direct.stdout %}✅{% else %}❌{% endif %}

          Gitea Accessibility:
          {% if gitea_https_test.status == 200 %}
          ✅ Gitea is reachable via HTTPS (Status: 200)
          URL: {{ gitea_url }}
          {% else %}
          ❌ Gitea is NOT reachable via HTTPS (Status: {{ gitea_https_test.status | default('TIMEOUT') }})

          Possible causes:
          1. SSL certificate is still being generated (wait 2-5 minutes)
          2. Service discovery needs more time (wait 1-2 minutes)
          3. Network configuration issue

          Next steps:
          - Wait 2-5 minutes and test again: curl -k {{ gitea_url }}/api/healthz
          - Check Traefik logs: cd {{ traefik_stack_path }} && docker compose logs {{ traefik_container_name }} --tail=50
          - Check Gitea logs: cd {{ gitea_stack_path }} && docker compose logs {{ gitea_container_name }} --tail=50
          {% endif %}

          {% if not skip_backup_flag %}
          Backup location: {{ backup_base_path }}/{{ actual_backup_name }}
          To rollback: ansible-playbook -i inventory/production.yml playbooks/maintenance/rollback-redeploy.yml \
            --vault-password-file secrets/.vault_pass \
            -e "backup_name={{ actual_backup_name }}"
          {% endif %}

          ================================================================================
19
deployment/legacy/ansible/ansible/playbooks/setup/ssl.yml
Normal file
@@ -0,0 +1,19 @@
---
# Setup Let's Encrypt SSL Certificates via Traefik
# Wrapper Playbook for traefik role ssl tasks
- hosts: production
  gather_facts: yes
  become: no
  vars:
    # ssl_domains and acme_email are defined in group_vars/production.yml
    # Can be overridden via -e flag if needed
    traefik_ssl_domains: "{{ ssl_domains | default([gitea_domain, app_domain]) }}"
  tasks:
    - name: Include traefik ssl tasks
      ansible.builtin.include_role:
        name: traefik
        tasks_from: ssl
      tags:
        - traefik
        - ssl
        - certificates
@@ -0,0 +1,17 @@
---
# Sync docker-compose files and recreate containers
# Wrapper Playbook for application role containers tasks (sync-recreate action)
- hosts: production
  gather_facts: no
  become: no
  vars:
    application_container_action: sync-recreate
  tasks:
    - name: Include application containers tasks (sync-recreate)
      ansible.builtin.include_role:
        name: application
        tasks_from: containers
      tags:
        - application
        - containers
        - sync
@@ -0,0 +1,17 @@
---
# Synchronize Application Code to Production Server
# Wrapper Playbook for application role deploy_code tasks (rsync method)
- hosts: production
  gather_facts: yes
  become: no
  vars:
    application_deployment_method: rsync
  tasks:
    - name: Include application deploy_code tasks (rsync)
      ansible.builtin.include_role:
        name: application
        tasks_from: deploy_code
      tags:
        - application
        - sync
        - code
63
deployment/legacy/ansible/ansible/playbooks/sync-stacks.yml
Normal file
@@ -0,0 +1,63 @@
---
- name: Sync Infrastructure Stacks to Production Server
  hosts: production
  become: no
  gather_facts: yes

  vars:
    local_stacks_path: "{{ playbook_dir }}/../../stacks"
    remote_stacks_path: "{{ stacks_base_path | default('/home/deploy/deployment/stacks') }}"

  tasks:
    - name: Ensure deployment directory exists on production
      file:
        path: "{{ remote_stacks_path }}"
        state: directory
        mode: '0755'
        owner: "{{ ansible_user }}"
        group: "{{ ansible_user }}"

    - name: Sync stacks directory to production server
      synchronize:
        src: "{{ local_stacks_path }}/"
        dest: "{{ remote_stacks_path }}/"
        delete: no
        recursive: yes
        rsync_opts:
          - "--chmod=D755,F644"
          - "--exclude=.git"
          - "--exclude=*.log"
          - "--exclude=data/"
          - "--exclude=volumes/"
          - "--exclude=acme.json"
          - "--exclude=*.key"
          - "--exclude=*.pem"
          - "--exclude=app.ini"
          - "--exclude=app.ini.minimal"

    - name: Ensure executable permissions on PostgreSQL backup scripts
      file:
        path: "{{ item }}"
        mode: '0755'
      loop:
        - "{{ remote_stacks_path }}/postgresql-production/scripts/backup-entrypoint.sh"
        - "{{ remote_stacks_path }}/postgresql-production/scripts/backup.sh"
        - "{{ remote_stacks_path }}/postgresql-production/scripts/restore.sh"
        - "{{ remote_stacks_path }}/postgresql-staging/scripts/backup-entrypoint.sh"
        - "{{ remote_stacks_path }}/postgresql-staging/scripts/backup.sh"
        - "{{ remote_stacks_path }}/postgresql-staging/scripts/restore.sh"
      ignore_errors: yes

    - name: Verify stacks directory exists on production
      stat:
        path: "{{ remote_stacks_path }}"
      register: stacks_dir

    - name: Display sync results
      debug:
        msg:
          - "=== Stacks Synchronization Complete ==="
          - "Stacks directory exists: {{ stacks_dir.stat.exists }}"
          - "Path: {{ remote_stacks_path }}"
          - ""
          - "Next: Run infrastructure deployment playbook"
@@ -0,0 +1,11 @@
---
- name: Apply system maintenance on production hosts
  hosts: production
  gather_facts: yes
  become: yes

  tasks:
    - name: Run system maintenance role
      include_role:
        name: system
      when: system_update_packages | bool
47
deployment/legacy/ansible/ansible/playbooks/troubleshoot.yml
Normal file
@@ -0,0 +1,47 @@
---
- name: Application Troubleshooting
  hosts: production
  gather_facts: yes
  become: no

  # All variables are now defined in group_vars/production.yml

  tasks:
    - name: Check container health
      include_tasks: tasks/check-health.yml
      tags: ['health', 'check', 'all']

    - name: Diagnose 404 errors
      include_tasks: tasks/diagnose-404.yml
      tags: ['404', 'diagnose', 'all']

    - name: Fix container health checks
      include_tasks: tasks/fix-health-checks.yml
      tags: ['health', 'fix', 'all']

    - name: Fix nginx 404
      include_tasks: tasks/fix-nginx-404.yml
      tags: ['nginx', '404', 'fix', 'all']

    - name: Display usage information
      debug:
        msg:
          - "=== Troubleshooting Playbook ==="
          - ""
          - "Usage examples:"
          - "  # Check health only:"
          - "  ansible-playbook troubleshoot.yml --tags health,check"
          - ""
          - "  # Diagnose 404 only:"
          - "  ansible-playbook troubleshoot.yml --tags 404,diagnose"
          - ""
          - "  # Fix health checks:"
          - "  ansible-playbook troubleshoot.yml --tags health,fix"
          - ""
          - "  # Fix nginx 404:"
          - "  ansible-playbook troubleshoot.yml --tags nginx,404,fix"
          - ""
          - "  # Run all checks:"
          - "  ansible-playbook troubleshoot.yml --tags all"
      when: true
      tags: ['never']
@@ -0,0 +1,14 @@
---
# Update Gitea Configuration
# Wrapper Playbook for gitea role config tasks
- hosts: production
  gather_facts: yes
  become: no
  tasks:
    - name: Include gitea config tasks
      ansible.builtin.include_role:
        name: gitea
        tasks_from: config
      tags:
        - gitea
        - config
@@ -0,0 +1,58 @@
---
- name: Verify Environment Variable Loading
  hosts: production
  gather_facts: no
  become: no

  vars:
    application_stack_dest: "{{ app_stack_path | default(stacks_base_path + '/production') }}"
    application_compose_suffix: "production.yml"

  tasks:
    - name: Check if docker-compose.production.yml has env_file with absolute path
      shell: |
        grep -A 2 "env_file:" {{ application_stack_dest }}/docker-compose.production.yml | head -5
      delegate_to: "{{ inventory_hostname }}"
      register: env_file_check
      changed_when: false

    - name: Display env_file configuration
      debug:
        msg: |
          env_file configuration in docker-compose.production.yml:
          {{ env_file_check.stdout }}

    - name: Wait for container to be running
      pause:
        seconds: 3

    - name: Check all environment variables in queue-worker container
      shell: |
        timeout 5 docker compose -f {{ application_stack_dest }}/docker-compose.base.yml -f {{ application_stack_dest }}/docker-compose.{{ application_compose_suffix }} exec -T queue-worker env 2>&1 | grep -E "^DB_|^APP_|^REDIS_" | sort || echo "CONTAINER_NOT_RUNNING"
      register: all_env_vars
      changed_when: false
      failed_when: false
      ignore_errors: yes
      retries: 3
      delay: 2

    - name: Display all environment variables
      debug:
        msg: |
          All DB/APP/REDIS Environment Variables in queue-worker:
          {{ all_env_vars.stdout }}

    - name: Test if we can read .env file from container
      shell: |
        docker compose -f {{ application_stack_dest }}/docker-compose.base.yml -f {{ application_stack_dest }}/docker-compose.{{ application_compose_suffix }} exec -T queue-worker cat /home/deploy/deployment/stacks/production/.env 2>&1 | head -20 || echo "FILE_NOT_READABLE"
      register: env_file_read_test
      changed_when: false
      failed_when: false
      ignore_errors: yes

    - name: Display .env file read test
      debug:
        msg: |
          .env file read test from container:
          {{ env_file_read_test.stdout }}
@@ -0,0 +1,136 @@
---
- name: Verify Production Environment
  hosts: production
  become: no
  gather_facts: yes

  vars:
    # All deployment variables are now defined in group_vars/production.yml

  tasks:
    - name: Debug - Show variables
      debug:
        msg:
          - "app_stack_path: {{ app_stack_path | default('NOT SET') }}"
          - "postgresql_production_stack_path: {{ postgresql_production_stack_path | default('NOT SET') }}"
      when: false  # Disable by default, enable for debugging

    - name: Check if PostgreSQL-Production Stack exists
      stat:
        path: "{{ postgresql_production_stack_path }}"
      register: postgresql_production_stack_dir

    - name: Fail if PostgreSQL-Production Stack doesn't exist
      fail:
        msg: "PostgreSQL-Production Stack not found at {{ postgresql_production_stack_path }}"
      when: not postgresql_production_stack_dir.stat.exists

    - name: Check PostgreSQL-Production container status
      shell: |
        # grep without -q so the matching line (containing "Up"/"running")
        # lands in stdout; the sentinel is uppercase so the literal string
        # "running" checked below cannot match it.
        docker compose -f {{ postgresql_production_stack_path }}/docker-compose.yml ps postgres-production 2>/dev/null | grep -Ei "Up|running" || echo "NOT_RUNNING"
      register: postgresql_production_status
      changed_when: false
      failed_when: false

    - name: Display PostgreSQL-Production status
      debug:
        msg: "PostgreSQL-Production: {{ 'RUNNING' if 'Up' in postgresql_production_status.stdout or 'running' in postgresql_production_status.stdout else 'NOT RUNNING' }}"

    - name: Verify PostgreSQL-Production connection
      shell: |
        docker exec postgres-production pg_isready -U postgres -d michaelschiemer 2>/dev/null || echo "not_ready"
      register: postgresql_production_ready
      changed_when: false
      failed_when: false
      when: "'Up' in postgresql_production_status.stdout or 'running' in postgresql_production_status.stdout"

    - name: Display PostgreSQL-Production connection status
      debug:
        msg: "PostgreSQL-Production Connection: {{ 'READY' if 'accepting connections' in postgresql_production_ready.stdout else 'NOT READY' }}"
      when: postgresql_production_ready.stdout is defined

    - name: Check if Production Application Stack exists
      stat:
        path: "{{ app_stack_path | default(stacks_base_path + '/production') }}"
      register: production_stack_dir

    - name: Fail if Production Application Stack doesn't exist
      fail:
        msg: "Production Application Stack not found at {{ app_stack_path | default(stacks_base_path + '/production') }}"
      when: not production_stack_dir.stat.exists

    - name: Check production application container status
      shell: |
        docker ps --format "{{ '{{' }}.Names{{ '}}' }}" | grep -E "^(app|php)" | head -1 || echo "not_running"
      register: production_app_container
      changed_when: false
      failed_when: false

    - name: Display production application container status
      debug:
        msg: "Production App Container: {{ production_app_container.stdout if production_app_container.stdout != 'not_running' else 'NOT RUNNING' }}"

    - name: Verify Networks
      shell: |
        docker network ls --format "{{ '{{' }}.Name{{ '}}' }}" | grep -E "(traefik-public|postgres-production-internal|app-internal)" || echo "networks_missing"
      register: networks_status
      changed_when: false
      failed_when: false

    - name: Display Networks status
      debug:
        msg: "{{ networks_status.stdout_lines }}"

    - name: Test Network connectivity from production app to postgres-production
      shell: |
        docker exec {{ production_app_container.stdout }} nc -zv postgres-production 5432 2>&1 || echo "connection_failed"
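        # nc -zv wording differs by implementation: BusyBox prints "open",
        # OpenBSD netcat prints "succeeded"; the status check below accepts both.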
      register: network_test
      changed_when: false
      failed_when: false
      when: production_app_container.stdout != 'not_running'

    - name: Display Network connectivity status
      debug:
        msg: "Network connectivity: {{ 'SUCCESS' if 'succeeded' in network_test.stdout or 'open' in network_test.stdout else 'FAILED' }}"
      when: network_test.stdout is defined

    - name: Basic Health Check
      uri:
        url: "https://michaelschiemer.de/health"
        method: GET
        validate_certs: no
        status_code: [200, 404, 502, 503]
        timeout: 10
      register: basic_health_check
      ignore_errors: yes

    - name: Display Basic Health Check status
      debug:
        msg: "Basic Health Check: {{ 'SUCCESS' if basic_health_check.status == 200 else 'FAILED - Status: ' + (basic_health_check.status|string) }}"

    - name: Extended Health Check
      uri:
        url: "https://michaelschiemer.de/admin/health/api/summary"
        method: GET
        validate_certs: no
        status_code: [200, 404, 502, 503]
        timeout: 10
      register: extended_health_check
      ignore_errors: yes

    - name: Display Extended Health Check status
      debug:
        msg: "Extended Health Check: {{ 'SUCCESS' if extended_health_check.status == 200 else 'NOT AVAILABLE' }}"
      when: extended_health_check.status is defined

    - name: Display verification summary
      debug:
        msg:
          - "=========================================="
          - "Production Verification Summary"
          - "=========================================="
          - "PostgreSQL-Production: {{ 'RUNNING' if 'Up' in postgresql_production_status.stdout or 'running' in postgresql_production_status.stdout else 'NOT RUNNING' }}"
          - "Production App: {{ production_app_container.stdout if production_app_container.stdout != 'not_running' else 'NOT RUNNING' }}"
          - "Basic Health Check: {{ 'SUCCESS' if basic_health_check.status == 200 else 'FAILED' }}"
          - "=========================================="
136
deployment/legacy/ansible/ansible/playbooks/verify-staging.yml
Normal file
@@ -0,0 +1,136 @@
---
- name: Verify Staging Environment
  hosts: production
  become: no
  gather_facts: yes

  vars:
    # All deployment variables are now defined in group_vars/staging.yml

  tasks:
    - name: Debug - Show variables
      debug:
        msg:
          - "staging_stack_path: {{ staging_stack_path | default('NOT SET') }}"
          - "postgresql_staging_stack_path: {{ postgresql_staging_stack_path | default('NOT SET') }}"
      when: false  # Disable by default, enable for debugging

    - name: Check if PostgreSQL-Staging Stack exists
      stat:
        path: "{{ postgresql_staging_stack_path }}"
      register: postgresql_staging_stack_dir

    - name: Fail if PostgreSQL-Staging Stack doesn't exist
      fail:
        msg: "PostgreSQL-Staging Stack not found at {{ postgresql_staging_stack_path }}"
      when: not postgresql_staging_stack_dir.stat.exists

    - name: Check PostgreSQL-Staging container status
      shell: |
        # grep without -q so the matching line reaches stdout; uppercase
        # sentinel so the literal "running" check below cannot match it.
        docker compose -f {{ postgresql_staging_stack_path }}/docker-compose.yml ps postgres-staging 2>/dev/null | grep -Ei "Up|running" || echo "NOT_RUNNING"
      register: postgresql_staging_status
      changed_when: false
      failed_when: false

    - name: Display PostgreSQL-Staging status
      debug:
        msg: "PostgreSQL-Staging: {{ 'RUNNING' if 'Up' in postgresql_staging_status.stdout or 'running' in postgresql_staging_status.stdout else 'NOT RUNNING' }}"

    - name: Verify PostgreSQL-Staging connection
      shell: |
        docker exec postgres-staging pg_isready -U postgres -d michaelschiemer_staging 2>/dev/null || echo "not_ready"
      register: postgresql_staging_ready
      changed_when: false
      failed_when: false
      when: "'Up' in postgresql_staging_status.stdout or 'running' in postgresql_staging_status.stdout"

    - name: Display PostgreSQL-Staging connection status
      debug:
        msg: "PostgreSQL-Staging Connection: {{ 'READY' if 'accepting connections' in postgresql_staging_ready.stdout else 'NOT READY' }}"
      when: postgresql_staging_ready.stdout is defined

    - name: Check if Staging Application Stack exists
      stat:
        path: "{{ staging_stack_path }}"
      register: staging_stack_dir

    - name: Fail if Staging Application Stack doesn't exist
      fail:
        msg: "Staging Application Stack not found at {{ staging_stack_path }}"
      when: not staging_stack_dir.stat.exists

    - name: Check staging-app container status
      shell: |
        docker compose -f {{ staging_stack_path }}/docker-compose.base.yml -f {{ staging_stack_path }}/docker-compose.staging.yml ps staging-app 2>/dev/null | grep -Ei "Up|running" || echo "NOT_RUNNING"
      register: staging_app_status
      changed_when: false
      failed_when: false

    - name: Display staging-app status
      debug:
        msg: "staging-app: {{ 'RUNNING' if 'Up' in staging_app_status.stdout or 'running' in staging_app_status.stdout else 'NOT RUNNING' }}"

    - name: Verify Networks
      shell: |
        docker network ls --format "{{ '{{' }}.Name{{ '}}' }}" | grep -E "(traefik-public|staging-internal|postgres-staging-internal)" || echo "networks_missing"
      register: networks_status
      changed_when: false
      failed_when: false

    - name: Display Networks status
      debug:
        msg: "{{ networks_status.stdout_lines }}"

    - name: Test Network connectivity from staging-app to postgres-staging
      shell: |
        docker exec staging-app nc -zv postgres-staging 5432 2>&1 || echo "connection_failed"
      register: network_test
      changed_when: false
      failed_when: false
      when: "'Up' in staging_app_status.stdout or 'running' in staging_app_status.stdout"

    - name: Display Network connectivity status
      debug:
        msg: "Network connectivity: {{ 'SUCCESS' if 'succeeded' in network_test.stdout or 'open' in network_test.stdout else 'FAILED' }}"
      when: network_test.stdout is defined

    - name: Basic Health Check
      uri:
        url: "https://staging.michaelschiemer.de/health"
        method: GET
        validate_certs: no
        status_code: [200, 404, 502, 503]
        timeout: 10
      register: basic_health_check
      ignore_errors: yes

    - name: Display Basic Health Check status
      debug:
        msg: "Basic Health Check: {{ 'SUCCESS' if basic_health_check.status == 200 else 'FAILED - Status: ' + (basic_health_check.status|string) }}"

    - name: Extended Health Check
      uri:
        url: "https://staging.michaelschiemer.de/admin/health/api/summary"
        method: GET
        validate_certs: no
        status_code: [200, 404, 502, 503]
        timeout: 10
      register: extended_health_check
      ignore_errors: yes

    - name: Display Extended Health Check status
      debug:
        msg: "Extended Health Check: {{ 'SUCCESS' if extended_health_check.status == 200 else 'NOT AVAILABLE' }}"
      when: extended_health_check.status is defined

    - name: Display verification summary
      debug:
        msg:
          - "=========================================="
          - "Staging Verification Summary"
          - "=========================================="
          - "PostgreSQL-Staging: {{ 'RUNNING' if 'Up' in postgresql_staging_status.stdout or 'running' in postgresql_staging_status.stdout else 'NOT RUNNING' }}"
          - "staging-app: {{ 'RUNNING' if 'Up' in staging_app_status.stdout or 'running' in staging_app_status.stdout else 'NOT RUNNING' }}"
          - "Basic Health Check: {{ 'SUCCESS' if basic_health_check.status == 200 else 'FAILED' }}"
          - "=========================================="
@@ -0,0 +1,212 @@
|
||||
---
|
||||
- name: Configure WireGuard split tunnel routing
|
||||
hosts: production
|
||||
become: true
|
||||
gather_facts: true
|
||||
|
||||
vars:
|
||||
wg_interface: wg0
|
||||
wg_addr: 10.8.0.1/24
|
||||
wg_net: 10.8.0.0/24
|
||||
wan_interface: eth0
|
||||
listening_port: 51820
|
||||
extra_nets:
|
||||
- 192.168.178.0/24
|
||||
- 172.20.0.0/16
|
||||
firewall_backend: iptables # or nftables
|
||||
manage_ufw: false
|
||||
manage_firewalld: false
|
||||
firewalld_zone: public
|
||||

  pre_tasks:
    - name: Ensure required collections are installed (documentation note)
      debug:
        msg: >
          Install collections if missing:
          ansible-galaxy collection install ansible.posix community.general
      when: false

  tasks:
    - name: Ensure WireGuard config directory exists
      ansible.builtin.file:
        path: "/etc/wireguard"
        state: directory
        mode: "0700"
        owner: root
        group: root

    - name: Persist IPv4 forwarding
      ansible.builtin.copy:
        dest: "/etc/sysctl.d/99-{{ wg_interface }}-forward.conf"
        owner: root
        group: root
        mode: "0644"
        content: |
          # Managed by Ansible - WireGuard {{ wg_interface }}
          net.ipv4.ip_forward=1

    - name: Enable IPv4 forwarding runtime
      ansible.posix.sysctl:
        name: net.ipv4.ip_forward
        value: "1"
        state: present
        reload: true
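    # Quick manual check (illustration): after this task, running
    # `sysctl net.ipv4.ip_forward` on the host should print
    # "net.ipv4.ip_forward = 1".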

    - name: Configure MASQUERADE (iptables)
      ansible.builtin.iptables:
        table: nat
        chain: POSTROUTING
        out_interface: "{{ wan_interface }}"
        source: "{{ wg_net }}"
        jump: MASQUERADE
        state: present
      when: firewall_backend == "iptables"

    - name: Allow forwarding wg -> wan (iptables)
      ansible.builtin.iptables:
        table: filter
        chain: FORWARD
        in_interface: "{{ wg_interface }}"
        out_interface: "{{ wan_interface }}"
        source: "{{ wg_net }}"
        jump: ACCEPT
        state: present
      when: firewall_backend == "iptables"

    - name: Allow forwarding wan -> wg (iptables)
      ansible.builtin.iptables:
        table: filter
        chain: FORWARD
        out_interface: "{{ wg_interface }}"
        destination: "{{ wg_net }}"
        ctstate: RELATED,ESTABLISHED
        jump: ACCEPT
        state: present
      when: firewall_backend == "iptables"

    - name: Allow forwarding to extra nets (iptables)
      ansible.builtin.iptables:
        table: filter
        chain: FORWARD
        in_interface: "{{ wg_interface }}"
        destination: "{{ item }}"
        jump: ACCEPT
        state: present
      loop: "{{ extra_nets }}"
      when: firewall_backend == "iptables"

    - name: Allow return from extra nets (iptables)
      ansible.builtin.iptables:
        table: filter
        chain: FORWARD
        source: "{{ item }}"
        out_interface: "{{ wg_interface }}"
        ctstate: RELATED,ESTABLISHED
        jump: ACCEPT
        state: present
      loop: "{{ extra_nets }}"
      when: firewall_backend == "iptables"

    - name: Deploy nftables WireGuard rules
      ansible.builtin.template:
        src: "{{ playbook_dir }}/../templates/wireguard-nftables.nft.j2"
        dest: "/etc/nftables.d/wireguard-{{ wg_interface }}.nft"
        owner: root
        group: root
        mode: "0644"
      when: firewall_backend == "nftables"
      notify: Reload nftables

    - name: Ensure nftables main config includes WireGuard rules
      ansible.builtin.lineinfile:
        path: /etc/nftables.conf
        regexp: '^include "/etc/nftables.d/wireguard-{{ wg_interface }}.nft";$'
        line: 'include "/etc/nftables.d/wireguard-{{ wg_interface }}.nft";'
        create: true
      when: firewall_backend == "nftables"
      notify: Reload nftables

    - name: Manage UFW forward policy
      ansible.builtin.lineinfile:
        path: /etc/default/ufw
        regexp: '^DEFAULT_FORWARD_POLICY='
        line: 'DEFAULT_FORWARD_POLICY="ACCEPT"'
      when: manage_ufw

    - name: Allow WireGuard port in UFW
      community.general.ufw:
        rule: allow
        port: "{{ listening_port }}"
        proto: udp
        comment: "WireGuard VPN"
      when: manage_ufw

    - name: Allow routed traffic via UFW (wg -> wan)
      ansible.builtin.command:
        cmd: "ufw route allow in on {{ wg_interface }} out on {{ wan_interface }} to any"
      register: ufw_route_result
      changed_when: "'Skipping' not in ufw_route_result.stdout"
      when: manage_ufw

    - name: Allow extra nets via UFW
      ansible.builtin.command:
        cmd: "ufw route allow in on {{ wg_interface }} to {{ item }}"
      loop: "{{ extra_nets }}"
      register: ufw_extra_result
      changed_when: "'Skipping' not in ufw_extra_result.stdout"
      when: manage_ufw

    - name: Allow WireGuard port in firewalld
      ansible.posix.firewalld:
        zone: "{{ firewalld_zone }}"
        port: "{{ listening_port }}/udp"
        permanent: true
        state: enabled
      when: manage_firewalld

    - name: Enable firewalld masquerade
      ansible.posix.firewalld:
        zone: "{{ firewalld_zone }}"
        masquerade: true
        permanent: true
        state: enabled
      when: manage_firewalld

    - name: Allow forwarding from WireGuard via firewalld
      ansible.posix.firewalld:
        permanent: true
        state: enabled
        immediate: false
        rich_rule: 'rule family="ipv4" source address="{{ wg_net }}" accept'
      when: manage_firewalld

    - name: Allow extra nets via firewalld
      ansible.posix.firewalld:
        permanent: true
        state: enabled
        immediate: false
        rich_rule: 'rule family="ipv4" source address="{{ item }}" accept'
      loop: "{{ extra_nets }}"
      when: manage_firewalld
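    # Caveat (observation, not from the original play): the firewalld rules above
    # use permanent: true with immediate: false, so they only take effect after a
    # `firewall-cmd --reload` (or a firewalld restart), which this play does not
    # perform.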

    - name: Ensure wg-quick service enabled and restarted
      ansible.builtin.systemd:
        name: "wg-quick@{{ wg_interface }}"
        enabled: true
        state: restarted

    - name: Show WireGuard status
      ansible.builtin.command: "wg show {{ wg_interface }}"
      register: wg_status
      changed_when: false
      failed_when: false

    - name: Render routing summary
      ansible.builtin.debug:
        msg: |
          WireGuard routing updated for {{ wg_interface }}
          {{ wg_status.stdout }}

  handlers:
    - name: Reload nftables
      ansible.builtin.command: nft -f /etc/nftables.conf

@@ -0,0 +1,116 @@
---
# Source path for production stack files on the control node
# Use playbook_dir as base, then go to ../stacks/production
# This assumes playbooks are in deployment/ansible/playbooks
# Note: Use ~ for string concatenation in Jinja2 templates
# Note: Don't use application_stack_src in the default chain to avoid recursion
application_stack_src: "{{ (playbook_dir | default(role_path + '/..') | dirname | dirname | dirname) ~ '/stacks/production' }}"
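# Worked example (illustration only; the concrete path is an assumption): if
# playbook_dir is /repo/deployment/ansible/playbooks, applying | dirname three
# times yields /repo, so the expression above resolves to /repo/stacks/production.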

# Destination path on the target host (defaults to configured app_stack_path)
# Note: Don't use application_stack_dest in the default chain to avoid recursion
# Note: Use ~ for string concatenation in Jinja2 templates
application_stack_dest: "{{ app_stack_path | default((stacks_base_path | default('/home/deploy/deployment/stacks')) ~ '/production') }}"

# Template used to generate the application .env file
application_env_template: "{{ role_path }}/../../templates/application.env.j2"

# Optional vault file containing secrets (loaded if present)
application_vault_file: "{{ role_path }}/../../secrets/production.vault.yml"

# Whether to synchronize stack files from repository
application_sync_files: true

# Compose recreate strategy ("auto", "always", "never")
application_compose_recreate: "auto"

# Whether to remove orphaned containers during compose up
application_remove_orphans: false

# Whether to run database migrations after (re)deploying the stack
application_run_migrations: true

# Optional health check URL to verify after deployment
application_healthcheck_url: "{{ health_check_url | default('') }}"

# Timeout used for waits in this role
application_wait_timeout: "{{ wait_timeout | default(60) }}"
application_wait_interval: 5

# Command executed inside the app container to run migrations
application_migration_command: "php console.php db:migrate"

# Environment (production, staging, local)
# Determines which compose files to use and service names
application_environment: "{{ APP_ENV | default('production') }}"

# Compose file suffix based on environment
application_compose_suffix: "{{ 'staging.yml' if application_environment == 'staging' else 'production.yml' }}"

# Service names based on environment
application_service_name: "{{ 'staging-app' if application_environment == 'staging' else 'php' }}"
application_php_service_name: "{{ application_service_name }}"

# Code Deployment Configuration
application_code_dest: "/home/deploy/michaelschiemer/current"
application_deployment_method: "git"  # Options: git, rsync
application_git_repository_url_default: "https://git.michaelschiemer.de/michael/michaelschiemer.git"
application_git_branch: "{{ 'staging' if application_environment == 'staging' else 'main' }}"
application_git_retries: 5
application_git_retry_delay: 10
application_rsync_source: "{{ playbook_dir | default('') | dirname | dirname | dirname }}"
application_rsync_opts:
  - "--chmod=D755,F644"
  - "--exclude=.git"
  - "--exclude=.gitignore"
  - "--exclude=node_modules"
  - "--exclude=vendor"
  - "--exclude=.env"
  - "--exclude=.env.*"
  - "--exclude=*.log"
  - "--exclude=.idea"
  - "--exclude=.vscode"
  - "--exclude=.DS_Store"
  - "--exclude=*.swp"
  - "--exclude=*.swo"
  - "--exclude=*~"
  - "--exclude=.phpunit.result.cache"
  - "--exclude=coverage"
  - "--exclude=.phpunit.cache"
  - "--exclude=public/assets"
  - "--exclude=storage/logs"
  - "--exclude=storage/framework/cache"
  - "--exclude=storage/framework/sessions"
  - "--exclude=storage/framework/views"
  - "--exclude=deployment"
  - "--exclude=docker"
  - "--exclude=.deployment-archive-*"
  - "--exclude=docs"
  - "--exclude=tests"
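# Illustration (hypothetical host and paths): with the options above, the
# synchronize task in deploy-code.yml ends up invoking rsync roughly as
#   rsync -a --chmod=D755,F644 --exclude=.git ... <repo-root>/ <target>:/home/deploy/michaelschiemer/current/
# i.e. archive mode plus the exclude list, never shipping secrets or build artifacts.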
application_php_scripts:
  - worker.php
  - console.php
application_critical_files:
  - worker.php
  - console.php
  - composer.json

# Composer Configuration
application_restart_workers_after_composer: true

# Container Management Configuration
application_container_action: "fix"  # Options: fix, fix-web, recreate, recreate-with-env, sync-recreate
application_container_target_services: "queue-worker scheduler"
application_container_status_services: "queue-worker web scheduler php"
application_container_stabilize_wait: 5

# Health Check Configuration
application_health_check_logs_tail: 20
application_health_check_final: false
application_show_status: true

# Logs Configuration
application_logs_tail: 50
application_logs_check_vendor: true
application_logs_check_permissions: true
application_logs_check_files: true
application_logs_list_files: false
@@ -0,0 +1,10 @@
---
# Handlers for Application Role

- name: restart application workers
  ansible.builtin.shell: |
    cd {{ application_code_dest }}
    docker compose -f docker-compose.base.yml -f docker-compose.{{ application_compose_suffix }} restart queue-worker scheduler
  changed_when: true
  failed_when: false

@@ -0,0 +1,87 @@
---
# Install Composer Dependencies in Application Container

- name: Check if composer.json exists
  ansible.builtin.stat:
    path: "{{ application_code_dest }}/composer.json"
  register: composer_json_exists

- name: Fail if composer.json is missing
  ansible.builtin.fail:
    msg: "composer.json not found at {{ application_code_dest }}/composer.json"
  when: not composer_json_exists.stat.exists

- name: Check if container is running
  ansible.builtin.shell: |
    cd {{ application_code_dest }}
    docker compose -f docker-compose.base.yml -f docker-compose.{{ application_compose_suffix }} ps {{ application_php_service_name }} --format json
  register: container_status
  changed_when: false
  failed_when: false

- name: Display container status
  ansible.builtin.debug:
    msg: "Container status: {{ container_status.stdout }}"
  when: application_show_status | default(true) | bool

- name: Fail if container is not running
  ansible.builtin.fail:
    msg: |
      Container '{{ application_php_service_name }}' is not running!

      The container must be started before installing composer dependencies.
      This is typically done by the 'deploy-image.yml' playbook, which should run before this one.

      To start the container manually:
        cd {{ application_code_dest }}
        docker compose -f docker-compose.base.yml -f docker-compose.{{ application_compose_suffix }} up -d {{ application_php_service_name }}

      Note: The container requires environment variables (DB_USERNAME, DB_PASSWORD, etc.),
      which should be set in a .env file or via the docker-compose environment configuration.
  when: container_status.rc != 0 or '"State":"running"' not in container_status.stdout

- name: Install composer dependencies in PHP container
  ansible.builtin.shell: |
    cd {{ application_code_dest }}
    docker compose -f docker-compose.base.yml -f docker-compose.{{ application_compose_suffix }} exec -T {{ application_php_service_name }} composer install --no-dev --optimize-autoloader --no-interaction
  register: composer_install
  changed_when: true
  failed_when: composer_install.rc != 0
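# Flag notes (for reference; these are standard Composer options): --no-dev skips
# require-dev packages, --optimize-autoloader dumps an optimized class map for
# faster autoloading, and --no-interaction prevents Composer from blocking on
# prompts when run non-interactively via `exec -T`.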

- name: Display composer install output
  ansible.builtin.debug:
    msg: |
      Composer Install Output:
      stdout: {{ composer_install.stdout }}
      stderr: {{ composer_install.stderr }}
      rc: {{ composer_install.rc }}
  when:
    - composer_install.rc != 0
    - application_show_status | default(true) | bool

- name: Restart queue-worker and scheduler to pick up vendor directory
  ansible.builtin.shell: |
    cd {{ application_code_dest }}
    docker compose -f docker-compose.base.yml -f docker-compose.{{ application_compose_suffix }} restart queue-worker scheduler
  register: restart_workers
  changed_when: true
  failed_when: false
  when: application_restart_workers_after_composer | default(true) | bool

- name: Verify vendor/autoload.php exists
  ansible.builtin.shell: |
    cd {{ application_code_dest }}
    docker compose -f docker-compose.base.yml -f docker-compose.{{ application_compose_suffix }} exec -T {{ application_php_service_name }} test -f /var/www/html/vendor/autoload.php && echo "EXISTS" || echo "MISSING"
  register: autoload_check
  changed_when: false

- name: Display autoload verification
  ansible.builtin.debug:
    msg: "vendor/autoload.php: {{ autoload_check.stdout | trim }}"
  when: application_show_status | default(true) | bool

- name: Fail if autoload.php is missing
  ansible.builtin.fail:
    msg: "vendor/autoload.php was not created after composer install"
  when: "autoload_check.stdout | trim != 'EXISTS'"

@@ -0,0 +1,86 @@
---
# Container Management Tasks (Fix, Recreate, etc.)

- name: Check if vendor directory exists on host
  ansible.builtin.stat:
    path: "{{ application_code_dest }}/vendor"
  register: vendor_dir_exists

- name: Display vendor directory status
  ansible.builtin.debug:
    msg: "vendor directory on host: {{ 'EXISTS' if vendor_dir_exists.stat.exists else 'MISSING' }}"
  when: application_show_status | default(true) | bool

- name: Install composer dependencies in PHP container (if vendor missing)
  ansible.builtin.shell: |
    cd {{ application_code_dest }}
    docker compose -f docker-compose.base.yml -f docker-compose.{{ application_compose_suffix }} exec -T {{ application_php_service_name }} composer install --no-dev --optimize-autoloader --no-interaction
  register: composer_install
  changed_when: true
  failed_when: composer_install.rc != 0
  when:
    - application_container_action | default('fix') == 'fix'
    - not vendor_dir_exists.stat.exists

- name: Verify vendor/autoload.php exists in container
  ansible.builtin.shell: |
    cd {{ application_code_dest }}
    docker compose -f docker-compose.base.yml -f docker-compose.{{ application_compose_suffix }} exec -T {{ application_php_service_name }} test -f /var/www/html/vendor/autoload.php && echo "EXISTS" || echo "MISSING"
  register: autoload_check
  changed_when: false
  when: application_container_action | default('fix') == 'fix'

- name: Display autoload verification
  ansible.builtin.debug:
    msg: "vendor/autoload.php in container: {{ autoload_check.stdout | trim }}"
  when:
    - application_container_action | default('fix') == 'fix'
    - application_show_status | default(true) | bool

- name: Recreate web container with new security settings
  ansible.builtin.shell: |
    cd {{ application_code_dest }}
    docker compose -f docker-compose.base.yml -f docker-compose.{{ application_compose_suffix }} up -d --force-recreate --no-deps web
  register: recreate_web
  changed_when: true
  when:
    - application_container_action | default('fix') in ['fix', 'fix-web']

- name: Recreate queue-worker and scheduler containers
  ansible.builtin.shell: |
    cd {{ application_code_dest }}
    docker compose -f docker-compose.base.yml -f docker-compose.{{ application_compose_suffix }} up -d --force-recreate {{ application_container_target_services | default('queue-worker scheduler') }}
  register: recreate_containers
  changed_when: true
  when:
    - application_container_action | default('fix') in ['recreate', 'recreate-with-env', 'sync-recreate']

- name: Restart queue-worker and scheduler to pick up vendor directory
  ansible.builtin.shell: |
    cd {{ application_code_dest }}
    docker compose -f docker-compose.base.yml -f docker-compose.{{ application_compose_suffix }} restart queue-worker scheduler
  register: restart_workers
  changed_when: true
  failed_when: false
  when:
    - application_container_action | default('fix') == 'fix'
    - application_restart_workers_after_composer | default(true) | bool

- name: Wait for containers to stabilize
  ansible.builtin.pause:
    seconds: "{{ application_container_stabilize_wait | default(5) }}"
  when: application_container_action | default('fix') in ['fix', 'recreate', 'recreate-with-env', 'sync-recreate']

- name: Get final container status
  ansible.builtin.shell: |
    cd {{ application_code_dest }}
    docker compose -f docker-compose.base.yml -f docker-compose.{{ application_compose_suffix }} ps {{ application_container_status_services | default('queue-worker web scheduler php') }}
  register: final_status
  changed_when: false

- name: Display final container status
  ansible.builtin.debug:
    msg: |
      {{ final_status.stdout }}
  when: application_show_status | default(true) | bool

@@ -0,0 +1,425 @@
---
- name: Debug all available variables before password determination
  ansible.builtin.debug:
    msg: |
      Available variables for registry password:
      - docker_registry_password_default defined: {{ docker_registry_password_default is defined }}
      - vault_docker_registry_password defined: {{ vault_docker_registry_password is defined }}
      - All vault_* variable names: {{ vars.keys() | select('match', '^vault_.*') | list | join(', ') }}
  delegate_to: localhost
  become: no

- name: Check if docker_registry_password_default is set (safe check)
  ansible.builtin.set_fact:
    _docker_registry_password_default_set: "{{ 'YES' if (docker_registry_password_default is defined and docker_registry_password_default | string | trim != '') else 'NO' }}"
  delegate_to: localhost
  become: no
  when: docker_registry_password_default is defined

- name: Check if vault_docker_registry_password is set (safe check)
  ansible.builtin.set_fact:
    _vault_docker_registry_password_set: "{{ 'YES' if (vault_docker_registry_password is defined and vault_docker_registry_password | string | trim != '') else 'NO' }}"
  delegate_to: localhost
  become: no
  when: vault_docker_registry_password is defined

- name: Debug password status
  ansible.builtin.debug:
    msg: |
      Password status:
      - docker_registry_password_default: {{ _docker_registry_password_default_set | default('NOT DEFINED') }}
      - vault_docker_registry_password: {{ _vault_docker_registry_password_set | default('NOT DEFINED') }}
  delegate_to: localhost
  become: no

- name: Determine Docker registry password from vault or defaults
  ansible.builtin.set_fact:
    registry_password: >-
      {%- if docker_registry_password_default is defined and docker_registry_password_default | string | trim != '' -%}
      {{ docker_registry_password_default }}
      {%- elif vault_docker_registry_password is defined and vault_docker_registry_password | string | trim != '' -%}
      {{ vault_docker_registry_password }}
      {%- else -%}
      {{ '' }}
      {%- endif -%}
  no_log: yes
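# Sketch (an alternative formulation, not the original logic): the if/elif block
# above could also be written with Jinja2 default-chaining, where default(..., true)
# treats empty (falsy) values as undefined:
#   registry_password: "{{ docker_registry_password_default | default(vault_docker_registry_password | default('', true), true) }}"
# The explicit block is kept because it also trims whitespace-only values.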

- name: Debug registry password source after determination
  ansible.builtin.debug:
    msg: |
      Registry password determination result:
      - docker_registry_password_default: {{ 'SET (length: ' + (docker_registry_password_default | default('') | string | length | string) + ')' if (docker_registry_password_default | default('') | string | trim) != '' else 'NOT SET' }}
      - vault_docker_registry_password defined: {{ vault_docker_registry_password is defined }}
      - vault_docker_registry_password set: {{ 'YES (length: ' + (vault_docker_registry_password | default('') | string | length | string) + ')' if (vault_docker_registry_password | default('') | string | trim) != '' else 'NO' }}
      - registry_password set: {{ 'YES (length: ' + (registry_password | default('') | string | length | string) + ')' if (registry_password | default('') | string | trim) != '' else 'NO' }}
  delegate_to: localhost
  become: no

- name: Debug vault loading
  ansible.builtin.debug:
    msg: |
      Vault loading status:
      - Vault file exists: {{ application_vault_stat.stat.exists | default(false) }}
      - vault_docker_registry_password defined: {{ vault_docker_registry_password is defined }}
      - vault_docker_registry_password value: {{ 'SET (length: ' + (vault_docker_registry_password | default('') | string | length | string) + ')' if (vault_docker_registry_password | default('') | string | trim) != '' else 'NOT SET or EMPTY' }}
      - registry_password: {{ 'SET (length: ' + (registry_password | default('') | string | length | string) + ')' if (registry_password | default('') | string | trim) != '' else 'NOT SET or EMPTY' }}

- name: Check if registry is accessible
  ansible.builtin.uri:
    url: "http://{{ docker_registry | default('localhost:5000') }}/v2/"
    method: GET
    status_code: [200, 401]
    timeout: 5
  register: registry_check
  ignore_errors: yes
  delegate_to: "{{ inventory_hostname }}"
  become: no

- name: Debug registry accessibility
  ansible.builtin.debug:
    msg: |
      Registry accessibility check:
      - Registry URL: http://{{ docker_registry | default('localhost:5000') }}/v2/
      - Status code: {{ registry_check.status | default('UNKNOWN') }}
      - Accessible: {{ 'YES' if registry_check.status | default(0) in [200, 401] else 'NO' }}
      - Note: Status 401 means the registry requires authentication (expected)
  delegate_to: localhost
  become: no

- name: Login to Docker registry
  community.docker.docker_login:
    registry_url: "{{ docker_registry | default('localhost:5000') }}"
    username: "{{ docker_registry_username_default | default('admin') }}"
    password: "{{ registry_password }}"
  when:
    - registry_password | string | trim != ''
    - registry_check.status | default(0) in [200, 401]
  no_log: yes
  ignore_errors: yes
  register: docker_login_result

- name: Warn if Docker registry login failed
  ansible.builtin.debug:
    msg: "WARNING: Docker registry login failed or was skipped. Images may not be pullable without authentication."
  when:
    - registry_password | string | trim != ''
    - docker_login_result.failed | default(false)

- name: Debug registry authentication status
  ansible.builtin.debug:
    msg: |
      Registry authentication status:
      - Registry: {{ docker_registry | default('localhost:5000') }}
      - Password set: {{ 'YES' if (registry_password | string | trim) != '' else 'NO' }}
      - Login result: {{ 'SUCCESS' if (docker_login_result.failed | default(true) == false) else 'FAILED or SKIPPED' }}
      - Username: {{ docker_registry_username_default | default('admin') }}

- name: Fail if registry password is not set
  ansible.builtin.fail:
    msg: |
      Docker registry authentication required but no password is set!

      The registry at {{ docker_registry | default('localhost:5000') }} requires authentication.

      Please set the password in one of these ways:

      1. Set it in the vault file (recommended):
         ansible-vault edit {{ vault_file | default('inventory/group_vars/production/vault.yml') }}
         # Add: vault_docker_registry_password: "your-password"

      2. Pass it via extra vars:
         -e "docker_registry_password_default=your-password"

      3. Use the init-secrets.sh script to generate all passwords:
         cd deployment/ansible
         ./scripts/init-secrets.sh

      Note: The registry password was likely generated when the registry stack was deployed.
      Check the registry role output or the vault file for the generated password.
  when:
    - registry_password | string | trim == ''
    - docker_registry | default('localhost:5000') == 'localhost:5000'

- name: Check registry htpasswd file to verify password
  ansible.builtin.shell: |
    if [ -f "{{ registry_auth_path | default('/home/deploy/deployment/stacks/registry/auth') }}/htpasswd" ]; then
      cat "{{ registry_auth_path | default('/home/deploy/deployment/stacks/registry/auth') }}/htpasswd"
    else
      echo "htpasswd file not found"
    fi
  register: registry_htpasswd_check
  changed_when: false
  failed_when: false
  delegate_to: "{{ inventory_hostname }}"
  become: no
  when: docker_login_result.failed | default(false)

- name: Debug registry password mismatch
  ansible.builtin.debug:
    msg: |
      Registry authentication failed!

      Registry: {{ docker_registry | default('localhost:5000') }}
      Username: {{ docker_registry_username_default | default('admin') }}

      Possible causes:
      1. The password in the vault does not match the password used during registry deployment
      2. The registry was deployed with a different password (generated by the registry role)
      3. The username is incorrect

      To fix:
      1. Check the registry htpasswd file on the server:
         cat {{ registry_auth_path | default('/home/deploy/deployment/stacks/registry/auth') }}/htpasswd

      2. Extract the password from the registry .env file (if available):
         grep REGISTRY_AUTH {{ registry_stack_path | default('/home/deploy/deployment/stacks/registry') }}/.env

      3. Update the vault file with the correct password:
         ansible-vault edit {{ vault_file | default('inventory/group_vars/production/vault.yml') }}
         # Set: vault_docker_registry_password: "correct-password"

      4. Or re-deploy the registry stack with the password from the vault:
         ansible-playbook -i inventory/production.yml playbooks/setup-infrastructure.yml --tags registry

      Registry htpasswd file content:
      {{ registry_htpasswd_check.stdout | default('NOT FOUND') }}
  when:
    - registry_password | string | trim != ''
    - docker_login_result.failed | default(false)

- name: Fail if registry authentication failed and a password was provided
  ansible.builtin.fail:
    msg: |
      Docker registry authentication failed!

      Registry: {{ docker_registry | default('localhost:5000') }}
      Username: {{ docker_registry_username_default | default('admin') }}

      The password in the vault file does not match the password used during registry deployment.
      Please check the debug output above for instructions on how to fix this.
  when:
    - registry_password | string | trim != ''
    - docker_login_result.failed | default(false)

- name: Force pull latest Docker images before deployment
  ansible.builtin.shell: |
    docker compose -f {{ application_stack_dest }}/docker-compose.base.yml -f {{ application_stack_dest }}/docker-compose.{{ application_compose_suffix }} pull --ignore-pull-failures
  changed_when: false
  failed_when: false
  when: not ansible_check_mode

- name: Verify entrypoint script exists in Docker image (method 1 - file check)
  ansible.builtin.shell: |
    docker run --rm --entrypoint=/bin/sh {{ docker_registry | default('localhost:5000') }}/{{ app_name | default('framework') }}:latest -c "test -f /usr/local/bin/entrypoint.sh && ls -la /usr/local/bin/entrypoint.sh || echo 'FILE_NOT_FOUND'"
  register: entrypoint_check
  changed_when: false
  failed_when: false

- name: Verify entrypoint script exists in Docker image (method 2 - inspect image)
  ansible.builtin.shell: |
    docker image inspect {{ docker_registry | default('localhost:5000') }}/{{ app_name | default('framework') }}:latest --format '{{ "{{" }}.Config.Entrypoint{{ "}}" }}' 2>&1 || echo "INSPECT_FAILED"
  register: entrypoint_inspect
  changed_when: false
  failed_when: false

- name: Verify entrypoint script exists in Docker image (method 3 - extract and check)
  ansible.builtin.shell: |
    CONTAINER_ID=$(docker create {{ docker_registry | default('localhost:5000') }}/{{ app_name | default('framework') }}:latest 2>/dev/null) && \
    docker cp $CONTAINER_ID:/usr/local/bin/entrypoint.sh /tmp/entrypoint_check.sh 2>&1 && \
    if [ -f /tmp/entrypoint_check.sh ]; then \
      echo "FILE_EXISTS"; \
      ls -la /tmp/entrypoint_check.sh; \
      head -5 /tmp/entrypoint_check.sh; \
      rm -f /tmp/entrypoint_check.sh; \
    else \
      echo "FILE_NOT_FOUND"; \
    fi && \
    docker rm $CONTAINER_ID >/dev/null 2>&1 || true
  register: entrypoint_extract
  changed_when: false
  failed_when: false

- name: Set entrypoint verification message
  ansible.builtin.set_fact:
    entrypoint_verification_msg: |
      ==========================================
      Entrypoint Script Verification
      ==========================================
      Image: {{ docker_registry | default('localhost:5000') }}/{{ app_name | default('framework') }}:latest

      Method 1 - File Check:
      Return Code: {{ entrypoint_check.rc | default('unknown') }}
      Output: {{ entrypoint_check.stdout | default('No output') }}

      Method 2 - Image Inspect:
      Entrypoint Config: {{ entrypoint_inspect.stdout | default('Not available') }}

      Method 3 - Extract and Check:
      {{ entrypoint_extract.stdout | default('Check not performed') }}

      {% if 'FILE_NOT_FOUND' in entrypoint_check.stdout or 'FILE_NOT_FOUND' in entrypoint_extract.stdout %}
      ⚠️ WARNING: Entrypoint script NOT FOUND in image!

      This means the Docker image was built without the entrypoint script.
      Possible causes:
      1. The entrypoint script was not copied during rsync to the build directory
      2. The Dockerfile COPY command failed silently
      3. The image needs to be rebuilt with --no-cache

      Next steps:
      1. Rebuild the image: ansible-playbook -i inventory/production.yml playbooks/build-initial-image.yml --vault-password-file secrets/.vault_pass -e "build_no_cache=true"
      2. Check if docker/entrypoint.sh exists on the server: ls -la /home/deploy/michaelschiemer/docker/entrypoint.sh
      3. Manually check the image: docker run --rm --entrypoint=/bin/sh localhost:5000/framework:latest -c "ls -la /usr/local/bin/entrypoint.sh"
      {% elif entrypoint_check.rc == 0 %}
      ✅ Entrypoint script found in image
      File details: {{ entrypoint_check.stdout }}
      {% if '\r' in entrypoint_extract.stdout %}
      ⚠️ CRITICAL: Entrypoint script has CRLF line endings!
      The script contains \r characters, which will cause "no such file or directory" errors.
      The script needs to be converted to LF line endings before building the image.
      {% endif %}
      {% else %}
      ⚠️ Could not verify entrypoint script (check may have failed)
      {% endif %}
      ==========================================
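# Remediation hint (illustration; the sed invocation is a standard way to strip
# carriage returns): if the CRLF warning above fires, converting the script
# before the image build, e.g. `sed -i 's/\r$//' docker/entrypoint.sh`, avoids
# the "no such file or directory" exec error.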

- name: Display entrypoint script verification result
  ansible.builtin.debug:
    var: entrypoint_verification_msg

- name: Deploy application stack
  community.docker.docker_compose_v2:
    project_src: "{{ application_stack_dest }}"
    files:
      - docker-compose.base.yml
      - "docker-compose.{{ application_compose_suffix }}"
    state: present
    pull: always
    recreate: "{{ application_compose_recreate }}"
    remove_orphans: "{{ application_remove_orphans | bool }}"
  register: application_compose_result
  failed_when: false
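# Rough CLI equivalent of the module call above (illustration only):
#   docker compose -f docker-compose.base.yml -f docker-compose.<suffix> up -d --pull always
# run from {{ application_stack_dest }}, with the recreate strategy driven by
# application_compose_recreate ("auto" by default).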

- name: Show PHP container logs if deployment failed
  ansible.builtin.shell: |
    docker compose -f {{ application_stack_dest }}/docker-compose.base.yml -f {{ application_stack_dest }}/docker-compose.{{ application_compose_suffix }} logs --tail=50 {{ application_service_name }} 2>&1 || true
  register: application_php_logs
  changed_when: false
  when: application_compose_result.failed | default(false)

- name: Display PHP container logs on failure
  ansible.builtin.debug:
    msg: |
      PHP Container Logs (last 50 lines):
      {{ application_php_logs.stdout | default('No logs available') }}
  when: application_compose_result.failed | default(false)

- name: Fail if deployment failed
  ansible.builtin.fail:
    msg: "Application stack deployment failed. Check the logs above for details."
  when: application_compose_result.failed | default(false)

- name: Wait for application container to report Up
  ansible.builtin.shell: |
    docker compose -f {{ application_stack_dest }}/docker-compose.base.yml -f {{ application_stack_dest }}/docker-compose.{{ application_compose_suffix }} ps {{ application_service_name }} | grep -Eiq "Up|running"
  register: application_app_running
  changed_when: false
  until: application_app_running.rc == 0
  retries: "{{ ((application_wait_timeout | int) + (application_wait_interval | int) - 1) // (application_wait_interval | int) }}"
  delay: "{{ application_wait_interval | int }}"
  when: application_compose_result.changed
  failed_when: false
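# The retries expression above is ceiling division (worked example): with the
# role defaults application_wait_timeout=60 and application_wait_interval=5,
# it evaluates to (60 + 5 - 1) // 5 = 12 attempts, 5 seconds apart.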

- name: Show container status when container doesn't start
  ansible.builtin.shell: |
    docker compose -f {{ application_stack_dest }}/docker-compose.base.yml -f {{ application_stack_dest }}/docker-compose.{{ application_compose_suffix }} ps {{ application_service_name }}
  register: application_container_status
  changed_when: false
  when:
    - application_compose_result.changed
    - application_app_running.rc != 0

- name: Show PHP container logs when container doesn't start
  ansible.builtin.shell: |
    docker compose -f {{ application_stack_dest }}/docker-compose.base.yml -f {{ application_stack_dest }}/docker-compose.{{ application_compose_suffix }} logs --tail=100 {{ application_service_name }} 2>&1 || true
  register: application_php_logs_failed
  changed_when: false
  when:
    - application_compose_result.changed
    - application_app_running.rc != 0

- name: Display container status and logs when startup failed
  ansible.builtin.debug:
    msg: |
      Container Status:
      {{ application_container_status.stdout | default('Container not found') }}

      Container Logs (last 100 lines):
      {{ application_php_logs_failed.stdout | default('No logs available') }}
  when:
    - application_compose_result.changed
    - application_app_running.rc != 0

- name: Fail if container didn't start
  ansible.builtin.fail:
    msg: |
      Application container '{{ application_service_name }}' failed to start.
      Check the logs above for details.
      You can also check manually with:
        docker compose -f {{ application_stack_dest }}/docker-compose.base.yml -f {{ application_stack_dest }}/docker-compose.{{ application_compose_suffix }} logs {{ application_service_name }}
  when:
    - application_compose_result.changed
    - application_app_running.rc != 0

- name: Ensure app container is running before migrations
  ansible.builtin.shell: |
    docker compose -f {{ application_stack_dest }}/docker-compose.base.yml -f {{ application_stack_dest }}/docker-compose.{{ application_compose_suffix }} ps {{ application_service_name }} | grep -Eiq "Up|running"
  args:
    executable: /bin/bash
  register: application_app_container_running
  changed_when: false
  failed_when: false
  when: application_compose_result.changed

- name: Run database migrations
  ansible.builtin.shell: |
    docker compose -f {{ application_stack_dest }}/docker-compose.base.yml -f {{ application_stack_dest }}/docker-compose.{{ application_compose_suffix }} exec -T {{ application_service_name }} {{ application_migration_command }}
  args:
    executable: /bin/bash
  register: application_migration_result
  changed_when: true
  failed_when: false
  ignore_errors: yes
  when:
    - application_run_migrations
    - application_compose_result.changed
    - application_app_container_running.rc == 0

- name: Collect application container status
  ansible.builtin.shell: docker compose -f {{ application_stack_dest }}/docker-compose.base.yml -f {{ application_stack_dest }}/docker-compose.{{ application_compose_suffix }} ps
  register: application_ps
  changed_when: false
  ignore_errors: yes

- name: Perform application health check
  ansible.builtin.uri:
    url: "{{ application_healthcheck_url }}"
    method: GET
    validate_certs: no
    status_code: [200, 404, 502, 503]
    timeout: 10
  register: application_healthcheck_result
  ignore_errors: yes
  when:
    - application_healthcheck_url | length > 0
    - application_compose_result.changed

- name: Set application role summary facts
  ansible.builtin.set_fact:
    application_stack_changed: "{{ application_compose_result.changed | default(false) }}"
    application_health_output: "{{ application_ps.stdout | default('') }}"
    application_healthcheck_status: "{{ application_healthcheck_result.status | default('unknown') }}"
    application_migration_stdout: "{{ application_migration_result.stdout | default('') }}"
@@ -0,0 +1,236 @@
---
# Deploy Application Code via Git or Rsync

- name: Set git_repo_url from provided value or default
  ansible.builtin.set_fact:
    git_repo_url: "{{ application_git_repository_url if (application_git_repository_url is defined and application_git_repository_url != '') else application_git_repository_url_default }}"

- name: Determine deployment method
  ansible.builtin.set_fact:
    deployment_method: "{{ application_deployment_method | default('git') }}"

- name: Ensure Git is installed (for Git deployment)
  ansible.builtin.apt:
    name: git
    state: present
    update_cache: no
  become: yes
  when: deployment_method == 'git'

- name: Ensure application code directory exists
  ansible.builtin.file:
    path: "{{ application_code_dest }}"
    state: directory
    owner: "{{ ansible_user }}"
    group: "{{ ansible_user }}"
    mode: '0755'
  become: yes

# Git Deployment Tasks
- name: Check if repository already exists (Git)
  ansible.builtin.stat:
    path: "{{ application_code_dest }}/.git"
  register: git_repo_exists
  when: deployment_method == 'git'

- name: Check if destination directory exists (Git)
  ansible.builtin.stat:
    path: "{{ application_code_dest }}"
  register: dest_dir_exists
  when: deployment_method == 'git'

- name: Remove destination directory if it exists but is not a git repo (Git)
  ansible.builtin.file:
    path: "{{ application_code_dest }}"
    state: absent
  when:
    - deployment_method == 'git'
    - dest_dir_exists.stat.exists
    - not git_repo_exists.stat.exists
  become: yes

- name: Clone repository (if not exists) (Git)
  ansible.builtin.git:
    repo: "{{ git_repo_url }}"
    dest: "{{ application_code_dest }}"
    version: "{{ application_git_branch }}"
    force: no
    update: no
  when:
    - deployment_method == 'git'
    - not git_repo_exists.stat.exists
  environment:
    GIT_TERMINAL_PROMPT: "0"
  vars:
    ansible_become: no
  register: git_clone_result
  retries: "{{ application_git_retries | default(5) }}"
  delay: "{{ application_git_retry_delay | default(10) }}"
  until: git_clone_result is succeeded
  ignore_errors: yes

- name: Fail if git clone failed after retries (Git)
  ansible.builtin.fail:
    msg: "Failed to clone repository after {{ application_git_retries | default(5) }} retries. Gitea may be unreachable or overloaded. Last error: {{ git_clone_result.msg | default('Unknown error') }}"
  when:
    - deployment_method == 'git'
    - not git_repo_exists.stat.exists
    - git_clone_result is failed

- name: Check if repository is already on the correct branch (Git)
  ansible.builtin.shell: |
    cd {{ application_code_dest }}
    CURRENT_BRANCH=$(git rev-parse --abbrev-ref HEAD 2>/dev/null || echo "")
    TARGET_BRANCH="{{ application_git_branch | default('main') }}"
    if [ "$CURRENT_BRANCH" = "$TARGET_BRANCH" ] || [ "$CURRENT_BRANCH" = "HEAD" ]; then
      echo "ALREADY_ON_BRANCH"
    else
      echo "NEEDS_UPDATE"
    fi
  register: git_branch_check
  changed_when: false
  failed_when: false
  when:
    - deployment_method == 'git'
    - git_repo_exists.stat.exists
    - application_skip_git_update | default(false) | bool == false

- name: Update repository (if it exists and is not already on the correct branch) (Git)
  ansible.builtin.git:
    repo: "{{ git_repo_url }}"
    dest: "{{ application_code_dest }}"
    version: "{{ application_git_branch }}"
    force: yes
    update: yes
  when:
    - deployment_method == 'git'
    - git_repo_exists.stat.exists
    - application_skip_git_update | default(false) | bool == false
    - git_branch_check.stdout | default('NEEDS_UPDATE') == 'NEEDS_UPDATE'
  environment:
    GIT_TERMINAL_PROMPT: "0"
  vars:
    ansible_become: no
  register: git_update_result
  retries: "{{ application_git_retries | default(5) }}"
  delay: "{{ application_git_retry_delay | default(10) }}"
  until: git_update_result is succeeded
  ignore_errors: yes

- name: Skip git update (repository already on correct branch or skip flag set)
  ansible.builtin.debug:
    msg: "Skipping git update - repository already on the correct branch or skip_git_update is set"
  when:
    - deployment_method == 'git'
    - git_repo_exists.stat.exists
    - (application_skip_git_update | default(false) | bool == true) or (git_branch_check.stdout | default('NEEDS_UPDATE') == 'ALREADY_ON_BRANCH')

- name: Fail if git update failed after retries (Git)
  ansible.builtin.fail:
    msg: "Failed to update repository after {{ application_git_retries | default(5) }} retries. Gitea may be unreachable or overloaded. Last error: {{ git_update_result.msg | default('Unknown error') }}"
  when:
    - deployment_method == 'git'
    - git_repo_exists.stat.exists
    - application_skip_git_update | default(false) | bool == false
    - git_branch_check.stdout | default('NEEDS_UPDATE') == 'NEEDS_UPDATE'
    - git_update_result is defined
    - git_update_result is failed

- name: Set ownership of repository files (Git)
  ansible.builtin.file:
    path: "{{ application_code_dest }}"
    owner: "{{ ansible_user }}"
    group: "{{ ansible_user }}"
    recurse: yes
  become: yes
  when: deployment_method == 'git'

# Rsync Deployment Tasks
- name: Clear destination directory before sync (Rsync)
  ansible.builtin.shell: |
    # Remove all files and directories except .git (if it exists)
    find {{ application_code_dest }} -mindepth 1 -maxdepth 1 -not -name '.git' -exec rm -rf {} + 2>/dev/null || true
  become: yes
  changed_when: true
  failed_when: false
  register: clear_result
  when: deployment_method == 'rsync'

- name: Display clear status (Rsync)
  ansible.builtin.debug:
    msg: "Cleared destination directory before sync (preserved .git if present)"
  when:
    - deployment_method == 'rsync'
    - clear_result.rc | default(0) == 0
    - application_show_status | default(true) | bool

- name: Synchronize application code from repository root (Rsync)
  ansible.posix.synchronize:
    src: "{{ application_rsync_source }}/"
    dest: "{{ application_code_dest }}/"
    delete: no
    recursive: yes
    rsync_opts: "{{ application_rsync_opts | default(['--chmod=D755,F644', '--exclude=.git', '--exclude=.gitignore', '--exclude=node_modules', '--exclude=vendor', '--exclude=.env', '--exclude=.env.*', '--exclude=*.log', '--exclude=.idea', '--exclude=.vscode', '--exclude=.DS_Store', '--exclude=*.swp', '--exclude=*.swo', '--exclude=*~', '--exclude=.phpunit.result.cache', '--exclude=coverage', '--exclude=.phpunit.cache', '--exclude=public/assets', '--exclude=storage/logs', '--exclude=storage/framework/cache', '--exclude=storage/framework/sessions', '--exclude=storage/framework/views', '--exclude=deployment', '--exclude=docker', '--exclude=.deployment-archive-*', '--exclude=docs', '--exclude=tests']) }}"
  when: deployment_method == 'rsync'
  delegate_to: localhost
  run_once: true

- name: Ensure executable permissions on PHP scripts (Rsync)
  ansible.builtin.file:
    path: "{{ application_code_dest }}/{{ item }}"
    mode: '0755'
  loop: "{{ application_php_scripts | default(['worker.php', 'console.php']) }}"
  when:
    - deployment_method == 'rsync'
    - item is defined
  ignore_errors: yes

- name: Verify critical files exist (Rsync)
  ansible.builtin.stat:
    path: "{{ application_code_dest }}/{{ item }}"
  register: critical_files_check
  loop: "{{ application_critical_files | default(['worker.php', 'console.php', 'composer.json']) }}"
  when: deployment_method == 'rsync'

- name: Display file verification results (Rsync)
  ansible.builtin.debug:
    msg: |
      File Verification:
      {% for result in critical_files_check.results | default([]) %}
      - {{ result.item }}: {{ 'EXISTS' if result.stat.exists else 'MISSING' }}
      {% endfor %}
  when:
    - deployment_method == 'rsync'
    - application_show_status | default(true) | bool
    - critical_files_check is defined

- name: Fail if critical files are missing (Rsync)
  ansible.builtin.fail:
    msg: |
      Critical files are missing after sync:
      {% for result in critical_files_check.results | default([]) %}
      {% if not result.stat.exists %}- {{ result.item }}{% endif %}
      {% endfor %}
  when:
    - deployment_method == 'rsync'
    - critical_files_check is defined
    - critical_files_check.results | selectattr('stat.exists', 'equalto', false) | list | length > 0

- name: Display deployment summary
  ansible.builtin.debug:
    msg: |
      ========================================
      Application Code Deployment Summary
      ========================================
      Method: {{ deployment_method | upper }}
      Destination: {{ application_code_dest }}
      {% if deployment_method == 'git' %}
      Repository: {{ git_repo_url }}
      Branch: {{ application_git_branch }}
      {% elif deployment_method == 'rsync' %}
      Source: {{ application_rsync_source }}
      {% endif %}
      ========================================
  when: application_show_status | default(true) | bool

@@ -0,0 +1,7 @@
---
- name: Synchronize application stack files
  include_tasks: sync.yml
  when: application_sync_files | bool

- name: Deploy application stack
  include_tasks: deploy.yml
@@ -0,0 +1,80 @@
---
# Health Check Tasks

- name: Get container status
  ansible.builtin.shell: |
    cd {{ application_code_dest }}
    docker compose -f docker-compose.base.yml -f docker-compose.{{ application_compose_suffix }} ps {{ application_container_status_services | default('queue-worker web scheduler php') }}
  register: container_status
  changed_when: false

- name: Display container status
  ansible.builtin.debug:
    msg: |
      {{ container_status.stdout }}
  when: application_show_status | default(true) | bool

- name: Get queue-worker logs (last N lines)
  ansible.builtin.shell: |
    cd {{ application_code_dest }}
    docker compose -f docker-compose.base.yml -f docker-compose.{{ application_compose_suffix }} logs --tail={{ application_health_check_logs_tail | default(20) }} queue-worker 2>&1 || true
  register: queue_worker_logs
  changed_when: false

- name: Display queue-worker logs
  ansible.builtin.debug:
    msg: |
      ================
      Queue-Worker Logs:
      ================
      {{ queue_worker_logs.stdout }}
  when: application_show_status | default(true) | bool

- name: Get scheduler logs (last N lines)
  ansible.builtin.shell: |
    cd {{ application_code_dest }}
    docker compose -f docker-compose.base.yml -f docker-compose.{{ application_compose_suffix }} logs --tail={{ application_health_check_logs_tail | default(20) }} scheduler 2>&1 || true
  register: scheduler_logs
  changed_when: false

- name: Display scheduler logs
  ansible.builtin.debug:
    msg: |
      ================
      Scheduler Logs:
      ================
      {{ scheduler_logs.stdout }}
  when: application_show_status | default(true) | bool

- name: Get web container logs (last N lines)
  ansible.builtin.shell: |
    cd {{ application_code_dest }}
    docker compose -f docker-compose.base.yml -f docker-compose.{{ application_compose_suffix }} logs --tail={{ application_health_check_logs_tail | default(20) }} web 2>&1 || true
  register: web_logs
  changed_when: false

- name: Display web container logs
  ansible.builtin.debug:
    msg: |
      ================
      Web Container Logs:
      ================
      {{ web_logs.stdout }}
  when: application_show_status | default(true) | bool

- name: Get all container status (final status check)
  ansible.builtin.shell: |
    cd {{ application_code_dest }}
    docker compose -f docker-compose.base.yml -f docker-compose.{{ application_compose_suffix }} ps
  register: all_containers
  changed_when: false
  when: application_health_check_final | default(false) | bool

- name: Display all container status (final)
  ansible.builtin.debug:
    msg: |
      {{ all_containers.stdout }}
  when:
    - application_health_check_final | default(false) | bool
    - application_show_status | default(true) | bool

@@ -0,0 +1,155 @@
---
# Log Analysis Tasks

- name: Get queue-worker logs
  ansible.builtin.shell: |
    cd {{ application_code_dest }}
    docker compose -f docker-compose.base.yml -f docker-compose.{{ application_compose_suffix }} logs --tail={{ application_logs_tail | default(50) }} queue-worker 2>&1 || true
  register: queue_worker_logs
  changed_when: false

- name: Display queue-worker logs
  ansible.builtin.debug:
    var: queue_worker_logs.stdout_lines
  when: application_show_status | default(true) | bool

- name: Get scheduler logs
  ansible.builtin.shell: |
    cd {{ application_code_dest }}
    docker compose -f docker-compose.base.yml -f docker-compose.{{ application_compose_suffix }} logs --tail={{ application_logs_tail | default(50) }} scheduler 2>&1 || true
  register: scheduler_logs
  changed_when: false

- name: Display scheduler logs
  ansible.builtin.debug:
    var: scheduler_logs.stdout_lines
  when: application_show_status | default(true) | bool

- name: Get web container logs
  ansible.builtin.shell: |
    cd {{ application_code_dest }}
    docker compose -f docker-compose.base.yml -f docker-compose.{{ application_compose_suffix }} logs --tail={{ application_logs_tail | default(50) }} web 2>&1 || true
  register: web_logs
  changed_when: false

- name: Display web container logs
  ansible.builtin.debug:
    var: web_logs.stdout_lines
  when: application_show_status | default(true) | bool

- name: Check if vendor/autoload.php exists in queue-worker container
  ansible.builtin.shell: |
    cd {{ application_code_dest }}
    docker compose -f docker-compose.base.yml -f docker-compose.{{ application_compose_suffix }} exec -T queue-worker test -f /var/www/html/vendor/autoload.php && echo "EXISTS" || echo "MISSING"
  register: queue_worker_vendor_check
  changed_when: false
  failed_when: false
  ignore_errors: yes
  when: application_logs_check_vendor | default(true) | bool

- name: Display queue-worker vendor check
  ansible.builtin.debug:
    msg: "vendor/autoload.php in queue-worker: {{ queue_worker_vendor_check.stdout | default('CHECK_FAILED') }}"
  when:
    - application_logs_check_vendor | default(true) | bool
    - application_show_status | default(true) | bool

- name: Check if vendor/autoload.php exists in scheduler container
  ansible.builtin.shell: |
    cd {{ application_code_dest }}
    docker compose -f docker-compose.base.yml -f docker-compose.{{ application_compose_suffix }} exec -T scheduler test -f /var/www/html/vendor/autoload.php && echo "EXISTS" || echo "MISSING"
  register: scheduler_vendor_check
  changed_when: false
  failed_when: false
  ignore_errors: yes
  when: application_logs_check_vendor | default(true) | bool

- name: Display scheduler vendor check
  ansible.builtin.debug:
    msg: "vendor/autoload.php in scheduler: {{ scheduler_vendor_check.stdout | default('CHECK_FAILED') }}"
  when:
    - application_logs_check_vendor | default(true) | bool
    - application_show_status | default(true) | bool

- name: Check vendor directory permissions on host
  ansible.builtin.shell: |
    ls -la {{ application_code_dest }}/vendor 2>&1 | head -5 || echo "DIRECTORY_NOT_FOUND"
  register: vendor_perms
  changed_when: false
  when: application_logs_check_permissions | default(true) | bool

- name: Display vendor directory permissions
  ansible.builtin.debug:
    msg: |
      Vendor directory permissions on host:
      {{ vendor_perms.stdout }}
  when:
    - application_logs_check_permissions | default(true) | bool
    - application_show_status | default(true) | bool

- name: Check if worker.php exists on host
  ansible.builtin.stat:
    path: "{{ application_code_dest }}/worker.php"
  register: worker_file_host
  when: application_logs_check_files | default(true) | bool

- name: Display worker.php host check result
  ansible.builtin.debug:
    msg: |
      worker.php on host:
      - Exists: {{ worker_file_host.stat.exists | default(false) }}
      {% if worker_file_host.stat.exists %}
      - Path: {{ worker_file_host.stat.path }}
      - Size: {{ worker_file_host.stat.size | default(0) }} bytes
      {% endif %}
  when:
    - application_logs_check_files | default(true) | bool
    - application_show_status | default(true) | bool

- name: Check if console.php exists on host
  ansible.builtin.stat:
    path: "{{ application_code_dest }}/console.php"
  register: console_file_host
  when: application_logs_check_files | default(true) | bool

- name: Display console.php host check result
  ansible.builtin.debug:
    msg: |
      console.php on host:
      - Exists: {{ console_file_host.stat.exists | default(false) }}
      {% if console_file_host.stat.exists %}
      - Path: {{ console_file_host.stat.path }}
      - Size: {{ console_file_host.stat.size | default(0) }} bytes
      {% endif %}
  when:
    - application_logs_check_files | default(true) | bool
    - application_show_status | default(true) | bool

- name: List files in application directory
  ansible.builtin.shell: |
    ls -la {{ application_code_dest }}/ | head -20
  register: app_dir_listing
  changed_when: false
  when: application_logs_list_files | default(false) | bool

- name: Display application directory listing
  ansible.builtin.debug:
    var: app_dir_listing.stdout_lines
  when:
    - application_logs_list_files | default(false) | bool
    - application_show_status | default(true) | bool

- name: Check what PHP files exist in application directory
  ansible.builtin.shell: |
    find {{ application_code_dest }} -maxdepth 1 -name "*.php" -type f 2>/dev/null | head -20
  register: php_files
  changed_when: false
  when: application_logs_list_files | default(false) | bool

- name: Display PHP files found
  ansible.builtin.debug:
    var: php_files.stdout_lines
  when:
    - application_logs_list_files | default(false) | bool
    - application_show_status | default(true) | bool

@@ -0,0 +1,338 @@
---
- name: Ensure application stack destination directory exists
  file:
    path: "{{ application_stack_dest }}"
    state: directory
    mode: '0755'

- name: Ensure secrets directory exists for Docker Compose secrets
  file:
    path: "{{ application_stack_dest }}/secrets"
    state: directory
    owner: "{{ ansible_user }}"
    group: "{{ ansible_user }}"
    mode: '0700'

- name: Ensure parent directory exists for application code
  file:
    path: "/home/deploy/michaelschiemer"
    state: directory
    owner: "{{ ansible_user }}"
    group: "{{ ansible_user }}"
    mode: '0755'
  when: application_compose_suffix == 'production.yml'
  become: yes

- name: Ensure application code directory exists
  file:
    path: "/home/deploy/michaelschiemer/current"
    state: directory
    owner: "{{ ansible_user }}"
    group: "{{ ansible_user }}"
    mode: '0755'
  when: application_compose_suffix == 'production.yml'
  become: yes
  ignore_errors: yes

- name: Fix ownership of application code directory if needed
  command: chown -R {{ ansible_user }}:{{ ansible_user }} /home/deploy/michaelschiemer/current
  when:
    - application_compose_suffix == 'production.yml'
    - ansible_check_mode is not defined or not ansible_check_mode
  become: yes
  changed_when: false
  failed_when: false

- name: Check if vault file exists locally
  stat:
    path: "{{ application_vault_file }}"
  delegate_to: localhost
  register: application_vault_stat
  become: no

- name: Optionally load application secrets from vault
  include_vars:
    file: "{{ application_vault_file }}"
  when: application_vault_stat.stat.exists
  no_log: yes
  ignore_errors: yes
  delegate_to: localhost
  become: no

- name: Check if PostgreSQL Production .env exists on target host
  stat:
    path: "{{ stacks_base_path }}/postgresql-production/.env"
  register: application_postgres_production_env_file
  changed_when: false

- name: Check if PostgreSQL Staging .env exists on target host (for staging deployments)
  stat:
    path: "{{ stacks_base_path }}/postgresql-staging/.env"
  register: application_postgres_staging_env_file
  changed_when: false
  when: application_compose_suffix == 'staging.yml'

- name: Extract PostgreSQL Production password from .env file
  shell: "grep '^POSTGRES_PASSWORD=' {{ stacks_base_path }}/postgresql-production/.env 2>/dev/null | cut -d'=' -f2- || echo ''"
  register: application_postgres_production_password
  changed_when: false
  failed_when: false
  when: application_postgres_production_env_file.stat.exists
  no_log: yes

- name: Extract PostgreSQL Staging password from .env file
  shell: "grep '^POSTGRES_PASSWORD=' {{ stacks_base_path }}/postgresql-staging/.env 2>/dev/null | cut -d'=' -f2- || echo ''"
  register: application_postgres_staging_password
  changed_when: false
  failed_when: false
  when:
    - application_compose_suffix == 'staging.yml'
    - application_postgres_staging_env_file.stat.exists
  no_log: yes

- name: "Fallback: Check if legacy PostgreSQL .env exists on target host"
  stat:
    path: "{{ stacks_base_path }}/postgresql/.env"
  register: application_postgres_env_file
  changed_when: false
  when: not (application_postgres_production_env_file.stat.exists | default(false))

- name: "Fallback: Extract PostgreSQL password from legacy .env file"
  shell: "grep '^POSTGRES_PASSWORD=' {{ stacks_base_path }}/postgresql/.env 2>/dev/null | cut -d'=' -f2- || echo ''"
  register: application_postgres_password
  changed_when: false
  failed_when: false
  when:
    - not (application_postgres_production_env_file.stat.exists | default(false))
    - application_postgres_env_file.stat.exists
  no_log: yes
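# Password resolution order (sketch): the environment-specific stack .env
# wins, then the legacy stack .env, then vault_db_root_password; the random
# lookup below is only a last-resort throwaway so the render never fails.
# Worked example (value illustrative): with postgresql-staging/.env containing
# POSTGRES_PASSWORD=s3cret, a staging run resolves application_db_password
# to "s3cret".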
- name: Determine application database password
  set_fact:
    application_db_password: >-
      {%- if application_compose_suffix == 'staging.yml' -%}
      {{ (application_postgres_staging_env_file.stat.exists | default(false) and application_postgres_staging_password.stdout | default('') != '') |
         ternary(application_postgres_staging_password.stdout,
                 (application_postgres_env_file.stat.exists | default(false) and application_postgres_password.stdout | default('') != '') |
                 ternary(application_postgres_password.stdout,
                         vault_db_root_password | default(lookup('password', '/dev/null length=32 chars=ascii_letters,digits,punctuation')))) }}
      {%- else -%}
      {{ (application_postgres_production_env_file.stat.exists | default(false) and application_postgres_production_password.stdout | default('') != '') |
         ternary(application_postgres_production_password.stdout,
                 (application_postgres_env_file.stat.exists | default(false) and application_postgres_password.stdout | default('') != '') |
                 ternary(application_postgres_password.stdout,
                         vault_db_root_password | default(lookup('password', '/dev/null length=32 chars=ascii_letters,digits,punctuation')))) }}
      {%- endif -%}
  no_log: yes

- name: Determine application redis password
  set_fact:
    application_redis_password: "{{ redis_password | default(vault_redis_password | default('')) }}"
  no_log: yes

- name: Ensure redis password provided via vault
  fail:
    msg: >-
      Redis credentials are missing. Define vault_redis_password in
      {{ application_vault_file }} (encrypted with ansible-vault) or pass
      redis_password via extra vars.
  when: (application_redis_password | string | trim) == ''

- name: Determine application app key
  set_fact:
    application_app_key: "{{ app_key | default(vault_app_key | default('')) }}"
  no_log: yes

- name: Ensure application app key provided via vault
  fail:
    msg: >-
      Application key missing. Define vault_app_key in
      {{ application_vault_file }} (ansible-vault) or pass app_key via extra vars.
  when: (application_app_key | string | trim) == ''

- name: Determine encryption key (optional)
  set_fact:
    application_encryption_key: "{{ encryption_key | default(vault_encryption_key | default('')) }}"
  no_log: yes
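# Worked example (paths illustrative): with playbook_dir =
# /home/deploy/deployment/ansible/playbooks, the three dirname filters below
# strip three components, yielding project_root = /home/deploy.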
- name: Determine project root directory
  set_fact:
    project_root: "{{ playbook_dir | default(role_path + '/..') | dirname | dirname | dirname }}"
  changed_when: false

- name: Check if application docker-compose.base.yml source exists locally (in project root)
  stat:
    path: "{{ project_root }}/docker-compose.base.yml"
  delegate_to: localhost
  register: application_compose_base_src
  become: no

- name: Check if application docker-compose override file exists locally (production or staging)
  stat:
    path: "{{ project_root }}/docker-compose.{{ application_compose_suffix }}"
  delegate_to: localhost
  register: application_compose_override_src
  become: no

- name: Check if production-base.yml exists (preferred for production/staging)
  stat:
    path: "{{ project_root }}/docker-compose.production-base.yml"
  delegate_to: localhost
  register: application_compose_production_base_src
  become: no

- name: Copy application docker-compose.production-base.yml to target host (production/staging)
  copy:
    src: "{{ project_root }}/docker-compose.production-base.yml"
    dest: "{{ application_stack_dest }}/docker-compose.base.yml"
    owner: "{{ ansible_user }}"
    group: "{{ ansible_user }}"
    mode: '0644'
  when: application_compose_production_base_src.stat.exists

- name: Copy application docker-compose.base.yml to target host (fallback)
  copy:
    src: "{{ project_root }}/docker-compose.base.yml"
    dest: "{{ application_stack_dest }}/docker-compose.base.yml"
    owner: "{{ ansible_user }}"
    group: "{{ ansible_user }}"
    mode: '0644'
  when:
    - not application_compose_production_base_src.stat.exists
    - application_compose_base_src.stat.exists

- name: Copy application docker-compose override file to target host (production or staging)
  copy:
    src: "{{ project_root }}/docker-compose.{{ application_compose_suffix }}"
    dest: "{{ application_stack_dest }}/docker-compose.{{ application_compose_suffix }}"
    owner: "{{ ansible_user }}"
    group: "{{ ansible_user }}"
    mode: '0644'
  when: application_compose_override_src.stat.exists

- name: Check if legacy docker-compose.yml exists (fallback)
  stat:
    path: "{{ application_stack_src }}/docker-compose.yml"
  delegate_to: localhost
  register: application_compose_src
  become: no
  when: not (application_compose_base_src.stat.exists | default(false))

- name: Copy application docker-compose.yml to target host (fallback for legacy)
  copy:
    src: "{{ application_stack_src }}/docker-compose.yml"
    dest: "{{ application_stack_dest }}/docker-compose.yml"
    owner: "{{ ansible_user }}"
    group: "{{ ansible_user }}"
    mode: '0644'
  when:
    - application_compose_src is defined
    - application_compose_src.stat.exists | default(false)
    - not (application_compose_base_src.stat.exists | default(false))

- name: Check if nginx configuration exists locally
  stat:
    path: "{{ application_stack_src }}/nginx"
  delegate_to: localhost
  register: application_nginx_src
  become: no

- name: Synchronize nginx configuration
  copy:
    src: "{{ application_stack_src }}/nginx/"
    dest: "{{ application_stack_dest }}/nginx/"
    owner: "{{ ansible_user }}"
    group: "{{ ansible_user }}"
    mode: '0644'
  when: application_nginx_src.stat.exists

- name: Debug - Check available variables before set_fact
  debug:
    msg:
      - "application_environment: {{ application_environment | default('NOT SET') }}"
      - "app_env: {{ app_env | default('NOT SET') }}"
      - "application_compose_suffix: {{ application_compose_suffix | default('NOT SET') }}"
      - "app_domain (from vars): {{ 'DEFINED' if app_domain is defined else 'NOT SET' }}"
      - "db_user_default: {{ db_user_default | default('NOT SET') }}"
      - "db_name_default: {{ db_name_default | default('NOT SET') }}"
      - "db_host_default: {{ db_host_default | default('NOT SET') }}"
      - "application_db_password: {{ 'SET (length: ' + (application_db_password | default('') | string | length | string) + ')' if (application_db_password | default('') | string | trim) != '' else 'NOT SET' }}"
      - "application_redis_password: {{ 'SET (length: ' + (application_redis_password | default('') | string | length | string) + ')' if (application_redis_password | default('') | string | trim) != '' else 'NOT SET' }}"
      - "application_app_key: {{ 'SET (length: ' + (application_app_key | default('') | string | length | string) + ')' if (application_app_key | default('') | string | trim) != '' else 'NOT SET' }}"
  changed_when: false

- name: Determine application environment for domain resolution
  set_fact:
    _app_env: "{{ app_env | default(application_environment | default('production')) }}"
  no_log: yes

- name: Expose secrets for template rendering (step 1 - basic vars)
  set_fact:
    db_password: "{{ application_db_password | default('') }}"
    redis_password: "{{ application_redis_password | default('') }}"
    app_key: "{{ application_app_key | default('') }}"
    encryption_key: "{{ application_encryption_key | default('') }}"
    app_env: "{{ _app_env }}"
    minio_root_user: "{{ minio_root_user | default('minioadmin') }}"
    minio_root_password: "{{ minio_root_password | default('') }}"
  no_log: yes

- name: Expose secrets for template rendering (step 2 - db vars)
  set_fact:
    db_username: "{{ db_user | default(db_user_default | default('postgres')) }}"
    db_name: "{{ db_name | default(db_name_default | default('michaelschiemer')) }}"
  no_log: yes

- name: Expose secrets for template rendering (step 3 - db_host with conditional)
  set_fact:
    db_host: >-
      {%- if db_host is defined and db_host | string | trim != '' -%}
      {{ db_host }}
      {%- elif db_host_default is defined and db_host_default | string | trim != '' -%}
      {{ db_host_default }}
      {%- elif application_compose_suffix == 'production.yml' -%}
      postgres-production
      {%- elif application_compose_suffix == 'staging.yml' -%}
      postgres-staging
      {%- else -%}
      postgres
      {%- endif -%}
  no_log: yes
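# Example: with neither db_host nor db_host_default set, a production.yml
# deploy resolves db_host above to "postgres-production" and a staging.yml
# deploy to "postgres-staging".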
- name: Expose secrets for template rendering (step 4 - app_domain)
  set_fact:
    app_domain: >-
      {%- if app_domain is defined and app_domain | string | trim != '' -%}
      {{ app_domain }}
      {%- elif _app_env == 'production' -%}
      michaelschiemer.de
      {%- else -%}
      staging.michaelschiemer.de
      {%- endif -%}
  no_log: yes

- name: Render application environment file
  template:
    src: "{{ application_env_template }}"
    dest: "{{ application_stack_dest }}/.env"
    owner: "{{ ansible_user }}"
    group: "{{ ansible_user }}"
    mode: '0600'
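# The task below writes one file per secret. A compose file would consume
# them roughly like this (sketch; service and secret names assumed):
#   secrets:
#     db_user_password:
#       file: ./secrets/db_user_password.txt
#   services:
#     web:
#       secrets:
#         - db_user_password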
- name: Create Docker Compose secret files from determined passwords
  copy:
    content: "{{ item.value }}"
    dest: "{{ application_stack_dest }}/secrets/{{ item.name }}.txt"
    owner: "{{ ansible_user }}"
    group: "{{ ansible_user }}"
    mode: '0600'
  loop:
    - name: db_user_password
      value: "{{ application_db_password }}"
    - name: redis_password
      value: "{{ application_redis_password }}"
    - name: app_key
      value: "{{ application_app_key }}"
    - name: vault_encryption_key
      value: "{{ application_encryption_key | default(application_app_key, true) }}"
  no_log: yes
@@ -0,0 +1,9 @@
---
dns_stack_path: "{{ stacks_base_path }}/dns"
dns_corefile_template: "{{ role_path }}/../../templates/dns-Corefile.j2"
dns_forwarders:
  - 1.1.1.1
  - 8.8.8.8
dns_records:
  - host: "grafana.{{ app_domain }}"
    address: "{{ wireguard_server_ip_default | default('10.8.0.1') }}"
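# For orientation, these defaults might render dns-Corefile.j2 into something
# like the following Corefile (a sketch; the real layout depends on the
# template):
#   .:53 {
#       hosts {
#           10.8.0.1 grafana.michaelschiemer.de
#           fallthrough
#       }
#       forward . 1.1.1.1 8.8.8.8
#   }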
33
deployment/legacy/ansible/ansible/roles/dns/tasks/main.yml
Normal file
@@ -0,0 +1,33 @@
---
- name: Ensure DNS stack directory exists
  file:
    path: "{{ dns_stack_path }}"
    state: directory
    mode: '0755'
  tags:
    - dns

- name: Render CoreDNS configuration
  template:
    src: "{{ dns_corefile_template }}"
    dest: "{{ dns_stack_path }}/Corefile"
    owner: "{{ ansible_user }}"
    group: "{{ ansible_user }}"
    mode: '0644'
  tags:
    - dns

- name: Deploy DNS stack
  community.docker.docker_compose_v2:
    project_src: "{{ dns_stack_path }}"
    state: present
    pull: always
  register: dns_compose_result
  tags:
    - dns

- name: Record DNS deployment facts
  set_fact:
    dns_stack_changed: "{{ dns_compose_result.changed | default(false) }}"
  tags:
    - dns
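# The role expects a docker-compose.yml to already exist at dns_stack_path;
# a minimal sketch of such a file (image, command, and ports assumed):
#   services:
#     coredns:
#       image: coredns/coredns
#       command: ["-conf", "/etc/coredns/Corefile"]
#       volumes:
#         - ./Corefile:/etc/coredns/Corefile:ro
#       ports:
#         - "53:53/udp"
#       restart: unless-stopped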
@@ -0,0 +1,62 @@
---
# Gitea Stack Configuration
gitea_stack_path: "{{ stacks_base_path }}/gitea"
gitea_container_name: "gitea"
gitea_url: "https://{{ gitea_domain | default('git.michaelschiemer.de') }}"
gitea_domain: "{{ gitea_domain | default('git.michaelschiemer.de') }}"

# Wait Configuration
gitea_wait_timeout: "{{ wait_timeout | default(60) }}"
gitea_wait_interval: 5
gitea_restart_wait_timeout: 30
gitea_restart_retries: 30
gitea_restart_delay: 2

# Health Check Configuration
gitea_health_check_timeout: 10
gitea_check_health: true
gitea_show_status: true
gitea_show_logs: true
gitea_logs_tail: 50

# Auto-Restart Configuration
# Set to false to prevent automatic restarts when the healthcheck fails.
# This prevents restart loops when Gitea is only temporarily unavailable.
gitea_auto_restart: true

# Config Update Configuration
gitea_app_ini_path: "{{ gitea_stack_path }}/app.ini"
gitea_app_ini_container_path: "/data/gitea/conf/app.ini"
gitea_app_ini_template: "../../templates/gitea-app.ini.j2"
gitea_config_retries: 30
gitea_config_delay: 2

# Setup Configuration
gitea_admin_username: "{{ vault_gitea_admin_username | default('admin') }}"
gitea_admin_password: "{{ vault_gitea_admin_password | default('') }}"
gitea_admin_email: "{{ vault_gitea_admin_email | default(acme_email) }}"
gitea_force_update_app_ini: false
gitea_setup_health_retries: 30
gitea_setup_health_delay: 5
gitea_setup_db_wait: 10

# Runner Configuration
gitea_runner_path: "{{ runner_path | default('/home/deploy/deployment/gitea-runner') }}"
gitea_runner_container_name: "gitea-runner"
gitea_instance_url: "https://git.michaelschiemer.de"
gitea_runner_action: "fix"  # Options: fix, register
gitea_runner_registration_token: ""
gitea_runner_name: "dev-runner-01"
gitea_runner_labels: "ubuntu-latest:docker://node:16-bullseye,ubuntu-22.04:docker://node:16-bullseye,php-ci:docker://php-ci:latest"
gitea_runner_show_status: true
gitea_runner_wait_seconds: 5

# Repository Configuration
gitea_repo_name: "michaelschiemer"
gitea_repo_owner: "michael"
gitea_repo_private: false
gitea_repo_description: "Main application repository"
gitea_repo_auto_init: false
gitea_configure_git_remote: true
gitea_git_repo_path: "/home/michael/dev/michaelschiemer"
gitea_force_create_repo: false
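# Any of these defaults can be overridden per run with extra vars, e.g.
# (token value illustrative):
#   ansible-playbook -i inventory/production.yml \
#     playbooks/register-gitea-runner.yml \
#     -e "gitea_runner_action=register" \
#     -e "gitea_runner_registration_token=<token>"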
@@ -0,0 +1,17 @@
---
# Handlers for Gitea Role

- name: wait for gitea
  ansible.builtin.uri:
    url: "{{ gitea_url }}/api/healthz"
    method: GET
    status_code: [200]
    validate_certs: false
    timeout: "{{ gitea_health_check_timeout | default(10) }}"
  register: gitea_health_handler
  until: gitea_health_handler.status == 200
  retries: "{{ gitea_restart_retries | default(30) }}"
  delay: "{{ gitea_restart_delay | default(2) }}"
  changed_when: false
  ignore_errors: yes
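# This handler is triggered via `notify: wait for gitea` from tasks that
# restart the container; Ansible runs it once at the end of the play even
# if it is notified several times.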
119
deployment/legacy/ansible/ansible/roles/gitea/tasks/config.yml
Normal file
@@ -0,0 +1,119 @@
---
# Update Gitea Configuration (app.ini)

- name: Verify Gitea container exists
  ansible.builtin.shell: |
    docker compose -f {{ gitea_stack_path }}/docker-compose.yml ps {{ gitea_container_name }} | grep -q "{{ gitea_container_name }}"
  register: gitea_exists
  changed_when: false
  failed_when: false

- name: Fail if Gitea container does not exist
  ansible.builtin.fail:
    msg: "Gitea container does not exist. Please deploy Gitea stack first."
  when: gitea_exists.rc != 0

# Configuration is now read from Ansible variables or defaults.
# Since environment variables are removed, we use the defaults from
# docker-compose.yml (which are hardcoded: POSTGRES_DB=gitea,
# POSTGRES_USER=gitea, POSTGRES_PASSWORD=gitea_password).
- name: Set database configuration (from docker-compose.yml defaults)
  ansible.builtin.set_fact:
    gitea_db_type: "postgres"
    gitea_db_host: "postgres:5432"
    gitea_db_name: "gitea"
    gitea_db_user: "gitea"
    gitea_db_passwd: "gitea_password"

- name: Set server configuration from Ansible variables or defaults
  ansible.builtin.set_fact:
    gitea_domain: "{{ gitea_domain | default('git.michaelschiemer.de') }}"
    ssh_port: "{{ ssh_port | default('2222') }}"
    ssh_listen_port: "{{ ssh_listen_port | default('2222') }}"

- name: Set Redis password for Gitea Redis instance
  ansible.builtin.set_fact:
    # Gitea uses its own Redis instance (gitea-redis) with the default password
    # 'gitea_redis_password' unless vault_gitea_redis_password is explicitly set.
    # Note: vault_redis_password is for the application Redis stack, not Gitea Redis.
    redis_password: "{{ vault_gitea_redis_password | default('gitea_redis_password') }}"

- name: Generate app.ini from template
  ansible.builtin.template:
    src: "{{ gitea_app_ini_template | default('../../templates/gitea-app.ini.j2') }}"
    dest: "{{ gitea_app_ini_path }}"
    mode: '0644'
  vars:
    postgres_db: "{{ gitea_db_name }}"
    postgres_user: "{{ gitea_db_user }}"
    postgres_password: "{{ gitea_db_passwd }}"
    gitea_domain: "{{ gitea_domain }}"
    ssh_port: "{{ ssh_port }}"
    ssh_listen_port: "{{ ssh_listen_port }}"
    disable_registration: true
    redis_password: "{{ redis_password }}"
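# With the vars above, the rendered app.ini would contain, among other
# sections, roughly this (a sketch; the exact keys come from gitea-app.ini.j2):
#   [database]
#   DB_TYPE = postgres
#   HOST    = postgres:5432
#   NAME    = gitea
#   USER    = gitea
#   PASSWD  = gitea_password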
- name: Copy app.ini to Gitea container
  ansible.builtin.shell: |
    docker compose -f {{ gitea_stack_path }}/docker-compose.yml cp {{ gitea_app_ini_path }} {{ gitea_container_name }}:{{ gitea_app_ini_container_path }}
  ignore_errors: yes

- name: Wait for container to be ready for exec
  ansible.builtin.shell: |
    docker compose -f {{ gitea_stack_path }}/docker-compose.yml exec -T {{ gitea_container_name }} true
  register: container_ready
  until: container_ready.rc == 0
  retries: "{{ gitea_config_retries | default(30) }}"
  delay: "{{ gitea_config_delay | default(2) }}"
  changed_when: false

- name: Set correct permissions on app.ini in container
  ansible.builtin.shell: |
    docker compose -f {{ gitea_stack_path }}/docker-compose.yml exec -T --user git {{ gitea_container_name }} chown 1000:1000 {{ gitea_app_ini_container_path }} && \
    docker compose -f {{ gitea_stack_path }}/docker-compose.yml exec -T --user git {{ gitea_container_name }} chmod 644 {{ gitea_app_ini_container_path }}

- name: Restart Gitea container
  ansible.builtin.shell: |
    docker compose -f {{ gitea_stack_path }}/docker-compose.yml restart {{ gitea_container_name }}
  register: gitea_restart
  changed_when: gitea_restart.rc == 0
  notify: wait for gitea

- name: Wait for Gitea to be ready after restart
  ansible.builtin.uri:
    url: "{{ gitea_url }}/api/healthz"
    method: GET
    status_code: [200]
    validate_certs: false
    timeout: "{{ gitea_health_check_timeout | default(10) }}"
  register: gitea_health_after_restart
  until: gitea_health_after_restart.status == 200
  retries: "{{ gitea_restart_retries | default(30) }}"
  delay: "{{ gitea_restart_delay | default(5) }}"
  when: gitea_restart.changed | default(false)
  changed_when: false

- name: Display success message
  ansible.builtin.debug:
    msg: |
      ========================================
      Gitea Configuration Update Complete
      ========================================
      Gitea configuration has been updated successfully!

      Configuration migrated to app.ini:
      - All settings now in app.ini (versioned in Git)
      - Redis cache enabled (now works correctly in app.ini)
      - Redis sessions enabled (better performance and scalability)
      - Redis queue enabled (persistent job processing)
      - Database connection pooling configured
      - Connection limits set to prevent "Timeout before authentication" errors

      Benefits:
      - Cache now works correctly (environment variables had a bug in Gitea 1.25)
      - All settings are versioned and documented
      - Better maintainability and reliability

      Gitea should now be more stable and perform better with Redis cache enabled.
      ========================================
  when: gitea_show_status | default(true) | bool
@@ -0,0 +1,35 @@
---
- name: Deploy Gitea stack
  community.docker.docker_compose_v2:
    project_src: "{{ gitea_stack_path }}"
    state: present
    pull: always
  register: gitea_compose_result
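# The retries value below is a ceiling division of timeout by interval, so
# with the defaults gitea_wait_timeout=60 and gitea_wait_interval=5:
# (60 + 5 - 1) // 5 = 12 polls, 5 seconds apart.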
- name: Check Gitea container status
  shell: |
    docker compose -f {{ gitea_stack_path }}/docker-compose.yml ps gitea | grep -Eiq "Up|running"
  register: gitea_state
  changed_when: false
  until: gitea_state.rc == 0
  retries: "{{ ((gitea_wait_timeout | int) + (gitea_wait_interval | int) - 1) // (gitea_wait_interval | int) }}"
  delay: "{{ gitea_wait_interval | int }}"
  failed_when: gitea_state.rc != 0
  when: not ansible_check_mode

- name: Check Gitea logs for readiness
  shell: docker compose logs gitea 2>&1 | grep -Ei "(Listen:|Server is running|Starting server)" || true
  args:
    chdir: "{{ gitea_stack_path }}"
  register: gitea_logs
  until: gitea_logs.stdout != ""
  retries: 12
  delay: 10
  changed_when: false
  failed_when: false
  when: not ansible_check_mode

- name: Record Gitea deployment facts
  set_fact:
    gitea_stack_changed: "{{ gitea_compose_result.changed | default(false) }}"
    gitea_log_hint: "{{ gitea_logs.stdout | default('') }}"
@@ -0,0 +1,258 @@
---
# Setup Gitea Repository

- name: Set repository variables from parameters
  ansible.builtin.set_fact:
    repo_name: "{{ gitea_repo_name | default('michaelschiemer') }}"
    repo_owner: "{{ gitea_repo_owner | default('michael') }}"
    repo_private: "{{ gitea_repo_private | default(false) | bool }}"
    repo_description: "{{ gitea_repo_description | default('Main application repository') }}"
    repo_auto_init: "{{ gitea_repo_auto_init | default(false) | bool }}"
    configure_git_remote: "{{ gitea_configure_git_remote | default(true) | bool }}"
    git_repo_path: "{{ gitea_git_repo_path | default('/home/michael/dev/michaelschiemer') }}"

- name: Verify Gitea is accessible
  ansible.builtin.uri:
    url: "{{ gitea_url }}"
    method: GET
    status_code: [200, 302, 502]
    validate_certs: false
    timeout: "{{ gitea_health_check_timeout | default(10) }}"
  register: gitea_health
  failed_when: false

- name: Debug Gitea health status
  ansible.builtin.debug:
    msg: "Gitea health check returned status: {{ gitea_health.status }}"
  when: gitea_show_status | default(true) | bool

- name: Fail if Gitea is not accessible
  ansible.builtin.fail:
    msg: "Gitea is not accessible at {{ gitea_url }}. Status: {{ gitea_health.status }}. Please check if Gitea is running."
  when: gitea_health.status not in [200, 302, 502]

- name: Check if API token exists in vault
  ansible.builtin.set_fact:
    has_vault_token: "{{ vault_git_token is defined and vault_git_token | length > 0 }}"
  no_log: true

- name: Get or create Gitea API token
  ansible.builtin.uri:
    url: "{{ gitea_url }}/api/v1/users/{{ gitea_admin_username }}/tokens"
    method: POST
    user: "{{ gitea_admin_username }}"
    password: "{{ gitea_admin_password }}"
    body_format: json
    body:
      name: "ansible-repo-setup-{{ ansible_date_time.epoch }}"
      scopes:
        - write:repository
        - read:repository
        - admin:repo
    status_code: [201, 400, 401, 502]
    validate_certs: false
    force_basic_auth: yes
  register: api_token_result
  failed_when: false
  when: not has_vault_token
  no_log: true

- name: Extract API token from response
  ansible.builtin.set_fact:
    gitea_api_token: "{{ api_token_result.json.sha1 | default('') }}"
  when:
    - not has_vault_token
    - api_token_result.status == 201
    - api_token_result.json.sha1 is defined
  no_log: true

- name: Use existing API token from vault
  ansible.builtin.set_fact:
    gitea_api_token: "{{ vault_git_token }}"
  when: has_vault_token
  no_log: true

- name: Set flag to use basic auth if token creation failed
  ansible.builtin.set_fact:
    use_basic_auth: "{{ gitea_api_token | default('') | length == 0 }}"
  no_log: true

- name: Fail if no authentication method available
  ansible.builtin.fail:
    msg: "Could not create or retrieve Gitea API token, and admin credentials are not available. Please create a token manually or set vault_git_token in vault."
  when:
    - use_basic_auth | bool
    - gitea_admin_password | default('') | length == 0

- name: Initialize repo_check variable
  ansible.builtin.set_fact:
    repo_check: {"status": 0}
  when: repo_check is not defined

- name: Check if repository already exists (with token)
  ansible.builtin.uri:
    url: "{{ gitea_url }}/api/v1/repos/{{ repo_owner }}/{{ repo_name }}"
    method: GET
    headers:
      Authorization: "token {{ gitea_api_token }}"
    status_code: [200, 404, 502]
    validate_certs: false
    timeout: "{{ gitea_health_check_timeout | default(10) }}"
  register: repo_check_token
  when: not use_basic_auth
  failed_when: false

- name: Set repo_check from token result
  ansible.builtin.set_fact:
    repo_check: "{{ repo_check_token }}"
  when:
    - not use_basic_auth
    - repo_check_token is defined

- name: Check if repository already exists (with basic auth)
  ansible.builtin.uri:
    url: "{{ gitea_url }}/api/v1/repos/{{ repo_owner }}/{{ repo_name }}"
    method: GET
    user: "{{ gitea_admin_username }}"
    password: "{{ gitea_admin_password }}"
    status_code: [200, 404, 502]
    validate_certs: false
    force_basic_auth: yes
    timeout: "{{ gitea_health_check_timeout | default(10) }}"
  register: repo_check_basic
  when: use_basic_auth
  failed_when: false
  no_log: true

- name: Set repo_check from basic auth result
  ansible.builtin.set_fact:
    repo_check: "{{ repo_check_basic }}"
  when:
    - use_basic_auth
    - repo_check_basic is defined

- name: Debug repo_check status
  ansible.builtin.debug:
    msg: "Repository check - Status: {{ repo_check.status | default('undefined') }}, use_basic_auth: {{ use_basic_auth | default('undefined') }}"
  when: gitea_show_status | default(true) | bool

- name: Create repository in Gitea (with token)
  ansible.builtin.uri:
    url: "{{ gitea_url }}/api/v1/user/repos"
    method: POST
    headers:
      Authorization: "token {{ gitea_api_token }}"
      Content-Type: "application/json"
    body_format: json
    body:
      name: "{{ repo_name }}"
      description: "{{ repo_description }}"
      private: "{{ repo_private }}"
      auto_init: "{{ repo_auto_init }}"
    status_code: [201, 409, 502]
    validate_certs: false
    timeout: "{{ gitea_health_check_timeout | default(10) }}"
  register: repo_create_result
  when:
    - (repo_check.status | default(0)) in [404, 502, 0] or (gitea_force_create_repo | default(false) | bool)
    - not use_basic_auth
  failed_when: false

- name: Create repository in Gitea (with basic auth)
  ansible.builtin.uri:
    url: "{{ gitea_url }}/api/v1/user/repos"
    method: POST
    user: "{{ gitea_admin_username }}"
    password: "{{ gitea_admin_password }}"
    body_format: json
    body:
      name: "{{ repo_name }}"
      description: "{{ repo_description }}"
      private: "{{ repo_private }}"
      auto_init: "{{ repo_auto_init }}"
    status_code: [201, 409]
    validate_certs: false
    force_basic_auth: yes
    timeout: "{{ gitea_health_check_timeout | default(10) }}"
  register: repo_create_result
  when:
    - ((repo_check.status | default(0)) != 200) or (gitea_force_create_repo | default(false) | bool)
    - use_basic_auth
  no_log: true

- name: Debug repository creation result
  ansible.builtin.debug:
    msg: "Repository creation - Status: {{ repo_create_result.status | default('undefined') }}, Response: {{ repo_create_result.json | default('no json') }}"
  when:
    - repo_create_result is defined
    - gitea_show_status | default(true) | bool
  failed_when: false

- name: Display repository creation result
  ansible.builtin.debug:
    msg: "Repository {{ repo_owner }}/{{ repo_name }} already exists or was created successfully"
  when: repo_check.status | default(0) == 200 or (repo_create_result is defined and repo_create_result.status | default(0) == 201)

- name: Get repository clone URL
  ansible.builtin.set_fact:
    repo_clone_url: "{{ gitea_url | replace('https://', '') | replace('http://', '') }}/{{ repo_owner }}/{{ repo_name }}.git"
    repo_https_url: "https://{{ gitea_admin_username }}:{{ gitea_api_token }}@{{ gitea_url | replace('https://', '') | replace('http://', '') }}/{{ repo_owner }}/{{ repo_name }}.git"

- name: Check if Git repository exists locally
  ansible.builtin.stat:
    path: "{{ git_repo_path }}/.git"
  register: git_repo_exists
  when: configure_git_remote | bool
  delegate_to: localhost
  run_once: true

- name: Configure Git remote (local)
  ansible.builtin.command: >
    git remote set-url origin {{ repo_clone_url }}
  args:
    chdir: "{{ git_repo_path }}"
  register: git_remote_result
  when:
    - configure_git_remote | bool
    - git_repo_path is defined
    - git_repo_exists.stat.exists
  delegate_to: localhost
  run_once: true
  changed_when: git_remote_result.rc == 0
  failed_when: false

- name: Add Git remote if it doesn't exist
  ansible.builtin.command: >
    git remote add origin {{ repo_clone_url }}
  args:
    chdir: "{{ git_repo_path }}"
  register: git_remote_add_result
  when:
    - configure_git_remote | bool
    - git_repo_path is defined
    - git_repo_exists.stat.exists
    - git_remote_result.rc != 0
  delegate_to: localhost
  run_once: true
  changed_when: git_remote_add_result.rc == 0
  failed_when: false

- name: Display success message
  ansible.builtin.debug:
    msg:
      - "========================================"
      - "✅ Repository created successfully!"
      - "========================================"
      - "Repository URL: {{ gitea_url }}/{{ repo_owner }}/{{ repo_name }}"
      - "Clone URL: {{ repo_clone_url }}"
      - ""
      - "Next steps:"
      - "1. Push your code: git push -u origin staging"
      - "2. Monitor pipeline: {{ gitea_url }}/{{ repo_owner }}/{{ repo_name }}/actions"
      - ""
      - "Note: If you need to push, you may need to authenticate with:"
      - "  Username: {{ gitea_admin_username }}"
      - "  Password: (use vault_gitea_admin_password or create a Personal Access Token)"
      - "========================================"
  when: gitea_show_status | default(true) | bool
123
deployment/legacy/ansible/ansible/roles/gitea/tasks/restart.yml
Normal file
@@ -0,0 +1,123 @@
---
# Check and Restart Gitea if Unhealthy

- name: Check if Gitea stack directory exists
  ansible.builtin.stat:
    path: "{{ gitea_stack_path }}"
  register: gitea_stack_exists

- name: Fail if Gitea stack directory does not exist
  ansible.builtin.fail:
    msg: "Gitea stack directory not found at {{ gitea_stack_path }}"
  when: not gitea_stack_exists.stat.exists

- name: Check Gitea container status
  ansible.builtin.shell: |
    cd {{ gitea_stack_path }}
    docker compose ps {{ gitea_container_name }} --format json
  register: gitea_container_status
  changed_when: false
  failed_when: false

- name: Display Gitea container status
  ansible.builtin.debug:
    msg: |
      ================================================================================
      Gitea Container Status:
      {{ gitea_container_status.stdout | default('Container not found or error') }}
      ================================================================================
  when: gitea_show_status | default(true) | bool

- name: Check Gitea health endpoint
  ansible.builtin.uri:
    url: "{{ gitea_url }}/api/healthz"
    method: GET
    status_code: [200]
    validate_certs: false
    timeout: "{{ gitea_health_check_timeout | default(10) }}"
  register: gitea_health
  ignore_errors: yes
  changed_when: false

- name: Display Gitea health check result
  ansible.builtin.debug:
    msg: |
      ================================
      Gitea Health Check:
      - Status Code: {{ gitea_health.status | default('UNREACHABLE') }}
      - Response Time: {{ gitea_health.elapsed | default('N/A') }}s
      - Status: {% if gitea_health.status | default(0) == 200 %}✅ HEALTHY{% else %}❌ UNHEALTHY or TIMEOUT{% endif %}
      ================================
  when: gitea_show_status | default(true) | bool

- name: Get Gitea container logs
  ansible.builtin.shell: |
    cd {{ gitea_stack_path }}
    docker compose logs --tail={{ gitea_logs_tail | default(50) }} {{ gitea_container_name }} 2>&1 || echo "LOGS_NOT_AVAILABLE"
  register: gitea_logs
  changed_when: false
  failed_when: false

- name: Display Gitea container logs
  ansible.builtin.debug:
    msg: |
      ================================================================================
      Gitea Container Logs (last {{ gitea_logs_tail | default(50) }} lines):
      {{ gitea_logs.stdout | default('No logs available') }}
      ================================================================================
  when: gitea_show_logs | default(true) | bool
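# `docker compose ps --format json` emits one JSON object per container, e.g.
#   {"Name":"gitea","State":"running",...}
# so the plain substring test below on State":"running is sufficient here.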
- name: Check if Gitea container is running
  ansible.builtin.set_fact:
    gitea_is_running: "{{ 'State\":\"running' in (gitea_container_status.stdout | default('')) }}"

- name: Check if Gitea is healthy
  ansible.builtin.set_fact:
    gitea_is_healthy: "{{ (gitea_health.status | default(0)) == 200 }}"

- name: Restart Gitea container if unhealthy or not running
  ansible.builtin.shell: |
    cd {{ gitea_stack_path }}
    docker compose restart {{ gitea_container_name }}
  when:
    - (not gitea_is_healthy | bool or not gitea_is_running | bool)
    - gitea_auto_restart | default(true) | bool
  register: gitea_restart
  changed_when: gitea_restart.rc == 0
  notify: wait for gitea

- name: Wait for Gitea to be ready after restart
  ansible.builtin.uri:
    url: "{{ gitea_url }}/api/healthz"
    method: GET
    status_code: [200]
    validate_certs: false
    timeout: "{{ gitea_health_check_timeout | default(10) }}"
  register: gitea_health_after_restart
  until: gitea_health_after_restart.status == 200
  retries: "{{ gitea_restart_retries | default(30) }}"
  delay: "{{ gitea_restart_delay | default(2) }}"
  when: gitea_restart.changed | default(false)
  ignore_errors: yes
  changed_when: false

- name: Display final status
  ansible.builtin.debug:
    msg: |
      ========================================
      Gitea Status Summary
      ========================================
      Container Running: {% if gitea_is_running | bool %}✅ YES{% else %}❌ NO{% endif %}
      Health Check: {% if gitea_health_after_restart.status | default(0) == 200 %}✅ HEALTHY{% elif gitea_is_healthy | bool %}✅ HEALTHY{% else %}❌ UNHEALTHY{% endif %}
      Action Taken: {% if gitea_restart.changed | default(false) %}🔄 Container restarted{% else %}ℹ️ No restart needed{% endif %}
      Final Status: {% if gitea_is_running | bool and (gitea_health_after_restart.status | default(0) == 200 or gitea_is_healthy | bool) %}✅ HEALTHY{% else %}❌ STILL UNHEALTHY{% endif %}
      ========================================
      {% if gitea_is_running | bool and (gitea_health_after_restart.status | default(0) == 200 or gitea_is_healthy | bool) %}
      ✅ Gitea is now accessible and healthy!
      {% else %}
      ❌ Gitea is still not fully healthy. Manual intervention may be required.
      {% endif %}
      ========================================
  when: gitea_show_status | default(true) | bool
329
deployment/legacy/ansible/ansible/roles/gitea/tasks/runner.yml
Normal file
@@ -0,0 +1,329 @@
---
# Gitea Runner Management Tasks
# Supports both fix (diagnose) and register actions

- name: Check if Gitea runner directory exists
  ansible.builtin.stat:
    path: "{{ gitea_runner_path }}"
  register: runner_dir_exists

- name: Fail if runner directory does not exist
  ansible.builtin.fail:
    msg: "Gitea runner directory not found at {{ gitea_runner_path }}"
  when: not runner_dir_exists.stat.exists

- name: Check if runner container is running
  ansible.builtin.shell: |
    docker ps --format json | jq -r 'select(.Names == "{{ gitea_runner_container_name }}") | .State'
  register: runner_container_state
  changed_when: false
  failed_when: false

- name: Display runner container status
  ansible.builtin.debug:
    msg: |
      Runner Container Status: {{ runner_container_state.stdout | default('NOT RUNNING') }}
  when: gitea_runner_show_status | default(true) | bool

- name: Check if .runner file exists
  ansible.builtin.stat:
    path: "{{ gitea_runner_path }}/data/.runner"
  register: runner_file_exists

- name: Read .runner file content (if exists)
  ansible.builtin.slurp:
    src: "{{ gitea_runner_path }}/data/.runner"
  register: runner_file_content
  when: runner_file_exists.stat.exists
  changed_when: false

- name: Display .runner file content
  ansible.builtin.debug:
    msg: |
      Runner Registration File Content:
      {{ runner_file_content.content | b64decode | default('File not found') }}
  when:
    - runner_file_exists.stat.exists
    - gitea_runner_show_status | default(true) | bool

- name: Check for GitHub URLs in .runner file
  ansible.builtin.shell: |
    grep -i "github.com" "{{ gitea_runner_path }}/data/.runner" 2>/dev/null || echo "NO_GITHUB_URLS"
  register: github_urls_check
  when: runner_file_exists.stat.exists
  changed_when: false
  failed_when: false

- name: Display GitHub URLs check result
  ansible.builtin.debug:
    msg: |
      GitHub URLs in .runner file: {{ github_urls_check.stdout }}
  when: gitea_runner_show_status | default(true) | bool

- name: Check runner logs for incorrect URLs
  ansible.builtin.shell: |
    docker logs {{ gitea_runner_container_name }} --tail=100 2>&1 | grep -E "(github.com|instance|repo)" || echo "NO_MATCHES"
  register: runner_logs_check
  changed_when: false
  failed_when: false

- name: Display runner logs analysis
  ansible.builtin.debug:
    msg: |
      Runner Logs Analysis (last 100 lines):
      {{ runner_logs_check.stdout }}
  when: gitea_runner_show_status | default(true) | bool

- name: Check .env file for GITEA_INSTANCE_URL
  ansible.builtin.shell: |
    grep "^GITEA_INSTANCE_URL=" "{{ gitea_runner_path }}/.env" 2>/dev/null || echo "NOT_FOUND"
  register: env_instance_url
  changed_when: false
  failed_when: false

- name: Display GITEA_INSTANCE_URL from .env
  ansible.builtin.debug:
    msg: |
      GITEA_INSTANCE_URL in .env: {{ env_instance_url.stdout }}
  when: gitea_runner_show_status | default(true) | bool

- name: Check if .env has correct Gitea URL
  ansible.builtin.set_fact:
    env_has_correct_url: "{{ env_instance_url.stdout is defined and gitea_instance_url in env_instance_url.stdout }}"

- name: Check if runner needs re-registration (for fix action)
  ansible.builtin.set_fact:
    runner_needs_reregistration: >-
      {%- if not runner_file_exists.stat.exists -%}
      true
      {%- elif 'github.com' in (github_urls_check.stdout | default('')) -%}
      true
      {%- elif not env_has_correct_url -%}
      true
      {%- else -%}
      false
      {%- endif -%}
  when: gitea_runner_action | default('fix') == 'fix'

- name: Display re-registration decision
  ansible.builtin.debug:
    msg: |
      Runner Re-registration Needed: {{ runner_needs_reregistration | bool }}

      Reasons:
      - Runner file exists: {{ runner_file_exists.stat.exists }}
      - Contains GitHub URLs: {{ 'github.com' in (github_urls_check.stdout | default('')) }}
      - .env has correct URL: {{ env_has_correct_url | bool }}
  when:
    - gitea_runner_action | default('fix') == 'fix'
    - gitea_runner_show_status | default(true) | bool

- name: Fail if registration token is not provided (for register action)
  ansible.builtin.fail:
    msg: "gitea_runner_registration_token must be provided via -e 'gitea_runner_registration_token=<token>'"
  when:
    - gitea_runner_action | default('fix') == 'register'
    - gitea_runner_registration_token | string | trim == ''

- name: Stop runner container before re-registration (fix action)
  ansible.builtin.shell: |
    cd {{ gitea_runner_path }}
    docker compose stop {{ gitea_runner_container_name }}
  when:
    - gitea_runner_action | default('fix') == 'fix'
    - runner_needs_reregistration | bool
  register: stop_runner
  changed_when: stop_runner.rc == 0

- name: Stop runner container if running (register action)
  ansible.builtin.shell: |
    cd {{ gitea_runner_path }}
    docker compose stop {{ gitea_runner_container_name }}
  when: gitea_runner_action | default('fix') == 'register'
  register: stop_result
  changed_when: stop_result.rc == 0
  failed_when: false

- name: Backup existing .runner file
  ansible.builtin.copy:
    src: "{{ gitea_runner_path }}/data/.runner"
    dest: "{{ gitea_runner_path }}/data/.runner.backup.{{ ansible_date_time.epoch }}"
    remote_src: yes
  when:
    - runner_file_exists.stat.exists
    - (gitea_runner_action | default('fix') == 'register') or (runner_needs_reregistration | bool)
  ignore_errors: yes

- name: Remove existing .runner file
  ansible.builtin.file:
    path: "{{ gitea_runner_path }}/data/.runner"
    state: absent
  when:
    - (gitea_runner_action | default('fix') == 'register') or (runner_needs_reregistration | bool)

- name: Update .env file with correct GITEA_INSTANCE_URL (fix action)
  ansible.builtin.lineinfile:
    path: "{{ gitea_runner_path }}/.env"
    regexp: '^GITEA_INSTANCE_URL='
    line: "GITEA_INSTANCE_URL={{ gitea_instance_url }}"
    create: yes
  when:
    - gitea_runner_action | default('fix') == 'fix'
    - runner_needs_reregistration | bool
  register: env_updated

- name: Update .env file with correct configuration (register action)
  ansible.builtin.lineinfile:
    path: "{{ gitea_runner_path }}/.env"
    regexp: '^{{ item.key }}='
    line: "{{ item.key }}={{ item.value }}"
    create: yes
  loop:
    - { key: 'GITEA_INSTANCE_URL', value: '{{ gitea_instance_url }}' }
    - { key: 'GITEA_RUNNER_REGISTRATION_TOKEN', value: '{{ gitea_runner_registration_token }}' }
    - { key: 'GITEA_RUNNER_NAME', value: '{{ gitea_runner_name }}' }
    - { key: 'GITEA_RUNNER_LABELS', value: '{{ gitea_runner_labels }}' }
  when: gitea_runner_action | default('fix') == 'register'
  no_log: true

- name: Display instructions for manual re-registration (fix action)
  ansible.builtin.debug:
    msg: |
      ========================================
      Runner Re-registration Required
      ========================================

      The runner needs to be re-registered with the correct Gitea instance URL.

      Steps to re-register:

      1. Get a new registration token from Gitea:
         {{ gitea_instance_url }}/admin/actions/runners
         Click "Create New Runner" and copy the token

      2. Update the .env file with the token:
         GITEA_RUNNER_REGISTRATION_TOKEN=<your-token>

      3. Re-register the runner:
         cd {{ gitea_runner_path }}
         ./register.sh

      Or use Ansible to set the token and register:
         ansible-playbook -i inventory/production.yml \
           playbooks/register-gitea-runner.yml \
           -e "gitea_runner_registration_token=<your-token>"

      ========================================
  when:
    - gitea_runner_action | default('fix') == 'fix'
    - runner_needs_reregistration | bool
    - gitea_runner_show_status | default(true) | bool

- name: Start runner services (register action)
  ansible.builtin.shell: |
    cd {{ gitea_runner_path }}
    docker compose up -d
  when: gitea_runner_action | default('fix') == 'register'
  register: start_services
  changed_when: start_services.rc == 0

- name: Wait for services to be ready (register action)
  ansible.builtin.pause:
    seconds: "{{ gitea_runner_wait_seconds | default(5) }}"
  when: gitea_runner_action | default('fix') == 'register'

- name: Register runner with correct Gitea instance (register action)
  ansible.builtin.shell: |
    cd {{ gitea_runner_path }}
    docker compose exec -T {{ gitea_runner_container_name }} act_runner register \
      --instance "{{ gitea_instance_url }}" \
      --token "{{ gitea_runner_registration_token }}" \
      --name "{{ gitea_runner_name }}" \
      --labels "{{ gitea_runner_labels }}"
  when: gitea_runner_action | default('fix') == 'register'
  register: register_result
  no_log: true
  changed_when: register_result.rc == 0
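# On success act_runner writes data/.runner, a small JSON file recording the
# registered instance and runner identity, roughly (a sketch — exact fields
# depend on the act_runner version):
#   {"address": "https://git.michaelschiemer.de", "name": "dev-runner-01", ...}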
- name: Display registration result (register action)
|
||||
ansible.builtin.debug:
|
||||
msg: |
|
||||
Runner Registration Result:
|
||||
{{ register_result.stdout | default('No output') }}
|
||||
when:
|
||||
- gitea_runner_action | default('fix') == 'register'
|
||||
- register_result.rc == 0
|
||||
- gitea_runner_show_status | default(true) | bool
|
||||
|
||||
- name: Verify .runner file was created (register action)
|
||||
ansible.builtin.stat:
|
||||
path: "{{ gitea_runner_path }}/data/.runner"
|
||||
register: runner_file_created
|
||||
when: gitea_runner_action | default('fix') == 'register'
|
||||
|
||||
- name: Check .runner file for correct instance URL (register action)
|
||||
ansible.builtin.shell: |
|
||||
grep -i "{{ gitea_instance_url }}" "{{ gitea_runner_path }}/data/.runner" 2>/dev/null || echo "URL_NOT_FOUND"
|
||||
register: runner_url_check
|
||||
when:
|
||||
- gitea_runner_action | default('fix') == 'register'
|
||||
- runner_file_created.stat.exists
|
||||
changed_when: false
|
||||
|
||||
- name: Check .runner file for GitHub URLs (register action)
|
||||
ansible.builtin.shell: |
|
||||
grep -i "github.com" "{{ gitea_runner_path }}/data/.runner" 2>/dev/null || echo "NO_GITHUB_URLS"
|
||||
register: runner_github_check
|
||||
when:
|
||||
- gitea_runner_action | default('fix') == 'register'
|
||||
- runner_file_created.stat.exists
|
||||
changed_when: false
|
||||
|
||||
- name: Display final status (fix action)
  ansible.builtin.debug:
    msg: |
      ========================================
      Gitea Runner Configuration Status
      ========================================
      Runner Directory: {{ gitea_runner_path }}
      Container Running: {{ 'YES' if runner_container_state.stdout == 'running' else 'NO' }}
      Runner File Exists: {{ 'YES' if runner_file_exists.stat.exists else 'NO' }}
      Contains GitHub URLs: {{ 'YES' if 'github.com' in (github_urls_check.stdout | default('')) else 'NO' }}
      .env has correct URL: {{ 'YES' if env_has_correct_url else 'NO' }}
      Re-registration Needed: {{ 'YES' if runner_needs_reregistration | bool else 'NO' }}
      ========================================

      {% if not runner_needs_reregistration | bool %}
      ✅ Runner configuration looks correct!
      {% else %}
      ⚠️ Runner needs to be re-registered with the correct Gitea URL
      {% endif %}
  when:
    - gitea_runner_action | default('fix') == 'fix'
    - gitea_runner_show_status | default(true) | bool

- name: Display final status (register action)
  ansible.builtin.debug:
    msg: |
      ========================================
      Gitea Runner Registration Status
      ========================================
      Registration: {{ '✅ SUCCESS' if register_result.rc == 0 else '❌ FAILED' }}
      Runner File Created: {{ '✅ YES' if runner_file_created.stat.exists else '❌ NO' }}
      Contains Correct URL: {{ '✅ YES' if 'URL_NOT_FOUND' not in (runner_url_check.stdout | default('URL_NOT_FOUND')) else '❌ NO' }}
      Contains GitHub URLs: {{ '❌ YES' if 'NO_GITHUB_URLS' not in (runner_github_check.stdout | default('NO_GITHUB_URLS')) else '✅ NO' }}
      ========================================

      {% if register_result.rc == 0 and runner_file_created.stat.exists %}
      ✅ Runner registered successfully with {{ gitea_instance_url }}!

      Check runner status:
        {{ gitea_instance_url }}/admin/actions/runners
      {% else %}
      ❌ Registration failed. Check logs:
        docker logs {{ gitea_runner_container_name }}
      {% endif %}
  when:
    - gitea_runner_action | default('fix') == 'register'
    - gitea_runner_show_status | default(true) | bool
287
deployment/legacy/ansible/ansible/roles/gitea/tasks/setup.yml
Normal file
@@ -0,0 +1,287 @@
---
# Setup Gitea Initial Configuration

- name: Verify Gitea container exists
  ansible.builtin.shell: |
    docker compose -f {{ gitea_stack_path }}/docker-compose.yml ps {{ gitea_container_name }} | grep -q "{{ gitea_container_name }}"
  register: gitea_exists
  changed_when: false
  failed_when: false

- name: Fail if Gitea container does not exist
  ansible.builtin.fail:
    msg: "Gitea container does not exist. Please deploy the Gitea stack first using: ansible-playbook -i inventory/production.yml playbooks/setup-infrastructure.yml --tags gitea"
  when: gitea_exists.rc != 0

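# Polls /api/healthz until it returns 200. A 404 is listed as an accepted
# status so the uri task does not fail outright while Gitea is still coming
# up; the until/retries loop keeps polling instead.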
- name: Wait for Gitea to be ready
  ansible.builtin.uri:
    url: "{{ gitea_url }}/api/healthz"
    method: GET
    status_code: [200, 404]
    validate_certs: false
    timeout: "{{ gitea_health_check_timeout | default(10) }}"
  register: gitea_health
  until: gitea_health.status == 200
  retries: "{{ gitea_setup_health_retries | default(30) }}"
  delay: "{{ gitea_setup_health_delay | default(5) }}"
  ignore_errors: yes
  changed_when: false
  when: not (gitea_force_update_app_ini | default(false) | bool)

- name: Check if Gitea is already configured
  ansible.builtin.uri:
    url: "{{ gitea_url }}"
    method: GET
    status_code: [200, 302, 502]
    validate_certs: false
    timeout: "{{ gitea_health_check_timeout | default(10) }}"
    follow_redirects: none
    return_content: yes
  register: gitea_main_check
  changed_when: false
  failed_when: false

- name: Check if app.ini exists in container
  ansible.builtin.shell: |
    docker compose -f {{ gitea_stack_path }}/docker-compose.yml exec -T {{ gitea_container_name }} test -f {{ gitea_app_ini_container_path }}
  register: gitea_app_ini_exists
  changed_when: false
  failed_when: false

- name: Check if INSTALL_LOCK is set
  ansible.builtin.shell: |
    docker compose -f {{ gitea_stack_path }}/docker-compose.yml exec -T {{ gitea_container_name }} grep -q "INSTALL_LOCK = true" {{ gitea_app_ini_container_path }} 2>/dev/null || echo "not_set"
  register: gitea_install_lock_check
  changed_when: false
  failed_when: false
  when: gitea_app_ini_exists.rc == 0

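# Setup is needed when any of the following holds: a forced update was
# requested, the main page still serves the install wizard, app.ini is
# missing, or INSTALL_LOCK is not yet set. "Already configured" is the
# conjunction of the opposite conditions with no force flag set.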
- name: Determine if Gitea needs setup
  ansible.builtin.set_fact:
    gitea_needs_setup: "{{ (gitea_force_update_app_ini | default(false) | bool) or ('installation' in (gitea_main_check.content | default('') | lower) or 'initial configuration' in (gitea_main_check.content | default('') | lower)) or (gitea_app_ini_exists.rc != 0) or (gitea_install_lock_check.stdout | default('') | trim == 'not_set') }}"
    gitea_already_configured: "{{ not (gitea_force_update_app_ini | default(false) | bool) and 'installation' not in (gitea_main_check.content | default('') | lower) and 'initial configuration' not in (gitea_main_check.content | default('') | lower) and gitea_app_ini_exists.rc == 0 and gitea_install_lock_check.stdout | default('') | trim != 'not_set' }}"

- name: Display setup status
  ansible.builtin.debug:
    msg: |
      Gitea Setup Status:
      - Main page status: {{ gitea_main_check.status }}
      - app.ini exists: {{ gitea_app_ini_exists.rc == 0 }}
      - INSTALL_LOCK set: {{ gitea_install_lock_check.stdout | default('unknown') }}
      - Force update: {{ gitea_force_update_app_ini | default(false) }}
      - Already configured: {{ gitea_already_configured }}
      - Needs setup: {{ gitea_needs_setup }}
  when: gitea_show_status | default(true) | bool

- name: Fail if admin password is not set
  ansible.builtin.fail:
    msg: |
      Gitea admin password is not set in vault.
      Please set vault_gitea_admin_password in:
      - deployment/ansible/secrets/production.vault.yml

      To set it, run:
        ansible-vault edit secrets/production.vault.yml --vault-password-file secrets/.vault_pass

      Then add:
        vault_gitea_admin_password: "your-secure-password"
  when:
    - gitea_needs_setup | bool
    - gitea_admin_password | default('') | trim == ''

- name: Get Gitea database configuration from environment
  ansible.builtin.shell: |
    docker compose -f {{ gitea_stack_path }}/docker-compose.yml exec -T {{ gitea_container_name }} env | grep -E "^GITEA__database__" || true
  register: gitea_db_env
  changed_when: false
  failed_when: false
  when: gitea_needs_setup | bool

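# regex_search with a capture group returns a list of captures (or none when
# nothing matches), so each "or ['<default>'] | first" falls back to a default
# value when the corresponding GITEA__* variable is unset in the container.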
- name: Parse database configuration
  ansible.builtin.set_fact:
    gitea_db_type: "{{ (gitea_db_env.stdout | default('') | regex_search('GITEA__database__DB_TYPE=([^\n]+)', '\\1') or ['postgres']) | first }}"
    gitea_db_host: "{{ (gitea_db_env.stdout | default('') | regex_search('GITEA__database__HOST=([^\n]+)', '\\1') or ['postgres:5432']) | first }}"
    gitea_db_name: "{{ (gitea_db_env.stdout | default('') | regex_search('GITEA__database__NAME=([^\n]+)', '\\1') or ['gitea']) | first }}"
    gitea_db_user: "{{ (gitea_db_env.stdout | default('') | regex_search('GITEA__database__USER=([^\n]+)', '\\1') or ['gitea']) | first }}"
    gitea_db_passwd: "{{ (gitea_db_env.stdout | default('') | regex_search('GITEA__database__PASSWD=([^\n]+)', '\\1') or ['gitea_password']) | first }}"
  when: gitea_needs_setup | bool

- name: Extract database host and port
  ansible.builtin.set_fact:
    gitea_db_hostname: "{{ gitea_db_host.split(':')[0] }}"
    gitea_db_port: "{{ (gitea_db_host.split(':')[1]) | default('5432') }}"
  when: gitea_needs_setup | bool

- name: Get Gitea server configuration from environment
  ansible.builtin.shell: |
    docker compose -f {{ gitea_stack_path }}/docker-compose.yml exec -T {{ gitea_container_name }} env | grep -E "^GITEA__server__" || true
  register: gitea_server_env
  changed_when: false
  failed_when: false
  when: gitea_needs_setup | bool

- name: Parse server configuration
  ansible.builtin.set_fact:
    gitea_domain_config: "{{ (gitea_server_env.stdout | default('') | regex_search('GITEA__server__DOMAIN=([^\n]+)', '\\1') or [gitea_domain]) | first }}"
    gitea_root_url: "{{ (gitea_server_env.stdout | default('') | regex_search('GITEA__server__ROOT_URL=([^\n]+)', '\\1') or ['https://' + gitea_domain + '/']) | first }}"
    gitea_ssh_domain: "{{ (gitea_server_env.stdout | default('') | regex_search('GITEA__server__SSH_DOMAIN=([^\n]+)', '\\1') or [gitea_domain]) | first }}"
    gitea_ssh_port: "{{ (gitea_server_env.stdout | default('') | regex_search('GITEA__server__SSH_PORT=([^\n]+)', '\\1') or ['2222']) | first }}"
  when: gitea_needs_setup | bool

- name: Get Gitea service configuration from environment
  ansible.builtin.shell: |
    docker compose -f {{ gitea_stack_path }}/docker-compose.yml exec -T {{ gitea_container_name }} env | grep -E "^GITEA__service__" || true
  register: gitea_service_env
  changed_when: false
  failed_when: false
  when: gitea_needs_setup | bool

- name: Parse service configuration
  ansible.builtin.set_fact:
    gitea_disable_registration: "{{ (gitea_service_env.stdout | default('') | regex_search('GITEA__service__DISABLE_REGISTRATION=([^\n]+)', '\\1') or ['true']) | first | lower }}"
  when: gitea_needs_setup | bool

- name: Generate app.ini file
  ansible.builtin.template:
    src: "{{ gitea_app_ini_template | default('../../templates/gitea-app.ini.j2') }}"
    dest: "{{ gitea_app_ini_path }}"
    mode: '0644'
  vars:
    gitea_domain: "{{ gitea_domain_config }}"
    postgres_db: "{{ gitea_db_name }}"
    postgres_user: "{{ gitea_db_user }}"
    postgres_password: "{{ gitea_db_passwd }}"
    disable_registration: "{{ gitea_disable_registration == 'true' }}"
    ssh_port: "{{ gitea_ssh_port | int }}"
    ssh_listen_port: 22
  when: gitea_needs_setup | bool

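# app.ini is rendered on the host first, then copied into the container with
# docker compose cp and re-owned to UID 1000 (the container's git user) before
# the restart below picks it up.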
- name: Copy app.ini to Gitea container
  ansible.builtin.shell: |
    docker compose -f {{ gitea_stack_path }}/docker-compose.yml cp {{ gitea_app_ini_path }} {{ gitea_container_name }}:{{ gitea_app_ini_container_path }}
  when: gitea_needs_setup | bool
  ignore_errors: yes

- name: Wait for container to be ready for exec
  ansible.builtin.shell: |
    docker compose -f {{ gitea_stack_path }}/docker-compose.yml exec -T {{ gitea_container_name }} true
  register: container_ready
  until: container_ready.rc == 0
  retries: "{{ gitea_config_retries | default(30) }}"
  delay: "{{ gitea_config_delay | default(2) }}"
  when:
    - gitea_needs_setup | bool
    - not (gitea_force_update_app_ini | default(false) | bool)
  changed_when: false
  ignore_errors: yes

- name: Set correct permissions on app.ini in container
  ansible.builtin.shell: |
    docker compose -f {{ gitea_stack_path }}/docker-compose.yml exec -T {{ gitea_container_name }} chown 1000:1000 {{ gitea_app_ini_container_path }} && \
    docker compose -f {{ gitea_stack_path }}/docker-compose.yml exec -T {{ gitea_container_name }} chmod 644 {{ gitea_app_ini_container_path }}
  when: gitea_needs_setup | bool
  ignore_errors: yes

- name: Restart Gitea container
  ansible.builtin.shell: |
    docker compose -f {{ gitea_stack_path }}/docker-compose.yml restart {{ gitea_container_name }}
  when: gitea_needs_setup | bool
  register: gitea_restart
  changed_when: gitea_restart.rc == 0
  notify: wait for gitea

- name: Wait for Gitea to be ready after restart
  ansible.builtin.uri:
    url: "{{ gitea_url }}/api/healthz"
    method: GET
    status_code: [200]
    validate_certs: false
    timeout: "{{ gitea_health_check_timeout | default(10) }}"
  register: gitea_health_after_restart
  until: gitea_health_after_restart.status == 200
  retries: "{{ gitea_restart_retries | default(30) }}"
  delay: "{{ gitea_restart_delay | default(5) }}"
  when:
    - not (gitea_force_update_app_ini | default(false) | bool)
    - gitea_restart.changed | default(false)
  changed_when: false
  ignore_errors: yes

- name: Wait for database to be initialized
  ansible.builtin.pause:
    seconds: "{{ gitea_setup_db_wait | default(10) }}"
  when: gitea_needs_setup | bool

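# The existence check tolerates failure: if the list command errors or finds
# nothing it echoes "not_found" and the create task runs; the create in turn
# treats an "already exists" error as success, so the pair stays idempotent.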
- name: Check if admin user already exists
  ansible.builtin.shell: |
    docker compose -f {{ gitea_stack_path }}/docker-compose.yml exec -T {{ gitea_container_name }} \
      gitea admin user list --admin | grep -q "{{ gitea_admin_username }}" || echo "not_found"
  register: gitea_admin_exists
  changed_when: false
  failed_when: false
  when: gitea_needs_setup | bool

- name: Create admin user
  ansible.builtin.shell: |
    docker compose -f {{ gitea_stack_path }}/docker-compose.yml exec -T --user git {{ gitea_container_name }} \
      gitea admin user create \
        --username "{{ gitea_admin_username }}" \
        --password "{{ gitea_admin_password }}" \
        --email "{{ gitea_admin_email }}" \
        --admin \
        --must-change-password=false
  register: gitea_admin_create_result
  when:
    - gitea_needs_setup | bool
    - gitea_admin_exists.stdout | default('') | trim == 'not_found'
  failed_when: gitea_admin_create_result.rc != 0 and 'already exists' not in (gitea_admin_create_result.stderr | default(''))
  no_log: true

- name: Verify Gitea is accessible
  ansible.builtin.uri:
    url: "{{ gitea_url }}"
    method: GET
    status_code: [200, 302]
    validate_certs: false
    timeout: "{{ gitea_health_check_timeout | default(10) }}"
    follow_redirects: none
  register: gitea_access_check
  when: gitea_needs_setup | bool

- name: Display success message
  ansible.builtin.debug:
    msg: |
      ========================================
      ✅ Gitea Initial Setup Complete!
      ========================================
      Configuration:
      - app.ini created with INSTALL_LOCK = true
      - Admin user created: {{ gitea_admin_username }}
      - Email: {{ gitea_admin_email }}

      Next steps:
      1. Access Gitea: {{ gitea_url }}
      2. Login with:
         - Username: {{ gitea_admin_username }}
         - Password: (from vault: vault_gitea_admin_password)
      3. Configure Gitea Actions Runner (if needed):
         - Go to: {{ gitea_url }}/admin/actions/runners
         - Get registration token
         - Register runner using: deployment/gitea-runner/register.sh
      ========================================
  when:
    - gitea_needs_setup | bool
    - gitea_show_status | default(true) | bool

- name: Display already configured message
  ansible.builtin.debug:
    msg: |
      ========================================
      ℹ️ Gitea is already configured.
      ========================================
      No setup needed. Access Gitea at: {{ gitea_url }}
      ========================================
  when:
    - gitea_already_configured | bool
    - gitea_show_status | default(true) | bool
@@ -0,0 +1,7 @@
---
minio_stack_path: "{{ stacks_base_path }}/minio"
minio_wait_timeout: "{{ wait_timeout | default(60) }}"
minio_wait_interval: 5
minio_env_template: "{{ role_path }}/../../templates/minio.env.j2"
minio_vault_file: "{{ role_path }}/../../secrets/production.vault.yml"
minio_healthcheck_enabled: false
94
deployment/legacy/ansible/ansible/roles/minio/tasks/main.yml
Normal file
@@ -0,0 +1,94 @@
---
- name: Check if MinIO vault file exists
  stat:
    path: "{{ minio_vault_file }}"
  delegate_to: localhost
  register: minio_vault_stat
  become: no

- name: Optionally load MinIO secrets from vault
  include_vars:
    file: "{{ minio_vault_file }}"
  when: minio_vault_stat.stat.exists
  no_log: yes
  delegate_to: localhost
  become: no

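# Note: lookup('password', '/dev/null') generates a fresh random password on
# every run (writing to /dev/null persists nothing), so unless
# vault_minio_root_password is set in the vault, the .env rendered below
# changes each time the role runs.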
- name: Set MinIO root password from vault or generate
  set_fact:
    minio_root_password: "{{ vault_minio_root_password | default(lookup('password', '/dev/null length=32 chars=ascii_letters,digits,punctuation')) }}"
  no_log: yes

- name: Set MinIO root user from vault or use default
  set_fact:
    minio_root_user: "{{ vault_minio_root_user | default('minioadmin') }}"
  no_log: yes

- name: Ensure MinIO stack directory exists
  file:
    path: "{{ minio_stack_path }}"
    state: directory
    mode: '0755'

- name: Create MinIO stack .env file
  template:
    src: "{{ minio_env_template }}"
    dest: "{{ minio_stack_path }}/.env"
    owner: "{{ ansible_user }}"
    group: "{{ ansible_user }}"
    mode: '0600'

- name: Deploy MinIO stack
  community.docker.docker_compose_v2:
    project_src: "{{ minio_stack_path }}"
    state: present
    pull: always
  register: minio_compose_result

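# retries below is ceil(wait_timeout / wait_interval): with the defaults
# (60s timeout, 5s interval) the status check is attempted 12 times,
# 5 seconds apart.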
- name: Check MinIO container status
  shell: |
    docker compose -f {{ minio_stack_path }}/docker-compose.yml ps minio | grep -Eiq "Up|running"
  register: minio_state
  changed_when: false
  until: minio_state.rc == 0
  retries: "{{ ((minio_wait_timeout | int) + (minio_wait_interval | int) - 1) // (minio_wait_interval | int) }}"
  delay: "{{ minio_wait_interval | int }}"
  failed_when: minio_state.rc != 0
  when: not ansible_check_mode

- name: Check MinIO logs for readiness
  shell: docker compose logs minio 2>&1 | grep -Ei "(API:|WebUI:|MinIO Object Storage Server)" || true
  args:
    chdir: "{{ minio_stack_path }}"
  register: minio_logs
  until: minio_logs.stdout != ""
  retries: 6
  delay: 10
  changed_when: false
  failed_when: false
  when: not ansible_check_mode

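# /minio/health/live is MinIO's liveness probe endpoint; the check is disabled
# by default (minio_healthcheck_enabled: false in the role defaults) and
# tolerates gateway errors so a slow start does not fail the play.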
- name: Verify MinIO health endpoint
  uri:
    url: "http://127.0.0.1:9000/minio/health/live"
    method: GET
    status_code: [200, 404, 502, 503]
    timeout: 5
  register: minio_health_check
  ignore_errors: yes
  changed_when: false
  when:
    - not ansible_check_mode
    - minio_healthcheck_enabled | bool

- name: Display MinIO status
  debug:
    msg: "MinIO health check: {{ 'SUCCESS' if minio_health_check.status == 200 else 'FAILED - Status: ' + (minio_health_check.status | string) }}"
  when:
    - not ansible_check_mode
    - minio_healthcheck_enabled | bool

- name: Record MinIO deployment facts
  set_fact:
    minio_stack_changed: "{{ minio_compose_result.changed | default(false) }}"
    minio_health_status: "{{ minio_health_check.status | default('disabled' if not minio_healthcheck_enabled else 'unknown') }}"
@@ -0,0 +1,7 @@
---
monitoring_stack_path: "{{ stacks_base_path }}/monitoring"
monitoring_wait_timeout: "{{ wait_timeout | default(60) }}"
monitoring_env_template: "{{ role_path }}/../../templates/monitoring.env.j2"
monitoring_vault_file: "{{ role_path }}/../../secrets/production.vault.yml"
# VPN IP whitelist: allow the WireGuard VPN network only (override via extra vars if needed)
monitoring_vpn_ip_whitelist: "{{ monitoring_vpn_ip_whitelist_ranges | default([wireguard_network_default | default('10.8.0.0/24')]) | join(',') }}"
@@ -0,0 +1,121 @@
---
- name: Check if monitoring vault file exists
  stat:
    path: "{{ monitoring_vault_file }}"
  delegate_to: localhost
  register: monitoring_vault_stat
  become: no
  tags:
    - monitoring

- name: Optionally load monitoring secrets from vault
  include_vars:
    file: "{{ monitoring_vault_file }}"
  when: monitoring_vault_stat.stat.exists
  no_log: yes
  delegate_to: localhost
  become: no
  ignore_errors: yes
  tags:
    - monitoring

- name: Set Grafana admin password from vault or generate
  set_fact:
    grafana_admin_password: "{{ vault_grafana_admin_password | default(lookup('password', '/dev/null length=25 chars=ascii_letters,digits')) }}"
  no_log: yes
  tags:
    - monitoring

- name: Set Prometheus password from vault or generate
  set_fact:
    prometheus_password: "{{ vault_prometheus_password | default(lookup('password', '/dev/null length=25 chars=ascii_letters,digits')) }}"
  no_log: yes
  tags:
    - monitoring

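# htpasswd -nbB prints "admin:<bcrypt-hash>" to stdout; cut keeps only the
# hash, and the next task reassembles the "user:hash" string (presumably
# consumed by a basic-auth middleware in the templated Traefik config).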
- name: Generate Prometheus BasicAuth hash
  shell: |
    docker run --rm httpd:alpine htpasswd -nbB admin "{{ prometheus_password }}" 2>/dev/null | cut -d ":" -f 2
  register: prometheus_auth_hash
  changed_when: false
  no_log: yes
  tags:
    - monitoring

- name: Set Prometheus BasicAuth string
  set_fact:
    prometheus_auth: "admin:{{ prometheus_auth_hash.stdout }}"
  no_log: yes
  tags:
    - monitoring

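# The next two facts recompute the whitelist that defaults/main.yml already
# derives; set_fact takes precedence over role defaults, so the value is
# pinned here before the Traefik middleware template below consumes it.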
- name: Build VPN IP whitelist with endpoints
  set_fact:
    monitoring_vpn_ip_whitelist_ranges: "{{ [wireguard_network_default | default('10.8.0.0/24')] }}"
  tags:
    - monitoring

- name: Set VPN IP whitelist for monitoring
  set_fact:
    monitoring_vpn_ip_whitelist: "{{ monitoring_vpn_ip_whitelist_ranges | join(',') }}"
  tags:
    - monitoring

- name: Set Traefik stack path
  set_fact:
    traefik_stack_path: "{{ stacks_base_path }}/traefik"
  tags:
    - monitoring

- name: Update Traefik middleware with dynamic VPN IPs
  template:
    src: "{{ role_path }}/../../templates/traefik-middlewares.yml.j2"
    dest: "{{ traefik_stack_path }}/dynamic/middlewares.yml"
    owner: "{{ ansible_user }}"
    group: "{{ ansible_user }}"
    mode: '0644'
  vars:
    vpn_network: "{{ wireguard_network_default | default('10.8.0.0/24') }}"
  tags:
    - monitoring

- name: Ensure monitoring stack directory exists
  file:
    path: "{{ monitoring_stack_path }}"
    state: directory
    mode: '0755'
  tags:
    - monitoring

- name: Create monitoring stack .env file
  template:
    src: "{{ monitoring_env_template }}"
    dest: "{{ monitoring_stack_path }}/.env"
    owner: "{{ ansible_user }}"
    group: "{{ ansible_user }}"
    mode: '0600'
  no_log: yes
  tags:
    - monitoring

- name: Deploy Monitoring stack
  community.docker.docker_compose_v2:
    project_src: "{{ monitoring_stack_path }}"
    state: present
    pull: always
  register: monitoring_compose_result
  tags:
    - monitoring

- name: Wait for Monitoring to be ready
  wait_for:
    timeout: "{{ monitoring_wait_timeout }}"
  when: monitoring_compose_result.changed
  tags:
    - monitoring

- name: Record monitoring deployment facts
  set_fact:
    monitoring_stack_changed: "{{ monitoring_compose_result.changed | default(false) }}"
  tags:
    - monitoring