# Ansible-Based Deployment

⚠️ **IMPORTANT:** This documentation is outdated.

**For current Ansible deployment documentation, see:**

- **[deployment/ansible/README.md](../../deployment/ansible/README.md)** - Current Ansible documentation
- **[deployment/DEPLOYMENT_COMMANDS.md](../../deployment/DEPLOYMENT_COMMANDS.md)** - Command reference

---

**Historical documentation (outdated):**

Advanced deployment with Ansible for multi-server orchestration and infrastructure as code.

## Overview

Ansible extends the script-based deployment with:

- **Multi-server orchestration** - Deploy to several servers at once
- **Infrastructure as code** - Versioned, repeatable server configuration
- **Idempotence** - Safe, repeatable runs without side effects
- **Reusable roles** - Composable configuration building blocks
- **Inventory management** - Managing multiple environments

## When to Use Ansible?

### ✅ Ansible makes sense for:

- **Multiple environments**: staging, production, DR (disaster recovery)
- **Load balancing**: several application servers behind a load balancer
- **Team collaboration**: multiple developers deploying
- **Compliance**: documented, auditable infrastructure
- **Scaling**: easily adding new servers
- **Consistency**: identical configuration across all servers

### ❌ Ansible is NOT necessary when:

- You run a single production server
- The infrastructure is small
- Docker Compose scripts are sufficient
- Deployment frequency is very low

## Installation

### Local (Control Node)

```bash
# Install Python 3 and pip
sudo apt install -y python3 python3-pip

# Install Ansible
pip3 install ansible

# Verify
ansible --version

# Install Ansible collections
ansible-galaxy collection install community.docker
ansible-galaxy collection install community.general
```

### SSH Key Setup

```bash
# Generate an SSH key (if not already present)
ssh-keygen -t ed25519 -C "ansible@deployment"

# Copy the public key to the server
ssh-copy-id -i ~/.ssh/id_ed25519.pub deploy@production-server

# Test the SSH connection
ssh deploy@production-server
```

---

## Project Structure

```
ansible/
├── ansible.cfg              # Ansible configuration
├── inventory/               # Server inventories
│   ├── production           # Production servers
│   ├── staging              # Staging servers
│   └── group_vars/          # Group variables
│       ├── all.yml          # All servers
│       ├── production.yml   # Production-specific
│       └── staging.yml      # Staging-specific
├── playbooks/               # Ansible playbooks
│   ├── site.yml             # Master playbook
│   ├── provision.yml        # Server provisioning
│   ├── deploy.yml           # Application deployment
│   ├── rollback.yml         # Deployment rollback
│   └── maintenance.yml      # Maintenance tasks
├── roles/                   # Ansible roles
│   ├── common/              # Common server setup
│   ├── docker/              # Docker installation
│   ├── nginx/               # Nginx configuration
│   ├── ssl/                 # SSL certificate management
│   ├── vault/               # Secrets management
│   └── application/         # Application deployment
├── files/                   # Static files
└── templates/               # Jinja2 templates
```

---

## Ansible Configuration

### ansible.cfg

```ini
[defaults]
inventory = inventory/production
remote_user = deploy
host_key_checking = False
retry_files_enabled = False
gathering = smart
fact_caching = jsonfile
fact_caching_connection = /tmp/ansible_facts
fact_caching_timeout = 3600
timeout = 30
pipelining = True
log_path = /var/log/ansible.log
roles_path = roles

[privilege_escalation]
become = True
become_method = sudo
become_user = root
become_ask_pass = False

[ssh_connection]
ssh_args = -o ControlMaster=auto -o ControlPersist=60s -o ServerAliveInterval=60
control_path = /tmp/ansible-ssh-%%h-%%p-%%r
```

---

## Inventory Setup

### Production Inventory

**inventory/production**:

```ini
[app_servers]
app1.yourdomain.com ansible_host=203.0.113.10
app2.yourdomain.com ansible_host=203.0.113.11

[db_servers]
db1.yourdomain.com ansible_host=203.0.113.20

[cache_servers]
redis1.yourdomain.com ansible_host=203.0.113.30

[load_balancers]
lb1.yourdomain.com ansible_host=203.0.113.5

[production:children]
app_servers
db_servers
cache_servers
load_balancers

[production:vars]
ansible_user=deploy
ansible_python_interpreter=/usr/bin/python3
```
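
The same inventory can equivalently be written in YAML, which some teams find easier to diff and template. A sketch of the structure above, abbreviated to the app servers (group layout assumed, not taken from the original repo):

```yaml
production:
  children:
    app_servers:
      hosts:
        app1.yourdomain.com:
          ansible_host: 203.0.113.10
        app2.yourdomain.com:
          ansible_host: 203.0.113.11
  vars:
    ansible_user: deploy
    ansible_python_interpreter: /usr/bin/python3
```

Point `-i` at the YAML file instead of the INI file; Ansible detects the format automatically.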

### Group Variables

**inventory/group_vars/all.yml**:
```yaml
---
# Common variables for all servers
app_name: app
app_user: www-data
app_group: www-data
app_base_dir: /var/www/app
log_dir: /var/log/app
backup_dir: /backups

# Docker
docker_compose_version: "2.20.0"

# Git
git_repo: "git@github.com:yourusername/app.git"
git_version: "HEAD"

# Timezone
server_timezone: "Europe/Berlin"
```

**inventory/group_vars/production.yml**:
```yaml
---
# Production-specific variables
# Note: "environment" is a reserved Ansible play keyword; newer Ansible
# versions warn about variables with this name.
environment: production
domain_name: yourdomain.com
app_url: "https://{{ domain_name }}"

# SSL
ssl_cert_email: admin@yourdomain.com
ssl_provider: letsencrypt

# Resources
app_memory_limit: "2g"
app_cpu_limit: "2"
worker_count: 3

# Backup
backup_retention_days: 30
backup_schedule: "0 2 * * *"

# Monitoring
prometheus_enabled: true
grafana_enabled: true
```

---

## Ansible Roles

### Role: common (basic server configuration)

**roles/common/tasks/main.yml**:
```yaml
---
- name: Update apt cache
  apt:
    update_cache: yes
    cache_valid_time: 3600

- name: Install required system packages
  apt:
    name:
      - apt-transport-https
      - ca-certificates
      - curl
      - gnupg
      - lsb-release
      - git
      - vim
      - htop
      - ufw
    state: present

- name: Set timezone
  timezone:
    name: "{{ server_timezone }}"

- name: Create application user
  user:
    name: "{{ app_user }}"
    shell: /bin/bash
    createhome: yes
    groups: sudo
    append: yes

- name: Create application directories
  file:
    path: "{{ item }}"
    state: directory
    owner: "{{ app_user }}"
    group: "{{ app_group }}"
    mode: '0755'
  loop:
    - "{{ app_base_dir }}"
    - "{{ log_dir }}"
    - "{{ backup_dir }}"
    - "/opt/vault"

- name: Configure UFW firewall
  ufw:
    rule: "{{ item.rule }}"
    port: "{{ item.port }}"
    proto: "{{ item.proto }}"
  loop:
    - { rule: 'allow', port: '22', proto: 'tcp' }
    - { rule: 'allow', port: '80', proto: 'tcp' }
    - { rule: 'allow', port: '443', proto: 'tcp' }
  notify: Enable UFW

- name: Set UFW default policies
  ufw:
    direction: "{{ item.direction }}"
    policy: "{{ item.policy }}"
  loop:
    - { direction: 'incoming', policy: 'deny' }
    - { direction: 'outgoing', policy: 'allow' }
```
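
The `notify: Enable UFW` above refers to a handler that is not shown in this document. A minimal sketch of `roles/common/handlers/main.yml` (an assumption about the missing file, not taken from the original repo):

```yaml
---
# Hypothetical handler matching the "Enable UFW" notify above
- name: Enable UFW
  ufw:
    state: enabled
```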

### Role: docker (Docker Installation)

**roles/docker/tasks/main.yml**:
```yaml
---
- name: Add Docker GPG key
  apt_key:
    url: https://download.docker.com/linux/ubuntu/gpg
    state: present

- name: Add Docker repository
  apt_repository:
    repo: "deb [arch=amd64] https://download.docker.com/linux/ubuntu {{ ansible_distribution_release }} stable"
    state: present

- name: Install Docker
  apt:
    name:
      - docker-ce
      - docker-ce-cli
      - containerd.io
      - docker-compose-plugin
    state: present
    update_cache: yes

- name: Add user to docker group
  user:
    name: "{{ app_user }}"
    groups: docker
    append: yes

- name: Ensure Docker service is running
  systemd:
    name: docker
    state: started
    enabled: yes

- name: Install Docker Python library
  pip:
    name:
      - docker
      - docker-compose
    state: present
```
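
Note that the `apt_key` module is deprecated in newer Ansible releases and the underlying `apt-key` tool is deprecated on recent Debian/Ubuntu. A common replacement (a sketch under the assumption of an Ubuntu target; adjust the keyring path to taste) stores the key under `/etc/apt/keyrings` and references it with `signed-by`:

```yaml
- name: Create keyring directory
  file:
    path: /etc/apt/keyrings
    state: directory
    mode: '0755'

- name: Download Docker GPG key
  get_url:
    url: https://download.docker.com/linux/ubuntu/gpg
    dest: /etc/apt/keyrings/docker.asc
    mode: '0644'

- name: Add Docker repository (signed-by variant)
  apt_repository:
    repo: "deb [arch=amd64 signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu {{ ansible_distribution_release }} stable"
    state: present
```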

### Role: ssl (SSL Certificate Management)

**roles/ssl/tasks/main.yml**:
```yaml
---
- name: Install Certbot
  apt:
    name:
      - certbot
      - python3-certbot-nginx
    state: present

- name: Check if certificate exists
  stat:
    path: "/etc/letsencrypt/live/{{ domain_name }}/fullchain.pem"
  register: cert_exists

- name: Obtain SSL certificate
  command: >
    certbot certonly --nginx
    -d {{ domain_name }}
    -d www.{{ domain_name }}
    --email {{ ssl_cert_email }}
    --agree-tos
    --non-interactive
  when: not cert_exists.stat.exists

- name: Copy certificates to application directory
  copy:
    src: "/etc/letsencrypt/live/{{ domain_name }}/{{ item.src }}"
    dest: "/etc/ssl/app/{{ item.dest }}"
    remote_src: yes
    mode: "{{ item.mode }}"
  loop:
    - { src: 'fullchain.pem', dest: 'cert.pem', mode: '0644' }
    - { src: 'privkey.pem', dest: 'key.pem', mode: '0600' }

- name: Setup certificate auto-renewal
  cron:
    name: "Renew SSL certificates"
    minute: "0"
    hour: "3"
    job: >
      certbot renew --quiet &&
      cp /etc/letsencrypt/live/{{ domain_name }}/fullchain.pem /etc/ssl/app/cert.pem &&
      cp /etc/letsencrypt/live/{{ domain_name }}/privkey.pem /etc/ssl/app/key.pem &&
      docker compose -f {{ app_base_dir }}/docker-compose.production.yml restart nginx
```

### Role: application (Application Deployment)

**roles/application/tasks/main.yml**:
```yaml
---
- name: Clone/update git repository
  git:
    repo: "{{ git_repo }}"
    dest: "{{ app_base_dir }}"
    version: "{{ git_version }}"
    force: yes
  become_user: "{{ app_user }}"

- name: Copy environment file
  template:
    src: env.production.j2
    dest: "{{ app_base_dir }}/.env.production"
    owner: "{{ app_user }}"
    group: "{{ app_group }}"
    mode: '0600'

- name: Install Composer dependencies
  command: docker compose -f docker-compose.production.yml run --rm php composer install --no-dev --optimize-autoloader
  args:
    chdir: "{{ app_base_dir }}"
  become_user: "{{ app_user }}"

- name: Build frontend assets
  command: "{{ item }}"
  args:
    chdir: "{{ app_base_dir }}"
  become_user: "{{ app_user }}"
  loop:
    - docker compose -f docker-compose.production.yml run --rm nodejs npm ci
    - docker compose -f docker-compose.production.yml run --rm nodejs npm run build

- name: Build Docker images
  command: docker compose -f docker-compose.production.yml build
  args:
    chdir: "{{ app_base_dir }}"
  become_user: "{{ app_user }}"

- name: Run database migrations
  command: docker compose -f docker-compose.production.yml exec -T php php console.php db:migrate
  args:
    chdir: "{{ app_base_dir }}"
  become_user: "{{ app_user }}"
  register: migration_result
  failed_when: false

- name: Start/restart Docker containers
  command: docker compose -f docker-compose.production.yml up -d
  args:
    chdir: "{{ app_base_dir }}"
  become_user: "{{ app_user }}"

- name: Wait for application to be ready
  uri:
    url: "http://localhost/health"
    status_code: 200
  register: result
  until: result.status == 200
  retries: 30
  delay: 2

- name: Run health checks
  uri:
    url: "http://localhost/health/detailed"
    return_content: yes
  register: health_check

- name: Display health check results
  debug:
    var: health_check.json
```

**roles/application/templates/env.production.j2**:
```jinja2
# Application
APP_ENV={{ environment }}
APP_DEBUG=false
APP_URL={{ app_url }}

# Database
DB_CONNECTION=mysql
DB_HOST=db
DB_PORT=3306
DB_DATABASE=app_{{ environment }}
DB_USERNAME={{ vault_db_username }}
DB_PASSWORD={{ vault_db_password }}

# Cache
CACHE_DRIVER=redis
REDIS_HOST=redis
REDIS_PORT=6379
REDIS_PASSWORD={{ vault_redis_password }}

# Queue
QUEUE_DRIVER=redis
QUEUE_CONNECTION=redis

# Vault
VAULT_ENCRYPTION_KEY={{ vault_encryption_key }}

# Admin Access
ADMIN_ALLOWED_IPS={{ admin_allowed_ips }}

# Logging
LOG_PATH={{ log_dir }}
LOG_LEVEL=info
```

---

## Playbooks

### Master Playbook

**playbooks/site.yml**:
```yaml
---
- name: Full Site Deployment
  hosts: all
  gather_facts: yes
  roles:
    - common

- name: Setup Docker
  hosts: app_servers
  roles:
    - docker

- name: Setup SSL Certificates
  hosts: app_servers
  roles:
    - ssl

- name: Deploy Application
  hosts: app_servers
  roles:
    - application

- name: Setup Monitoring
  hosts: app_servers
  roles:
    # "when" is not valid at the play level, so the condition is
    # attached to the role entry instead
    - role: monitoring
      when: prometheus_enabled | default(false)
```
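
Plays in the master playbook can additionally be tagged so that subsets can be run in isolation. A sketch (the tag name `deploy` is illustrative, not from the original repo):

```yaml
- name: Deploy Application
  hosts: app_servers
  tags: [deploy]
  roles:
    - application
```

With tags in place, `ansible-playbook playbooks/site.yml --tags deploy` runs only the tagged play.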

### Deployment Playbook

**playbooks/deploy.yml**:
```yaml
---
- name: Deploy Application Update
  hosts: app_servers
  serial: 1  # one server at a time (zero downtime)
  vars_prompt:
    - name: git_version
      prompt: "Git branch/tag/commit to deploy"
      default: "production"
      private: no

  pre_tasks:
    - name: Create backup
      command: >
        docker compose -f {{ app_base_dir }}/docker-compose.production.yml exec -T db
        mysqldump -u {{ vault_db_username }} -p{{ vault_db_password }} app_{{ environment }}
      register: backup
      changed_when: false

    - name: Save backup
      copy:
        content: "{{ backup.stdout }}"
        dest: "{{ backup_dir }}/backup_{{ ansible_date_time.iso8601_basic_short }}.sql"

  tasks:
    - name: Health check before deployment
      uri:
        url: "http://localhost/health"
        status_code: 200
      register: pre_health

    - name: Deploy application
      include_role:
        name: application

  post_tasks:
    - name: Health check after deployment
      uri:
        url: "http://localhost/health/detailed"
        return_content: yes
      register: post_health
      failed_when: post_health.json.overall_healthy != true

    # status_code: 200 already fails the task on a non-200 response,
    # so no extra failed_when is needed (a looped register exposes
    # per-item results under smoke_tests.results, not .status)
    - name: Run smoke tests
      uri:
        url: "{{ item }}"
        status_code: 200
      loop:
        - "http://localhost/health"
        - "http://localhost/metrics"
      register: smoke_tests
```

### Rollback Playbook

**playbooks/rollback.yml**:
```yaml
---
- name: Rollback Application
  hosts: app_servers
  vars_prompt:
    - name: git_version
      prompt: "Git version to rollback to"
      private: no
    - name: rollback_migrations
      prompt: "Number of migrations to rollback (0 for none)"
      default: "0"
      private: no

  tasks:
    - name: Checkout previous version
      git:
        repo: "{{ git_repo }}"
        dest: "{{ app_base_dir }}"
        version: "{{ git_version }}"
        force: yes
      become_user: "{{ app_user }}"

    - name: Rollback database migrations
      command: docker compose -f docker-compose.production.yml exec -T php php console.php db:rollback {{ rollback_migrations }}
      args:
        chdir: "{{ app_base_dir }}"
      when: rollback_migrations | int > 0

    - name: Rebuild Docker images
      command: docker compose -f docker-compose.production.yml build
      args:
        chdir: "{{ app_base_dir }}"

    - name: Restart containers
      command: docker compose -f docker-compose.production.yml up -d --force-recreate
      args:
        chdir: "{{ app_base_dir }}"

    - name: Wait for application
      uri:
        url: "http://localhost/health"
        status_code: 200
      register: result
      until: result.status == 200
      retries: 30
      delay: 2

    - name: Verify rollback
      uri:
        url: "http://localhost/health/detailed"
        return_content: yes
      register: health_check
      failed_when: health_check.json.overall_healthy != true
```

### Provisioning Playbook

**playbooks/provision.yml**:
```yaml
---
- name: Provision New Server
  hosts: all
  roles:
    - common
    - docker
    - nginx
    - ssl

  tasks:
    - name: Setup log rotation
      template:
        src: logrotate.j2
        dest: /etc/logrotate.d/app
        mode: '0644'

    - name: Setup backup cron
      cron:
        name: "Daily backup"
        minute: "0"
        hour: "2"
        job: "{{ app_base_dir }}/scripts/deployment/backup-database.sh"

    - name: Setup monitoring
      include_role:
        name: monitoring
      when: prometheus_enabled | default(false)
```

---

## Usage

### Server Provisioning (one-time)

```bash
cd ansible

# Provision all servers
ansible-playbook -i inventory/production playbooks/provision.yml

# App servers only
ansible-playbook -i inventory/production playbooks/provision.yml --limit app_servers

# With vault password
ansible-playbook -i inventory/production playbooks/provision.yml --ask-vault-pass
```

### Application Deployment

```bash
# Standard deployment
ansible-playbook -i inventory/production playbooks/deploy.yml

# Specific branch/tag
ansible-playbook -i inventory/production playbooks/deploy.yml -e "git_version=v2.1.0"

# Dry run (no changes)
ansible-playbook -i inventory/production playbooks/deploy.yml --check

# A single server only
ansible-playbook -i inventory/production playbooks/deploy.yml --limit app1.yourdomain.com
```

### Rollback

```bash
# Roll back to a specific version
ansible-playbook -i inventory/production playbooks/rollback.yml

# With migration rollback
ansible-playbook -i inventory/production playbooks/rollback.yml -e "rollback_migrations=3"
```

### Ad-hoc Commands

```bash
# Health check on all servers
ansible app_servers -i inventory/production -m uri -a "url=http://localhost/health"

# Docker container status
ansible app_servers -i inventory/production -m shell -a "cd /var/www/app && docker compose ps"

# Tail logs
ansible app_servers -i inventory/production -m shell -a "tail -20 /var/log/app/app.log"

# Restart a service
ansible app_servers -i inventory/production -m shell -a "cd /var/www/app && docker compose restart php"
```

---

## Secrets Management with Ansible Vault

### Creating a Vault

```bash
# Create a new vault file
ansible-vault create inventory/group_vars/production/vault.yml

# Content:
---
vault_db_username: app_user
vault_db_password: <strong-database-password>
vault_redis_password: <strong-redis-password>
vault_encryption_key: <vault-encryption-key>
admin_allowed_ips: "203.0.113.0/24,198.51.100.10"
```

### Using the Vault

```bash
# Deployment with vault
ansible-playbook -i inventory/production playbooks/deploy.yml --ask-vault-pass

# Or with a vault password file
echo "your-vault-password" > .vault_pass
chmod 600 .vault_pass

ansible-playbook -i inventory/production playbooks/deploy.yml --vault-password-file .vault_pass
```

### Editing the Vault

```bash
# Edit the vault
ansible-vault edit inventory/group_vars/production/vault.yml

# View the vault
ansible-vault view inventory/group_vars/production/vault.yml

# Rekey the vault
ansible-vault rekey inventory/group_vars/production/vault.yml
```
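
Individual values can also be encrypted inline with `ansible-vault encrypt_string` and pasted directly into a group_vars file, avoiding a separate vault file. The resulting entry looks roughly like this (ciphertext shortened and purely illustrative):

```yaml
vault_db_password: !vault |
  $ANSIBLE_VAULT;1.1;AES256
  62313365396662343061393464336163383764373764613633653634306231386433626436623361
  ...
```

Inline-encrypted values decrypt transparently at runtime with the same `--ask-vault-pass` or `--vault-password-file` options.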

---

## CI/CD Integration

### GitHub Actions

**.github/workflows/deploy-production.yml**:
```yaml
name: Deploy to Production

on:
  push:
    branches: [production]
  workflow_dispatch:

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3

      - name: Setup Python
        uses: actions/setup-python@v4
        with:
          python-version: '3.10'

      - name: Install Ansible
        run: |
          pip install ansible
          ansible-galaxy collection install community.docker

      - name: Setup SSH
        env:
          SSH_PRIVATE_KEY: ${{ secrets.SSH_PRIVATE_KEY }}
        run: |
          mkdir -p ~/.ssh
          echo "$SSH_PRIVATE_KEY" > ~/.ssh/id_ed25519
          chmod 600 ~/.ssh/id_ed25519
          ssh-keyscan -H production-server >> ~/.ssh/known_hosts

      - name: Deploy with Ansible
        env:
          VAULT_PASSWORD: ${{ secrets.ANSIBLE_VAULT_PASSWORD }}
        run: |
          cd ansible
          echo "$VAULT_PASSWORD" > .vault_pass
          ansible-playbook -i inventory/production playbooks/deploy.yml --vault-password-file .vault_pass
          rm .vault_pass
```

---

## Monitoring Integration

### Prometheus Metrics Collection

**roles/monitoring/tasks/main.yml**:
```yaml
---
- name: Install Prometheus Node Exporter
  apt:
    name: prometheus-node-exporter
    state: present

- name: Setup Prometheus scraping
  template:
    src: prometheus.yml.j2
    dest: /etc/prometheus/prometheus.yml
  notify: Restart Prometheus

- name: Add health check monitoring
  cron:
    name: "Health check monitoring"
    minute: "*/5"
    job: "curl -f http://localhost/health || echo 'Health check failed' | mail -s 'Production Health Alert' admin@yourdomain.com"
```
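
The `notify: Restart Prometheus` above assumes a handler that is not shown here. A minimal sketch of `roles/monitoring/handlers/main.yml` (assumed, adjust the service name to your setup):

```yaml
---
- name: Restart Prometheus
  systemd:
    name: prometheus
    state: restarted
```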

---

## Best Practices

### 1. Ensure Idempotence

```yaml
# ✅ Idempotent
- name: Ensure application directory exists
  file:
    path: "{{ app_base_dir }}"
    state: directory

# ❌ Not idempotent (always reports "changed", even when nothing changes)
- name: Create directory
  shell: mkdir -p {{ app_base_dir }}
```

### 2. Error Handling

```yaml
- name: Run migrations
  command: php console.php db:migrate
  register: migration_result
  failed_when: false

- name: Check migration result
  fail:
    msg: "Migrations failed: {{ migration_result.stderr }}"
  when: migration_result.rc != 0 and 'already applied' not in migration_result.stderr
```
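
An alternative pattern for the same goal uses Ansible's `block`/`rescue` to react to failures explicitly (a sketch; the task contents are illustrative):

```yaml
- name: Run migrations with explicit failure handling
  block:
    - name: Run migrations
      command: php console.php db:migrate
  rescue:
    - name: Report failure
      fail:
        msg: "Migrations failed on {{ inventory_hostname }}"
```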

### 3. Atomic Deployments

```yaml
# Serial deployment for zero downtime
- hosts: app_servers
  serial: 1                # one server at a time
  max_fail_percentage: 0   # stop on the first failure
```
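
`serial` also accepts a list of batch sizes, letting a rollout start cautiously and then speed up (a sketch; the batch sizes are illustrative):

```yaml
- hosts: app_servers
  serial:
    - 1        # first, a single canary server
    - "30%"    # then 30% of the remaining hosts per batch
  max_fail_percentage: 0
```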

### 4. Backup Before Deployment

```yaml
pre_tasks:
  - name: Backup database
    include_tasks: backup.yml
```

---

## Comparison: Script-Based vs. Ansible

| Feature | Script-Based | Ansible |
|---------|--------------|---------|
| **Setup complexity** | Low | Medium |
| **Multi-server** | Manual | Automatic |
| **Idempotence** | Partial | Full |
| **Rollback** | Manual | Automated |
| **Secrets management** | .env files | Ansible Vault |
| **Infrastructure as code** | Partial | Full |
| **Learning curve** | Low | Medium |
| **Best for** | Single server, simple deployments | Multiple servers, complex infrastructure |


---

## Recommendation for Your Project

### Start: script-based

1. ✅ Fast to get started
2. ✅ Easy to maintain
3. ✅ Sufficient for the initial setup
4. ✅ Docker Compose as the foundation

### Later: add Ansible when

- [ ] A second production server is added
- [ ] A staging environment is introduced
- [ ] The team grows
- [ ] Compliance requirements apply
- [ ] Multi-region deployment is planned

### Hybrid Approach (recommended)

1. **Phase 1**: Script-based deployment for the initial setup
2. **Phase 2**: Ansible for server provisioning (one-time)
3. **Phase 3**: Ansible playbooks as an option for team deployments
4. **Phase 4**: CI/CD with Ansible for automated deployments

**Both options stay available in parallel** - the team can choose!

---

## Next Steps

1. ✅ Start with script-based deployment (see DEPLOYMENT_WORKFLOW.md)
2. 📝 Prepare the Ansible structure (optional)
3. 🔄 Migrate to Ansible when needed
4. 🚀 Set up a CI/CD pipeline

The scripts keep working - Ansible is an enhancement, not a replacement!