fix(console): comprehensive TUI rendering fixes
- Fix Enter key detection: handle multiple Enter key formats (\n, \r, \r\n)
- Reduce flickering: lower render frequency from 60 FPS to 30 FPS
- Fix menu bar visibility: re-render menu bar after content to prevent overwriting
- Fix content positioning: explicit line positioning for categories and commands
- Fix line shifting: clear lines before writing, control newlines manually
- Limit visible items: prevent overflow with maxVisibleCategories/Commands
- Improve CPU usage: increase sleep interval when no events are processed

This fixes:
- Enter key not working for selection
- Severe flickering of the application
- Menu bar not visible or being overwritten
- Top half of selection list not displayed
- Lines being shifted/misaligned
210  deployment/ansible/playbooks/setup/REDEPLOY_GUIDE.md  Normal file
@@ -0,0 +1,210 @@
# Traefik/Gitea Redeploy Guide

This guide explains how to perform a clean redeployment of Traefik and Gitea stacks.

## Overview

A clean redeploy:
- Stops and removes containers (preserves volumes and SSL certificates)
- Syncs latest configurations
- Redeploys stacks with fresh containers
- Restores configurations
- Verifies service discovery

**Expected downtime**: ~2-5 minutes

## Prerequisites

- Ansible installed locally
- SSH access to production server
- Vault password file: `deployment/ansible/secrets/.vault_pass`

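
To confirm these prerequisites in one pass, a quick check along these lines can help; it is only a sketch, and the paths assume you run it from the repository root used throughout this guide:

```bash
# Optional pre-flight check (run from the repository root)
command -v ansible-playbook >/dev/null || echo "MISSING: ansible-playbook"
test -f deployment/ansible/inventory/production.yml || echo "MISSING: inventory/production.yml"
test -f deployment/ansible/secrets/.vault_pass || echo "MISSING: vault password file"
```
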
## Step-by-Step Guide

### Step 1: Backup

**Automatic backup (recommended):**
```bash
cd deployment/ansible
ansible-playbook -i inventory/production.yml \
  playbooks/maintenance/backup-before-redeploy.yml \
  --vault-password-file secrets/.vault_pass
```

**Manual backup:**
```bash
# On server
cd /home/deploy/deployment/stacks
docker compose -f gitea/docker-compose.yml exec gitea cat /data/gitea/conf/app.ini > /tmp/gitea-app.ini.backup
cp traefik/acme.json /tmp/acme.json.backup
```

### Step 2: Verify Backup

Check backup contents:
```bash
# Backup location will be shown in output
ls -lh /home/deploy/backups/redeploy-backup-*/
```

Verify:
- `acme.json` exists
- `gitea-app.ini` exists
- `gitea-volume-*.tar.gz` exists (if volumes were backed up)

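
To check the files listed above in one go, a minimal sketch (assuming the backup directory pattern shown above) could look like this:

```bash
# Pick the most recent backup directory and check the expected files
BACKUP_DIR=$(ls -dt /home/deploy/backups/redeploy-backup-*/ | head -1)
for f in acme.json gitea-app.ini; do
  test -f "${BACKUP_DIR}${f}" && echo "OK: ${f}" || echo "MISSING: ${f}"
done
ls "${BACKUP_DIR}"gitea-volume-*.tar.gz 2>/dev/null || echo "NOTE: no volume archive (may be expected)"
```
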
### Step 3: Redeploy

**With automatic backup:**
```bash
cd deployment/ansible
ansible-playbook -i inventory/production.yml \
  playbooks/setup/redeploy-traefik-gitea-clean.yml \
  --vault-password-file secrets/.vault_pass
```

**With existing backup:**
```bash
cd deployment/ansible
ansible-playbook -i inventory/production.yml \
  playbooks/setup/redeploy-traefik-gitea-clean.yml \
  --vault-password-file secrets/.vault_pass \
  -e "backup_name=redeploy-backup-1234567890" \
  -e "skip_backup=true"
```

### Step 4: Verify Deployment

**Check Gitea accessibility:**
```bash
curl -k https://git.michaelschiemer.de/api/healthz
```

**Check Traefik service discovery:**
```bash
# On server
cd /home/deploy/deployment/stacks/traefik
docker compose exec traefik traefik show providers docker | grep -i gitea
```

**Check container status:**
```bash
# On server
docker ps | grep -E "traefik|gitea"
```

### Step 5: Troubleshooting

**If Gitea is not reachable:**

1. Check Gitea logs:
```bash
cd /home/deploy/deployment/stacks/gitea
docker compose logs gitea --tail=50
```

2. Check Traefik logs:
```bash
cd /home/deploy/deployment/stacks/traefik
docker compose logs traefik --tail=50
```

3. Check service discovery:
```bash
cd /home/deploy/deployment/stacks/traefik
docker compose exec traefik traefik show providers docker
```

4. Run diagnosis:
```bash
cd deployment/ansible
ansible-playbook -i inventory/production.yml \
  playbooks/diagnose/gitea.yml \
  --vault-password-file secrets/.vault_pass
```

**If there are SSL certificate issues:**

1. Check acme.json permissions:
```bash
ls -l /home/deploy/deployment/stacks/traefik/acme.json
# Should be: -rw------- (600)
```

2. Check Traefik ACME logs:
```bash
cd /home/deploy/deployment/stacks/traefik
docker compose logs traefik | grep -i acme
```

## Rollback Procedure

If something goes wrong, roll back to the backup:

```bash
cd deployment/ansible
ansible-playbook -i inventory/production.yml \
  playbooks/maintenance/rollback-redeploy.yml \
  --vault-password-file secrets/.vault_pass \
  -e "backup_name=redeploy-backup-1234567890"
```

Replace `redeploy-backup-1234567890` with the actual backup name from Step 1.

## What Gets Preserved

- ✅ Gitea data (volumes)
- ✅ SSL certificates (acme.json)
- ✅ Gitea configuration (app.ini)
- ✅ Traefik configuration
- ✅ PostgreSQL data (if applicable)

## What Gets Recreated

- 🔄 Traefik container
- 🔄 Gitea container
- 🔄 Service discovery

## Common Issues

### Issue: Gitea returns 404 after redeploy

**Solution:**
1. Wait 1-2 minutes for service discovery
2. Restart Traefik: `cd /home/deploy/deployment/stacks/traefik && docker compose restart traefik`
3. Check if Gitea is in the traefik-public network: `docker network inspect traefik-public | grep gitea`

### Issue: SSL certificate errors

**Solution:**
1. Verify acme.json permissions: `chmod 600 /home/deploy/deployment/stacks/traefik/acme.json`
2. Check Traefik logs for ACME errors (see the combined commands below)
3. Wait 5-10 minutes for certificate renewal

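
The individual commands are covered elsewhere in this guide; for convenience, the checks for this issue can be run together on the server roughly like this:

```bash
# On server: fix permissions, then look for ACME errors in the Traefik logs
chmod 600 /home/deploy/deployment/stacks/traefik/acme.json
ls -l /home/deploy/deployment/stacks/traefik/acme.json   # should show -rw------- (600)
cd /home/deploy/deployment/stacks/traefik
docker compose logs traefik | grep -i acme
```
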
### Issue: Gitea configuration lost

**Solution:**
1. Restore from backup: `playbooks/maintenance/rollback-redeploy.yml`
2. Or manually restore app.ini:
```bash
cd /home/deploy/deployment/stacks/gitea
docker compose exec gitea sh -c "cat > /data/gitea/conf/app.ini" < /path/to/backup/gitea-app.ini
docker compose restart gitea
```

## Best Practices

1. **Always backup before redeploy** - Use automatic backup
2. **Test in staging first** - If available
3. **Monitor during deployment** - Watch logs in separate terminal
4. **Have rollback ready** - Know backup name before starting
5. **Verify after deployment** - Check all services are accessible

## Related Playbooks

- `playbooks/maintenance/backup-before-redeploy.yml` - Create backup
- `playbooks/setup/redeploy-traefik-gitea-clean.yml` - Perform redeploy
- `playbooks/maintenance/rollback-redeploy.yml` - Rollback from backup
- `playbooks/diagnose/gitea.yml` - Diagnose Gitea issues
- `playbooks/diagnose/traefik.yml` - Diagnose Traefik issues
14  deployment/ansible/playbooks/setup/gitea.yml  Normal file
@@ -0,0 +1,14 @@
---
# Setup Gitea Initial Configuration
# Wrapper Playbook for gitea role setup tasks
- hosts: production
  gather_facts: yes
  become: no
  tasks:
    - name: Include gitea setup tasks
      ansible.builtin.include_role:
        name: gitea
        tasks_from: setup
      tags:
        - gitea
        - setup
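
A typical invocation of this wrapper would presumably follow the same pattern as the other playbooks in this commit (inventory and vault paths as used in the redeploy guide above); this is an illustrative sketch rather than a documented command:

```bash
cd deployment/ansible
ansible-playbook -i inventory/production.yml \
  playbooks/setup/gitea.yml \
  --vault-password-file secrets/.vault_pass
```
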
242  deployment/ansible/playbooks/setup/infrastructure.yml  Normal file
@@ -0,0 +1,242 @@
---
- name: Deploy Infrastructure Stacks on Production Server
  hosts: production
  become: no
  gather_facts: yes

  vars:
    # All deployment variables are now defined in group_vars/production.yml
    # Variables can be overridden via -e flag if needed
    vault_file: "{{ playbook_dir }}/../secrets/production.vault.yml"

  pre_tasks:
    - name: Verify vault file exists
      ansible.builtin.stat:
        path: "{{ vault_file }}"
      register: vault_stat
      delegate_to: localhost
      become: no

    - name: Load encrypted secrets from vault
      ansible.builtin.include_vars:
        file: "{{ vault_file }}"
      when: vault_stat.stat.exists
      no_log: yes
      ignore_errors: yes
      delegate_to: localhost
      become: no

    - name: Verify vault secrets were loaded
      ansible.builtin.debug:
        msg: |
          Vault secrets loaded:
          - vault_db_password: {{ 'SET (length: ' + (vault_db_password | default('') | string | length | string) + ')' if (vault_db_password | default('') | string | trim) != '' else 'NOT SET or EMPTY' }}
          - vault_redis_password: {{ 'SET' if (vault_redis_password | default('') | string | trim) != '' else 'NOT SET' }}
          - vault_app_key: {{ 'SET' if (vault_app_key | default('') | string | trim) != '' else 'NOT SET' }}
          - vault_docker_registry_password: {{ 'SET (length: ' + (vault_docker_registry_password | default('') | string | length | string) + ')' if (vault_docker_registry_password | default('') | string | trim) != '' else 'NOT SET or EMPTY' }}
      when: vault_stat.stat.exists
      no_log: yes

    - name: Warn if vault file is missing
      ansible.builtin.debug:
        msg: "WARNING: Vault file not found at {{ vault_file }}. Some roles may fail if they require vault secrets."
      when: not vault_stat.stat.exists

  tasks:
    - name: Debug - Show variables
      debug:
        msg:
          - "stacks_base_path: {{ stacks_base_path | default('NOT SET') }}"
          - "deploy_user_home: {{ deploy_user_home | default('NOT SET') }}"
      when: true # Debugging enabled

    - name: Check if deployment stacks directory exists
      stat:
        path: "{{ stacks_base_path }}"
      register: stacks_dir

    - name: Create deployment stacks directory if it doesn't exist
      file:
        path: "{{ stacks_base_path }}"
        state: directory
        mode: '0755'
        owner: "{{ ansible_user }}"
        group: "{{ ansible_user }}"
      when: not stacks_dir.stat.exists

    - name: Ensure rsync is installed (required for synchronize)
      ansible.builtin.apt:
        name: rsync
        state: present
        update_cache: no
      become: yes

    - name: Sync infrastructure stacks to server
      synchronize:
        src: "{{ playbook_dir }}/../../stacks/"
        dest: "{{ stacks_base_path }}/"
        delete: no
        recursive: yes
        rsync_opts:
          - "--chmod=D755,F644"
          - "--exclude=.git"
          - "--exclude=*.log"
          - "--exclude=data/"
          - "--exclude=volumes/"
          - "--exclude=acme.json"
          - "--exclude=*.key"
          - "--exclude=*.pem"

    - name: Ensure executable permissions on PostgreSQL backup scripts
      file:
        path: "{{ item }}"
        mode: '0755'
      loop:
        - "{{ stacks_base_path }}/postgresql-production/scripts/backup-entrypoint.sh"
        - "{{ stacks_base_path }}/postgresql-production/scripts/backup.sh"
        - "{{ stacks_base_path }}/postgresql-production/scripts/restore.sh"
        - "{{ stacks_base_path }}/postgresql-staging/scripts/backup-entrypoint.sh"
        - "{{ stacks_base_path }}/postgresql-staging/scripts/backup.sh"
        - "{{ stacks_base_path }}/postgresql-staging/scripts/restore.sh"
      ignore_errors: yes

    - name: Ensure system packages are up to date
      include_role:
        name: system
      when: system_update_packages | bool

    # Create external networks required by all stacks
    - name: Create traefik-public network
      community.docker.docker_network:
        name: traefik-public
        driver: bridge
        state: present

    - name: Create app-internal network
      community.docker.docker_network:
        name: app-internal
        driver: bridge
        state: present

    # 1. Deploy Traefik (Reverse Proxy & SSL)
    - name: Deploy Traefik stack
      import_role:
        name: traefik

    # 2. Deploy PostgreSQL Production (Database)
    - name: Deploy PostgreSQL Production stack
      import_role:
        name: postgresql-production

    # 3. Deploy Redis (Cache & Session Store)
    - name: Deploy Redis stack
      import_role:
        name: redis

    # 4. Deploy Docker Registry (Private Registry)
    - name: Deploy Docker Registry stack
      import_role:
        name: registry

    # 5. Deploy MinIO (Object Storage)
    - name: Deploy MinIO stack
      import_role:
        name: minio

    # 6. Deploy Gitea (CRITICAL - Git Server + MySQL)
    - name: Deploy Gitea stack
      import_role:
        name: gitea

    # 7. Deploy Monitoring (Portainer + Grafana + Prometheus)
    - name: Deploy Monitoring stack
      import_role:
        name: monitoring

    # 8. Deploy Production Stack
    - name: Deploy Production Stack
      import_role:
        name: application
      vars:
        application_stack_src: "{{ playbook_dir | default(role_path + '/..') }}/../../stacks/production"
        application_stack_dest: "{{ app_stack_path | default(stacks_base_path + '/production') }}"
        application_compose_suffix: "production.yml"
        application_service_name: "php"
        application_env_template: "{{ role_path }}/../../templates/application.env.j2"
        app_env: "production"
        # Explicitly pass vault variables to the role
        vault_docker_registry_password: "{{ vault_docker_registry_password | default('') }}"
        app_domain: "michaelschiemer.de"
        app_debug: "false"
        db_name: "{{ db_name_default }}"
        db_host: "{{ db_host_default }}"

    # Verification
    - name: List all running containers
      command: >
        docker ps --format 'table {{ "{{" }}.Names{{ "}}" }}\t{{ "{{" }}.Status{{ "}}" }}\t{{ "{{" }}.Ports{{ "}}" }}'
      register: docker_ps_output

    - name: Display running containers
      debug:
        msg: "{{ docker_ps_output.stdout_lines }}"

    - name: Verify Gitea accessibility via HTTPS
      uri:
        url: "https://{{ gitea_domain }}"
        method: GET
        validate_certs: no
        status_code: 200
        timeout: 10
      register: gitea_http_check
      ignore_errors: yes

    - name: Display Gitea accessibility status
      debug:
        msg: "Gitea HTTPS check: {{ 'SUCCESS' if gitea_http_check.status == 200 else 'FAILED - Status: ' + (gitea_http_check.status|string) }}"

    # 8. Deploy Production Stack
    - name: Deploy Production Stack
      import_role:
        name: application

    - name: Display application health status
      debug:
        msg: "Application health: {{ application_health_output if application_health_output != '' else 'All services healthy or starting' }}"

    - name: Display migration result
      debug:
        msg: |
          Migration Result:
          {{ application_migration_stdout if application_migration_stdout != '' else 'Migration may have failed - check logs with: docker compose -f ' + application_stack_dest + '/docker-compose.yml logs app' }}
      when: application_stack_changed and application_run_migrations

    - name: Display application accessibility status
      debug:
        msg: >-
          Application health check: {{
          'SUCCESS (HTTP ' + (application_healthcheck_status | string) + ')'
          if application_healthcheck_status == 200 else
          'FAILED or not ready yet (HTTP ' + (application_healthcheck_status | string) + ')'
          }}
      when: application_stack_changed and application_healthcheck_url | length > 0

    - name: Summary
      debug:
        msg:
          - "=== Infrastructure Deployment Complete ==="
          - "Traefik: {{ 'Deployed' if traefik_stack_changed is defined and traefik_stack_changed else 'Already running' }}"
          - "PostgreSQL: {{ 'Deployed' if postgresql_stack_changed is defined and postgresql_stack_changed else 'Already running' }}"
          - "Redis: {{ 'Deployed' if redis_stack_changed is defined and redis_stack_changed else 'Already running' }}"
          - "Docker Registry: {{ 'Deployed' if registry_stack_changed is defined and registry_stack_changed else 'Already running' }}"
          - "MinIO: {{ 'Deployed' if minio_stack_changed is defined and minio_stack_changed else 'Already running' }}"
          - "Gitea: {{ 'Deployed' if gitea_stack_changed is defined and gitea_stack_changed else 'Already running' }}"
          - "Monitoring: {{ 'Deployed' if monitoring_stack_changed is defined and monitoring_stack_changed else 'Already running' }}"
          - "Application: {{ 'Deployed' if application_stack_changed is defined and application_stack_changed else 'Already running' }}"
          - ""
          - "Next Steps:"
          - "1. Access Gitea at: https://{{ gitea_domain }}"
          - "2. Complete Gitea setup wizard if first-time deployment"
          - "3. Navigate to Admin > Actions > Runners to get registration token"
          - "4. Continue with Phase 1 - Gitea Runner Setup"
          - "5. Access Application at: https://{{ app_domain }}"
321  deployment/ansible/playbooks/setup/redeploy-traefik-gitea-clean.yml  Normal file
@@ -0,0 +1,321 @@
---
# Clean Redeploy Traefik and Gitea Stacks
# Complete redeployment with backup, container recreation, and verification
#
# Usage:
#   # With automatic backup
#   ansible-playbook -i inventory/production.yml playbooks/setup/redeploy-traefik-gitea-clean.yml \
#     --vault-password-file secrets/.vault_pass
#
#   # With existing backup
#   ansible-playbook -i inventory/production.yml playbooks/setup/redeploy-traefik-gitea-clean.yml \
#     --vault-password-file secrets/.vault_pass \
#     -e "backup_name=redeploy-backup-1234567890" \
#     -e "skip_backup=true"

- name: Clean Redeploy Traefik and Gitea
  hosts: production
  gather_facts: yes
  become: no
  vars:
    traefik_stack_path: "{{ stacks_base_path }}/traefik"
    gitea_stack_path: "{{ stacks_base_path }}/gitea"
    gitea_url: "https://{{ gitea_domain }}"
    traefik_container_name: "traefik"
    gitea_container_name: "gitea"
    backup_base_path: "{{ backups_path | default('/home/deploy/backups') }}"
    skip_backup: "{{ skip_backup | default(false) | bool }}"
    backup_name: "{{ backup_name | default('') }}"

  tasks:
    # ========================================
    # 1. BACKUP (unless skipped)
    # ========================================
    - name: Set backup name fact
      ansible.builtin.set_fact:
        actual_backup_name: "{{ backup_name | default('redeploy-backup-' + ansible_date_time.epoch) }}"
      when: not skip_backup

    - name: Display backup note
      ansible.builtin.debug:
        msg: |
          ⚠️ NOTE: Backup should be run separately before redeploy:
          ansible-playbook -i inventory/production.yml playbooks/maintenance/backup-before-redeploy.yml \
            --vault-password-file secrets/.vault_pass \
            -e "backup_name={{ actual_backup_name }}"

          Or use existing backup with: -e "backup_name=redeploy-backup-XXXXX" -e "skip_backup=true"
      when: not skip_backup

    - name: Display redeployment plan
      ansible.builtin.debug:
        msg: |
          ================================================================================
          CLEAN REDEPLOY TRAEFIK AND GITEA
          ================================================================================

          This playbook will:
          1. ✅ Backup ({% if skip_backup %}SKIPPED{% else %}Performed{% endif %})
          2. ✅ Stop and remove Traefik containers (keeps acme.json)
          3. ✅ Stop and remove Gitea containers (keeps volumes/data)
          4. ✅ Sync latest stack configurations
          5. ✅ Redeploy Traefik stack
          6. ✅ Redeploy Gitea stack
          7. ✅ Restore Gitea configuration (app.ini)
          8. ✅ Verify service discovery
          9. ✅ Test Gitea accessibility

          ⚠️ IMPORTANT:
          - SSL certificates (acme.json) will be preserved
          - Gitea data (volumes) will be preserved
          - Only containers will be recreated
          - Expected downtime: ~2-5 minutes
          {% if not skip_backup %}
          - Backup location: {{ backup_base_path }}/{{ actual_backup_name }}
          {% endif %}

          ================================================================================

    # ========================================
    # 2. STOP AND REMOVE CONTAINERS
    # ========================================
    - name: Stop Traefik stack
      ansible.builtin.shell: |
        cd {{ traefik_stack_path }}
        docker compose down
      register: traefik_stop
      changed_when: traefik_stop.rc == 0
      failed_when: false

    - name: Remove Traefik containers (if any remain)
      ansible.builtin.shell: |
        docker ps -a --filter "name={{ traefik_container_name }}" --format "{{ '{{' }}.ID{{ '}}' }}" | xargs -r docker rm -f 2>/dev/null || true
      register: traefik_remove
      changed_when: traefik_remove.rc == 0
      failed_when: false

    - name: Stop Gitea stack (preserves volumes)
      ansible.builtin.shell: |
        cd {{ gitea_stack_path }}
        docker compose down
      register: gitea_stop
      changed_when: gitea_stop.rc == 0
      failed_when: false

    - name: Remove Gitea containers (if any remain, volumes are preserved)
      ansible.builtin.shell: |
        docker ps -a --filter "name={{ gitea_container_name }}" --format "{{ '{{' }}.ID{{ '}}' }}" | xargs -r docker rm -f 2>/dev/null || true
      register: gitea_remove
      changed_when: gitea_remove.rc == 0
      failed_when: false

    # ========================================
    # 3. SYNC CONFIGURATIONS
    # ========================================
    - name: Get stacks directory path
      ansible.builtin.set_fact:
        stacks_source_path: "{{ playbook_dir | dirname | dirname | dirname }}/stacks"
      delegate_to: localhost
      run_once: true

    - name: Sync stacks directory to production server
      ansible.builtin.synchronize:
        src: "{{ stacks_source_path }}/"
        dest: "{{ stacks_base_path }}/"
        delete: no
        recursive: yes
        rsync_opts:
          - "--chmod=D755,F644"
          - "--exclude=.git"
          - "--exclude=*.log"
          - "--exclude=data/"
          - "--exclude=volumes/"
          - "--exclude=acme.json"  # Preserve SSL certificates
          - "--exclude=*.key"
          - "--exclude=*.pem"

    # ========================================
    # 4. ENSURE ACME.JSON EXISTS
    # ========================================
    - name: Check if acme.json exists
      ansible.builtin.stat:
        path: "{{ traefik_stack_path }}/acme.json"
      register: acme_json_stat

    - name: Ensure acme.json exists and has correct permissions
      ansible.builtin.file:
        path: "{{ traefik_stack_path }}/acme.json"
        state: touch
        mode: '0600'
        owner: "{{ ansible_user }}"
        group: "{{ ansible_user }}"
      become: yes
      register: acme_json_ensure

    # ========================================
    # 5. REDEPLOY TRAEFIK
    # ========================================
    - name: Deploy Traefik stack
      community.docker.docker_compose_v2:
        project_src: "{{ traefik_stack_path }}"
        state: present
        pull: always
      register: traefik_deploy

    - name: Wait for Traefik to be ready
      ansible.builtin.shell: |
        cd {{ traefik_stack_path }}
        docker compose ps {{ traefik_container_name }} | grep -Eiq "Up|running"
      register: traefik_ready
      changed_when: false
      until: traefik_ready.rc == 0
      retries: 12
      delay: 5
      failed_when: traefik_ready.rc != 0

    # ========================================
    # 6. REDEPLOY GITEA
    # ========================================
    - name: Deploy Gitea stack
      community.docker.docker_compose_v2:
        project_src: "{{ gitea_stack_path }}"
        state: present
        pull: always
      register: gitea_deploy

    - name: Wait for Gitea to be ready
      ansible.builtin.shell: |
        cd {{ gitea_stack_path }}
        docker compose ps {{ gitea_container_name }} | grep -Eiq "Up|running"
      register: gitea_ready
      changed_when: false
      until: gitea_ready.rc == 0
      retries: 12
      delay: 5
      failed_when: gitea_ready.rc != 0

    - name: Wait for Gitea to be healthy
      ansible.builtin.shell: |
        cd {{ gitea_stack_path }}
        docker compose exec -T {{ gitea_container_name }} curl -f http://localhost:3000/api/healthz 2>&1 | grep -q "status.*pass" && echo "HEALTHY" || echo "NOT_HEALTHY"
      register: gitea_health
      changed_when: false
      until: gitea_health.stdout == "HEALTHY"
      retries: 30
      delay: 2
      failed_when: false

    # ========================================
    # 7. RESTORE GITEA CONFIGURATION
    # ========================================
    - name: Restore Gitea app.ini from backup
      ansible.builtin.shell: |
        if [ -f "{{ backup_base_path }}/{{ actual_backup_name }}/gitea-app.ini" ]; then
          cd {{ gitea_stack_path }}
          docker compose exec -T {{ gitea_container_name }} sh -c "cat > /data/gitea/conf/app.ini" < "{{ backup_base_path }}/{{ actual_backup_name }}/gitea-app.ini"
          docker compose restart {{ gitea_container_name }}
          echo "app.ini restored and Gitea restarted"
        else
          echo "No app.ini backup found, using default configuration"
        fi
      when: not skip_backup
      register: gitea_app_ini_restore
      changed_when: false
      failed_when: false

    # ========================================
    # 8. VERIFY SERVICE DISCOVERY
    # ========================================
    - name: Wait for service discovery (Traefik needs time to discover Gitea)
      ansible.builtin.pause:
        seconds: 15

    - name: Check if Gitea is in traefik-public network
      ansible.builtin.shell: |
        docker network inspect traefik-public --format '{{ '{{' }}range .Containers{{ '}}' }}{{ '{{' }}.Name{{ '}}' }} {{ '{{' }}end{{ '}}' }}' 2>/dev/null | grep -q {{ gitea_container_name }} && echo "YES" || echo "NO"
      register: gitea_in_network
      changed_when: false

    - name: Test direct connection from Traefik to Gitea
      ansible.builtin.shell: |
        cd {{ traefik_stack_path }}
        docker compose exec -T {{ traefik_container_name }} wget -qO- --timeout=5 http://{{ gitea_container_name }}:3000/api/healthz 2>&1 | head -5 || echo "CONNECTION_FAILED"
      register: traefik_gitea_direct
      changed_when: false
      failed_when: false

    # ========================================
    # 9. FINAL VERIFICATION
    # ========================================
    - name: Test Gitea via HTTPS (with retries)
      ansible.builtin.uri:
        url: "{{ gitea_url }}/api/healthz"
        method: GET
        status_code: [200]
        validate_certs: false
        timeout: 10
      register: gitea_https_test
      until: gitea_https_test.status == 200
      retries: 20
      delay: 3
      changed_when: false
      failed_when: false

    - name: Check SSL certificate status
      ansible.builtin.shell: |
        cd {{ traefik_stack_path }}
        if [ -f acme.json ] && [ -s acme.json ]; then
          echo "SSL certificates: PRESENT"
        else
          echo "SSL certificates: MISSING or EMPTY"
        fi
      register: ssl_status
      changed_when: false

    - name: Final status summary
      ansible.builtin.debug:
        msg: |
          ================================================================================
          REDEPLOYMENT SUMMARY
          ================================================================================

          Traefik:
          - Status: {{ (traefik_ready.rc == 0) | ternary('Up', 'Down') }}
          - SSL Certificates: {{ ssl_status.stdout }}

          Gitea:
          - Status: {{ (gitea_ready.rc == 0) | ternary('Up', 'Down') }}
          - Health: {% if gitea_health.stdout == 'HEALTHY' %}✅ Healthy{% else %}❌ Not Healthy{% endif %}
          - Configuration: {% if gitea_app_ini_restore.changed %}✅ Restored{% else %}ℹ️ Using default{% endif %}

          Service Discovery:
          - Gitea in network: {% if gitea_in_network.stdout == 'YES' %}✅{% else %}❌{% endif %}
          - Direct connection: {% if 'CONNECTION_FAILED' not in traefik_gitea_direct.stdout %}✅{% else %}❌{% endif %}

          Gitea Accessibility:
          {% if gitea_https_test.status == 200 %}
          ✅ Gitea is reachable via HTTPS (Status: 200)
          URL: {{ gitea_url }}
          {% else %}
          ❌ Gitea is NOT reachable via HTTPS (Status: {{ gitea_https_test.status | default('TIMEOUT') }})

          Possible causes:
          1. SSL certificate is still being generated (wait 2-5 minutes)
          2. Service discovery needs more time (wait 1-2 minutes)
          3. Network configuration issue

          Next steps:
          - Wait 2-5 minutes and test again: curl -k {{ gitea_url }}/api/healthz
          - Check Traefik logs: cd {{ traefik_stack_path }} && docker compose logs {{ traefik_container_name }} --tail=50
          - Check Gitea logs: cd {{ gitea_stack_path }} && docker compose logs {{ gitea_container_name }} --tail=50
          {% endif %}

          {% if not skip_backup %}
          Backup location: {{ backup_base_path }}/{{ actual_backup_name }}
          To rollback: ansible-playbook -i inventory/production.yml playbooks/maintenance/rollback-redeploy.yml \
            --vault-password-file secrets/.vault_pass \
            -e "backup_name={{ actual_backup_name }}"
          {% endif %}

          ================================================================================
19  deployment/ansible/playbooks/setup/ssl.yml  Normal file
@@ -0,0 +1,19 @@
---
# Setup Let's Encrypt SSL Certificates via Traefik
# Wrapper Playbook for traefik role ssl tasks
- hosts: production
  gather_facts: yes
  become: no
  vars:
    # ssl_domains and acme_email are defined in group_vars/production.yml
    # Can be overridden via -e flag if needed
    traefik_ssl_domains: "{{ ssl_domains | default([gitea_domain, app_domain]) }}"
  tasks:
    - name: Include traefik ssl tasks
      ansible.builtin.include_role:
        name: traefik
        tasks_from: ssl
      tags:
        - traefik
        - ssl
        - certificates
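
As with the other wrappers, a run would presumably look like the following; the `ssl_domains` override is optional per the comment in `vars`, and the domain list shown is only illustrative:

```bash
cd deployment/ansible
ansible-playbook -i inventory/production.yml \
  playbooks/setup/ssl.yml \
  --vault-password-file secrets/.vault_pass \
  -e '{"ssl_domains": ["git.michaelschiemer.de", "michaelschiemer.de"]}'
```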