fix: Gitea Traefik routing and connection pool optimization
Checks: Build & Deploy Image failed (Determine Build Necessity failing after 10m14s; remaining jobs skipped). Security Vulnerability Scan failed (Check for Dependency Changes failing after 11m25s; Composer Security Audit cancelled).

- Remove middleware reference from Gitea Traefik labels (caused routing issues)
- Optimize Gitea connection pool settings (MAX_IDLE_CONNS=30, authentication_timeout=180s)
- Add explicit service reference in Traefik labels
- Fix intermittent 504 timeouts by improving PostgreSQL connection handling

Fixes Gitea unreachability via git.michaelschiemer.de
Committed: 2025-11-09 14:46:15 +01:00
Parent: 85c369e846
Commit: 36ef2a1e2c
1366 changed files with 104925 additions and 28719 deletions

README.md (new file)
# PostgreSQL Staging Stack - Staging Database
## Overview
Staging PostgreSQL 16 database stack with automated backup system for testing and development.
**Features**:
- PostgreSQL 16 Alpine (lightweight, secure)
- Automated daily backups with shorter retention (3 days)
- Performance-optimized configuration (1GB memory allocation)
- Health checks and automatic recovery
- Persistent storage with named volumes
- Isolated `postgres-staging-internal` network
- Resource limits optimized for staging workloads
## Services
- **postgres-staging** - PostgreSQL 16 database server
- **postgres-staging-backup** - Automated backup service with cron scheduling
## Prerequisites
1. **Docker and Docker Compose** installed
2. **Environment file** created (`.env`)
## Configuration
### 1. Create Environment File
```bash
cp .env.example .env
```
### 2. Generate Secure Password
```bash
openssl rand -base64 32
```
Update `.env`:
```env
POSTGRES_DB=michaelschiemer_staging
POSTGRES_USER=postgres
POSTGRES_PASSWORD=<generated-password>
BACKUP_RETENTION_DAYS=3
BACKUP_SCHEDULE=0 3 * * *
```
**Note**: Staging uses shorter backup retention (3 days) and runs backups at 3 AM (1 hour after production) to reduce resource contention.
## Deployment
### Initial Setup
```bash
# Create environment file
cp .env.example .env
# Generate and set password
openssl rand -base64 32
# Update POSTGRES_PASSWORD in .env
# Start services
docker compose up -d
# Check logs
docker compose logs -f
# Verify health
docker compose ps
```
### Verify Deployment
```bash
# Check PostgreSQL is running
docker exec postgres-staging pg_isready -h postgres-staging -U postgres -d michaelschiemer_staging
# Expected: postgres-staging:5432 - accepting connections
# Check backup service
docker compose logs postgres-staging-backup
# Expected: Initial backup completed successfully
```
## Integration with Staging Application Stack
The Staging Application Stack connects to this database via the `postgres-staging-internal` network.
**Connection Configuration** in `docker-compose.staging.yml`:
```env
DB_HOST=postgres-staging
DB_PORT=5432
DB_DATABASE=michaelschiemer_staging
DB_USERNAME=postgres
DB_PASSWORD=<same-as-postgres-staging-password>
```
**Network Connection**: The Staging Application Stack must be connected to the `postgres-staging-internal` network.
## Usage
### Database Access
#### From Host Machine
```bash
# Connect to database
docker exec -it postgres-staging psql -U postgres -d michaelschiemer_staging
# Run SQL query
docker exec postgres-staging psql -U postgres -d michaelschiemer_staging -c "SELECT version();"
```
#### From Application Container
```bash
# Connection string format
postgresql://postgres:password@postgres-staging:5432/michaelschiemer_staging
```
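As a quick sketch, the same connection string can be exercised from a throwaway client container on the database network (assumes `POSTGRES_PASSWORD` is exported in your shell; the one-off container is an illustration, not part of the stack):

```bash
# Build the URL from the environment; POSTGRES_PASSWORD must be set in this shell
DB_URL="postgresql://postgres:${POSTGRES_PASSWORD}@postgres-staging:5432/michaelschiemer_staging"

# Run a one-off psql client on the isolated network and issue a trivial query
docker run --rm --network postgres-staging-internal postgres:16-alpine \
  psql "${DB_URL}" -c "SELECT 1;"
```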
### Backup Management
#### Manual Backup
```bash
# Trigger manual backup
docker exec postgres-staging-backup /scripts/backup.sh
# List backups
ls -lh backups/
# Example output:
# postgres_michaelschiemer_staging_20250130_030000.sql.gz
```
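Before relying on a dump, its gzip integrity can be verified without decompressing it to disk (the filename below is the example from above):

```bash
# gunzip -t tests the archive and exits non-zero if it is corrupt
gunzip -t backups/postgres_michaelschiemer_staging_20250130_030000.sql.gz \
  && echo "archive OK"
```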
#### Restore from Backup
```bash
# List available backups
docker exec postgres-staging-backup ls -lh /backups
# Restore specific backup
docker exec -it postgres-staging-backup /scripts/restore.sh /backups/postgres_michaelschiemer_staging_20250130_030000.sql.gz
# ⚠️ WARNING: This will DROP and RECREATE the database!
```
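A quick post-restore sanity check is to list the tables and confirm the expected schema came back:

```bash
# List all tables in the restored staging database
docker exec postgres-staging psql -U postgres -d michaelschiemer_staging -c "\dt"
```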
## Network Isolation
This stack uses its own isolated network:
- **Network**: `postgres-staging-internal`
- **Purpose**: Isolate Staging database from Production and other services
- **Access**: Only services explicitly connected to this network can access the database
**Connecting Application Stack**:
```yaml
# In docker-compose.staging.yml
networks:
  postgres-staging-internal:
    external: true
    name: postgres-staging-internal
```
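The network itself is created when this stack starts, so bring this stack up before the application stack. A minimal existence check:

```bash
# Check that the shared network already exists (it is created by this stack on `up`)
docker network inspect postgres-staging-internal >/dev/null 2>&1 \
  && echo "network present" \
  || echo "network missing"
```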
## Security
### Network Isolation
- PostgreSQL only accessible via `postgres-staging-internal` network
- No external ports exposed
- Service-to-service communication only
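One way to confirm the no-exposed-ports claim is to ask Docker for published ports; empty output means nothing is reachable from the host:

```bash
# Lists host port mappings for the container; prints nothing if none are published
docker port postgres-staging
```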
### Authentication
- Strong password required (generated with `openssl rand -base64 32`)
- No default passwords
- Password stored in environment variables only
**Note**: Staging passwords can be different from Production, but should still be secure.
## Monitoring
### Health Checks
```bash
# Check service health
docker compose ps
# Expected: Both services "healthy"
# Manual health check
docker exec postgres-staging pg_isready -U postgres -d michaelschiemer_staging
```
### Resource Usage
```bash
# Database container stats
docker stats postgres-staging --no-stream
# Disk usage
docker exec postgres-staging du -sh /var/lib/postgresql/data
```
### Logs
```bash
# PostgreSQL logs
docker compose logs postgres-staging
# Backup logs
docker compose logs postgres-staging-backup
# Real-time monitoring
docker compose logs -f
```
## Troubleshooting
### Database Won't Start
```bash
# Check logs
docker compose logs postgres-staging
# Common issues:
# 1. Invalid configuration (reports config file entries PostgreSQL rejected)
docker exec postgres-staging psql -U postgres -c "SELECT name, error FROM pg_file_settings WHERE error IS NOT NULL;"
# 2. Permission issues
docker exec postgres-staging ls -la /var/lib/postgresql/data
```
### Connection Refused from Application
```bash
# 1. Check PostgreSQL is running
docker compose ps postgres-staging
# 2. Verify network
docker network inspect postgres-staging-internal | grep postgres-staging
# 3. Check if application is connected to network
docker network inspect postgres-staging-internal | grep staging-app
```
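If the checks above look right but connections still fail, a throwaway container on the same network can probe the server directly (the one-off container is an illustration, not part of the stack):

```bash
# One-off connectivity probe from inside the isolated network
docker run --rm --network postgres-staging-internal postgres:16-alpine \
  pg_isready -h postgres-staging -p 5432
# Exit code 0 means the server is accepting connections
```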
## Differences from Production Stack
| Aspect | Production | Staging |
|--------|-----------|---------|
| **Container Name** | postgres-production | postgres-staging |
| **Network** | postgres-production-internal | postgres-staging-internal |
| **Volume** | postgres-production-data | postgres-staging-data |
| **Database** | michaelschiemer | michaelschiemer_staging |
| **Memory Limit** | 2GB | 1GB |
| **Backup Retention** | 7 days | 3 days |
| **Backup Schedule** | 2 AM | 3 AM |
## Additional Resources
- **PostgreSQL Documentation**: https://www.postgresql.org/docs/16/
- **Performance Tuning**: https://wiki.postgresql.org/wiki/Performance_Optimization
- **Backup Best Practices**: https://www.postgresql.org/docs/16/backup.html

postgresql.conf (new file)
# PostgreSQL Configuration for Production
# Optimized for 2GB memory allocation
# Connection Settings
listen_addresses = '*'
max_connections = 100
superuser_reserved_connections = 3
# Memory Settings (for 2GB allocation)
shared_buffers = 512MB
effective_cache_size = 1536MB
maintenance_work_mem = 128MB
work_mem = 5MB
# Checkpoint Settings
checkpoint_completion_target = 0.9
wal_buffers = 16MB
default_statistics_target = 100
# Query Planner
random_page_cost = 1.1
effective_io_concurrency = 200
# Write-Ahead Logging
wal_level = replica
max_wal_size = 2GB
min_wal_size = 1GB
# Logging
log_destination = 'stderr'
logging_collector = on
log_directory = 'log'
log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log'
log_rotation_age = 1d
log_rotation_size = 100MB
log_line_prefix = '%m [%p] %u@%d '
log_timezone = 'Europe/Berlin'
# What to Log
log_min_duration_statement = 1000
log_checkpoints = on
log_connections = on
log_disconnections = on
log_lock_waits = on
log_statement = 'ddl'
# Autovacuum
autovacuum = on
autovacuum_max_workers = 3
autovacuum_naptime = 1min
# Client Connection Defaults
datestyle = 'iso, mdy'
timezone = 'Europe/Berlin'
lc_messages = 'en_US.utf8'
lc_monetary = 'en_US.utf8'
lc_numeric = 'en_US.utf8'
lc_time = 'en_US.utf8'
default_text_search_config = 'pg_catalog.english'
# Lock Management
deadlock_timeout = 1s
# Statement Timeout (prevent long-running queries)
statement_timeout = 30000 # 30 seconds
# Parallel Query Execution
max_parallel_workers_per_gather = 2
max_parallel_workers = 4
max_worker_processes = 4

docker-compose.yml (new file)
services:
  # PostgreSQL Staging Database
  postgres-staging:
    image: postgres:16-alpine
    container_name: postgres-staging
    restart: unless-stopped
    networks:
      - postgres-staging-internal
    environment:
      - TZ=Europe/Berlin
      - POSTGRES_DB=${POSTGRES_DB:-michaelschiemer_staging}
      - POSTGRES_USER=${POSTGRES_USER:-postgres}
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
      - PGDATA=/var/lib/postgresql/data/pgdata
    volumes:
      - postgres-staging-data:/var/lib/postgresql/data
      - ./conf.d:/etc/postgresql/conf.d:ro
      - ./backups:/backups
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
    command:
      - postgres
      - -c
      - config_file=/etc/postgresql/conf.d/postgresql.conf
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U ${POSTGRES_USER:-postgres} -d ${POSTGRES_DB:-michaelschiemer_staging}"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 40s
    shm_size: 256mb
    deploy:
      resources:
        limits:
          memory: 1G
        reservations:
          memory: 256M

  # Automated Backup Service for Staging
  postgres-staging-backup:
    image: postgres:16-alpine
    container_name: postgres-staging-backup
    restart: unless-stopped
    networks:
      - postgres-staging-internal
    environment:
      - TZ=Europe/Berlin
      - POSTGRES_HOST=postgres-staging
      - POSTGRES_DB=${POSTGRES_DB:-michaelschiemer_staging}
      - POSTGRES_USER=${POSTGRES_USER:-postgres}
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
      - BACKUP_RETENTION_DAYS=${BACKUP_RETENTION_DAYS:-3}
      - BACKUP_SCHEDULE=${BACKUP_SCHEDULE:-0 3 * * *}
    volumes:
      - ./backups:/backups
      - ./scripts:/scripts:ro
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
    entrypoint: ["/scripts/backup-entrypoint.sh"]
    depends_on:
      postgres-staging:
        condition: service_healthy

volumes:
  postgres-staging-data:
    name: postgres-staging-data

networks:
  postgres-staging-internal:
    name: postgres-staging-internal
    driver: bridge

scripts/backup-entrypoint.sh (new file)
#!/bin/sh
set -e
echo "🔄 PostgreSQL Backup Service Starting..."
echo "📅 Backup Schedule: ${BACKUP_SCHEDULE}"
echo "🗑️ Retention: ${BACKUP_RETENTION_DAYS} days"
# Install dcron for scheduled backups
apk add --no-cache dcron
# Create cron job
echo "${BACKUP_SCHEDULE} /scripts/backup.sh >> /var/log/backup.log 2>&1" > /etc/crontabs/root
# Ensure backup directory exists
mkdir -p /backups
# Run initial backup
echo "🚀 Running initial backup..."
/scripts/backup.sh
# Start cron in foreground
echo "✅ Backup service ready - cron daemon starting"
crond -f -l 2

scripts/backup.sh (new file)
#!/bin/sh
set -e
# Configuration
BACKUP_DIR="/backups"
TIMESTAMP=$(date +%Y%m%d_%H%M%S)
BACKUP_FILE="${BACKUP_DIR}/postgres_${POSTGRES_DB}_${TIMESTAMP}.sql.gz"
RETENTION_DAYS=${BACKUP_RETENTION_DAYS:-7}
echo "======================================"
echo "PostgreSQL Backup - $(date)"
echo "======================================"
# Wait for PostgreSQL to be ready
until PGPASSWORD="${POSTGRES_PASSWORD}" pg_isready -h "${POSTGRES_HOST}" -U "${POSTGRES_USER}" -d "${POSTGRES_DB}"; do
  echo "⏳ Waiting for PostgreSQL..."
  sleep 5
done
echo "✅ PostgreSQL is ready"
# Create backup
echo "📦 Creating backup: ${BACKUP_FILE}"
PGPASSWORD="${POSTGRES_PASSWORD}" pg_dump \
  -h "${POSTGRES_HOST}" \
  -U "${POSTGRES_USER}" \
  -d "${POSTGRES_DB}" \
  --clean \
  --if-exists \
  --create \
  --no-owner \
  --no-privileges \
  | gzip > "${BACKUP_FILE}"
# Verify backup was created
if [ -f "${BACKUP_FILE}" ]; then
  BACKUP_SIZE=$(du -h "${BACKUP_FILE}" | cut -f1)
  echo "✅ Backup created successfully: ${BACKUP_SIZE}"
else
  echo "❌ Backup failed!"
  exit 1
fi
# Cleanup old backups
echo "🗑️ Cleaning up backups older than ${RETENTION_DAYS} days..."
find "${BACKUP_DIR}" -name "postgres_*.sql.gz" -type f -mtime +${RETENTION_DAYS} -delete
# List current backups
echo ""
echo "📊 Current backups:"
ls -lh "${BACKUP_DIR}"/postgres_*.sql.gz 2>/dev/null || echo "No backups found"
echo ""
echo "✅ Backup completed successfully"
echo "======================================"

scripts/restore.sh (new file)
#!/bin/sh
set -e
# Configuration
BACKUP_DIR="/backups"
if [ -z "$1" ]; then
  echo "Usage: ./restore.sh <backup-file>"
  echo ""
  echo "Available backups:"
  ls -lh "${BACKUP_DIR}"/postgres_*.sql.gz 2>/dev/null || echo "No backups found"
  exit 1
fi
BACKUP_FILE="$1"
if [ ! -f "${BACKUP_FILE}" ]; then
  echo "❌ Backup file not found: ${BACKUP_FILE}"
  exit 1
fi
echo "======================================"
echo "PostgreSQL Restore - $(date)"
echo "======================================"
echo "📦 Backup file: ${BACKUP_FILE}"
echo ""
# Wait for PostgreSQL to be ready
until PGPASSWORD="${POSTGRES_PASSWORD}" pg_isready -h "${POSTGRES_HOST}" -U "${POSTGRES_USER}" -d postgres; do
  echo "⏳ Waiting for PostgreSQL..."
  sleep 5
done
echo "✅ PostgreSQL is ready"
echo ""
# Warning
echo "⚠️ WARNING: This will DROP and RECREATE the database!"
echo "⚠️ Database: ${POSTGRES_DB}"
echo ""
echo "Press Ctrl+C to cancel, or wait 10 seconds to continue..."
sleep 10
echo ""
echo "🔄 Restoring database..."
# Restore backup
gunzip -c "${BACKUP_FILE}" | PGPASSWORD="${POSTGRES_PASSWORD}" psql \
  -h "${POSTGRES_HOST}" \
  -U "${POSTGRES_USER}" \
  -d postgres
echo ""
echo "✅ Database restored successfully"
echo "======================================"