fix: Gitea Traefik routing and connection pool optimization

- Remove middleware reference from Gitea Traefik labels (caused routing issues)
- Optimize Gitea connection pool settings (MAX_IDLE_CONNS=30, authentication_timeout=180s)
- Add explicit service reference in Traefik labels
- Fix intermittent 504 timeouts by improving PostgreSQL connection handling

Fixes Gitea unreachability via git.michaelschiemer.de
Date: 2025-11-09 14:46:15 +01:00
Parent: 85c369e846
Commit: 36ef2a1e2c
1366 changed files with 104925 additions and 28719 deletions


@@ -0,0 +1,257 @@
# PostgreSQL Production Stack - Production Database
## Overview
Production-ready PostgreSQL 16 database stack with automated backup system and performance optimization.
**Features**:
- PostgreSQL 16 Alpine (lightweight, secure)
- Automated daily backups with configurable retention
- Performance-optimized configuration (2GB memory allocation)
- Health checks and automatic recovery
- Persistent storage with named volumes
- Isolated `postgres-production-internal` network
- Resource limits for stability
## Services
- **postgres-production** - PostgreSQL 16 database server
- **postgres-production-backup** - Automated backup service with cron scheduling
## Prerequisites
1. **Docker and Docker Compose** installed
2. **Environment file** created (`.env`)
## Configuration
### 1. Create Environment File
```bash
cp .env.example .env
```
### 2. Generate Secure Password
```bash
openssl rand -base64 32
```
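Steps 1 and 2 can be combined; a sketch that writes the generated password straight into an env file (shown against a demo file under `/tmp` so it cannot clobber a real `.env`):

```shell
# Demo env file; point ENV_FILE at the real .env when doing this for real.
ENV_FILE=/tmp/demo.env
printf 'POSTGRES_PASSWORD=placeholder\n' > "$ENV_FILE"

# base64 output never contains '|', so it is safe as-is in this sed expression
PASS=$(openssl rand -base64 32)
sed -i "s|^POSTGRES_PASSWORD=.*|POSTGRES_PASSWORD=${PASS}|" "$ENV_FILE"
grep -c '^POSTGRES_PASSWORD=' "$ENV_FILE"   # → 1
```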
Update `.env`:
```env
POSTGRES_DB=michaelschiemer
POSTGRES_USER=postgres
POSTGRES_PASSWORD=<generated-password>
BACKUP_RETENTION_DAYS=7
BACKUP_SCHEDULE=0 2 * * *
```
## Deployment
### Initial Setup
```bash
# Create environment file
cp .env.example .env
# Generate and set password
openssl rand -base64 32
# Update POSTGRES_PASSWORD in .env
# Start services
docker compose up -d
# Check logs
docker compose logs -f
# Verify health
docker compose ps
```
### Verify Deployment
```bash
# Check PostgreSQL is running
docker exec postgres-production pg_isready -U postgres -d michaelschiemer
# Expected output ends with: "accepting connections"
# Check backup service
docker compose logs postgres-production-backup
# Expected to include: "✅ Backup completed successfully"
```
## Integration with Production Application Stack
The Production Application Stack connects to this database via the `postgres-production-internal` network.
**Connection Configuration** in `deployment/stacks/production/.env`:
```env
DB_HOST=postgres-production
DB_PORT=5432
DB_DATABASE=michaelschiemer
DB_USERNAME=postgres
DB_PASSWORD=<same-as-postgres-production-password>
```
**Network Connection**: The Production Application Stack must be connected to the `postgres-production-internal` network.
## Usage
### Database Access
#### From Host Machine
```bash
# Connect to database
docker exec -it postgres-production psql -U postgres -d michaelschiemer
# Run SQL query
docker exec postgres-production psql -U postgres -d michaelschiemer -c "SELECT version();"
```
#### From Application Container
```bash
# Connection string format
postgresql://postgres:password@postgres-production:5432/michaelschiemer
```
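The pieces of that URI map one-to-one onto the `.env` values above; a self-contained sketch (the password is a placeholder here, the real value lives only in `.env`):

```shell
# Assemble the libpq connection URI from the same variables the app stack uses.
DB_HOST=postgres-production
DB_PORT=5432
DB_DATABASE=michaelschiemer
DB_USERNAME=postgres
DB_PASSWORD=changeme
DATABASE_URL="postgresql://${DB_USERNAME}:${DB_PASSWORD}@${DB_HOST}:${DB_PORT}/${DB_DATABASE}"
echo "$DATABASE_URL"
# → postgresql://postgres:changeme@postgres-production:5432/michaelschiemer
```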
### Backup Management
#### Manual Backup
```bash
# Trigger manual backup
docker exec postgres-production-backup /scripts/backup.sh
# List backups
ls -lh backups/
# Example output:
# postgres_michaelschiemer_20250130_020000.sql.gz
```
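Before trusting a backup for restore, it is worth checking the archive's integrity. A self-contained sketch (it builds a tiny stand-in archive so the commands run as-is; point `BACKUP` at a real file under `backups/` in practice):

```shell
# Stand-in archive for demonstration; with a real backup, skip the printf
# line and set BACKUP to the actual file under backups/.
BACKUP=/tmp/postgres_demo.sql.gz
printf 'SELECT 1;\n' | gzip > "$BACKUP"

gunzip -t "$BACKUP" && echo "archive OK"   # non-zero exit would mean corruption
gunzip -c "$BACKUP" | head -n 5            # peek at the SQL inside
```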
#### Restore from Backup
```bash
# List available backups
docker exec postgres-production-backup ls -lh /backups
# Restore specific backup
docker exec -it postgres-production-backup /scripts/restore.sh /backups/postgres_michaelschiemer_20250130_020000.sql.gz
# ⚠️ WARNING: This will DROP and RECREATE the database!
```
## Network Isolation
This stack uses its own isolated network:
- **Network**: `postgres-production-internal`
- **Purpose**: Isolate Production database from other services
- **Access**: Only services explicitly connected to this network can access the database
**Connecting Application Stack**:
```yaml
# In deployment/stacks/production/docker-compose.production.yml
networks:
  postgres-production-internal:
    external: true
    name: postgres-production-internal
```
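Note that declaring the external network at the top level is only half of it; the application service itself must also join the network (the service name `app` is illustrative):

```yaml
services:
  app:
    networks:
      - postgres-production-internal
```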
## Security
### Network Isolation
- PostgreSQL only accessible via `postgres-production-internal` network
- No external ports exposed
- Service-to-service communication only
### Authentication
- Strong password required (generated with `openssl rand -base64 32`)
- No default passwords
- Password stored in environment variables only
## Monitoring
### Health Checks
```bash
# Check service health
docker compose ps
# Expected: Both services "healthy"
# Manual health check
docker exec postgres-production pg_isready -U postgres -d michaelschiemer
```
### Resource Usage
```bash
# Database container stats
docker stats postgres-production --no-stream
# Disk usage
docker exec postgres-production du -sh /var/lib/postgresql/data
```
### Logs
```bash
# PostgreSQL logs
docker compose logs postgres-production
# Backup logs
docker compose logs postgres-production-backup
# Real-time monitoring
docker compose logs -f
```
## Troubleshooting
### Database Won't Start
```bash
# Check logs
docker compose logs postgres-production
# Common issues:
# 1. Invalid configuration (check the log output above for FATAL lines;
#    once the server is up, pg_file_settings lists settings PostgreSQL could not apply)
docker exec postgres-production psql -U postgres -d michaelschiemer -c "SELECT name, error FROM pg_file_settings WHERE error IS NOT NULL;"
# 2. Permission issues
docker exec postgres-production ls -la /var/lib/postgresql/data
```
### Connection Refused from Application
```bash
# 1. Check PostgreSQL is running
docker compose ps postgres-production
# 2. Verify network
docker network inspect postgres-production-internal | grep postgres-production
# 3. Check if application is connected to network
docker network inspect postgres-production-internal | grep app
```
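The backup container runs the same `postgres:16-alpine` image on the same internal network, so it ships the client tools needed to probe the server directly (container and service names as defined in this stack):

```shell
# 4. Test connectivity from inside the database network
docker exec postgres-production-backup pg_isready -h postgres-production -p 5432
# 5. If name resolution fails, list the containers attached to the network
docker network inspect postgres-production-internal --format '{{range .Containers}}{{.Name}} {{end}}'
```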
## Differences from Staging Stack
| Aspect | Production | Staging |
|--------|-----------|---------|
| **Container Name** | postgres-production | postgres-staging |
| **Network** | postgres-production-internal | postgres-staging-internal |
| **Volume** | postgres-production-data | postgres-staging-data |
| **Database** | michaelschiemer | michaelschiemer_staging |
| **Backup Retention** | 7 days (configurable) | 7 days (configurable) |
## Additional Resources
- **PostgreSQL Documentation**: https://www.postgresql.org/docs/16/
- **Performance Tuning**: https://wiki.postgresql.org/wiki/Performance_Optimization
- **Backup Best Practices**: https://www.postgresql.org/docs/16/backup.html


@@ -0,0 +1,70 @@
# PostgreSQL Configuration for Production
# Optimized for 2GB memory allocation
# Connection Settings
listen_addresses = '*'
max_connections = 100
superuser_reserved_connections = 3
# Memory Settings (for 2GB allocation)
shared_buffers = 512MB
effective_cache_size = 1536MB
maintenance_work_mem = 128MB
work_mem = 5MB
# Checkpoint Settings
checkpoint_completion_target = 0.9
wal_buffers = 16MB
# Query Planner
default_statistics_target = 100
random_page_cost = 1.1
effective_io_concurrency = 200
# Write-Ahead Logging
wal_level = replica
max_wal_size = 2GB
min_wal_size = 1GB
# Logging
log_destination = 'stderr'
logging_collector = on
log_directory = 'log'
log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log'
log_rotation_age = 1d
log_rotation_size = 100MB
log_line_prefix = '%m [%p] %u@%d '
log_timezone = 'Europe/Berlin'
# What to Log
log_min_duration_statement = 1000
log_checkpoints = on
log_connections = on
log_disconnections = on
log_lock_waits = on
log_statement = 'ddl'
# Autovacuum
autovacuum = on
autovacuum_max_workers = 3
autovacuum_naptime = 1min
# Client Connection Defaults
datestyle = 'iso, mdy'
timezone = 'Europe/Berlin'
lc_messages = 'en_US.utf8'
lc_monetary = 'en_US.utf8'
lc_numeric = 'en_US.utf8'
lc_time = 'en_US.utf8'
default_text_search_config = 'pg_catalog.english'
# Lock Management
deadlock_timeout = 1s
# Statement Timeout (prevent long-running queries)
statement_timeout = 30000 # 30 seconds
# Parallel Query Execution
max_parallel_workers_per_gather = 2
max_parallel_workers = 4
max_worker_processes = 4


@@ -0,0 +1,72 @@
services:
  # PostgreSQL Production Database
  postgres-production:
    image: postgres:16-alpine
    container_name: postgres-production
    restart: unless-stopped
    networks:
      - postgres-production-internal
    environment:
      - TZ=Europe/Berlin
      - POSTGRES_DB=${POSTGRES_DB:-michaelschiemer}
      - POSTGRES_USER=${POSTGRES_USER:-postgres}
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
      - PGDATA=/var/lib/postgresql/data/pgdata
    volumes:
      - postgres-production-data:/var/lib/postgresql/data
      - ./conf.d:/etc/postgresql/conf.d:ro
      - ./backups:/backups
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
    command:
      - postgres
      - -c
      - config_file=/etc/postgresql/conf.d/postgresql.conf
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U ${POSTGRES_USER:-postgres} -d ${POSTGRES_DB:-michaelschiemer}"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 40s
    shm_size: 256mb
    deploy:
      resources:
        limits:
          memory: 2G
        reservations:
          memory: 512M

  # Automated Backup Service for Production
  postgres-production-backup:
    image: postgres:16-alpine
    container_name: postgres-production-backup
    restart: unless-stopped
    networks:
      - postgres-production-internal
    environment:
      - TZ=Europe/Berlin
      - POSTGRES_HOST=postgres-production
      - POSTGRES_DB=${POSTGRES_DB:-michaelschiemer}
      - POSTGRES_USER=${POSTGRES_USER:-postgres}
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
      - BACKUP_RETENTION_DAYS=${BACKUP_RETENTION_DAYS:-7}
      - BACKUP_SCHEDULE=${BACKUP_SCHEDULE:-0 2 * * *}
    volumes:
      - ./backups:/backups
      - ./scripts:/scripts:ro
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
    entrypoint: ["/scripts/backup-entrypoint.sh"]
    depends_on:
      postgres-production:
        condition: service_healthy

volumes:
  postgres-production-data:
    name: postgres-production-data

networks:
  postgres-production-internal:
    name: postgres-production-internal
    driver: bridge


@@ -0,0 +1,23 @@
#!/bin/sh
set -e
echo "🔄 PostgreSQL Backup Service Starting..."
echo "📅 Backup Schedule: ${BACKUP_SCHEDULE}"
echo "🗑️ Retention: ${BACKUP_RETENTION_DAYS} days"
# Install dcron (BusyBox cron) for scheduled backups
apk add --no-cache dcron
# Create cron job
echo "${BACKUP_SCHEDULE} /scripts/backup.sh >> /var/log/backup.log 2>&1" > /etc/crontabs/root
# Ensure backup directory exists
mkdir -p /backups
# Run initial backup
echo "🚀 Running initial backup..."
/scripts/backup.sh
# Start cron in foreground
echo "✅ Backup service ready - cron daemon starting"
crond -f -l 2


@@ -0,0 +1,55 @@
#!/bin/sh
set -e
# BusyBox ash supports pipefail; without it a failed pg_dump inside the
# gzip pipeline below would go unnoticed
set -o pipefail
# Configuration
BACKUP_DIR="/backups"
TIMESTAMP=$(date +%Y%m%d_%H%M%S)
BACKUP_FILE="${BACKUP_DIR}/postgres_${POSTGRES_DB}_${TIMESTAMP}.sql.gz"
RETENTION_DAYS=${BACKUP_RETENTION_DAYS:-7}
echo "======================================"
echo "PostgreSQL Backup - $(date)"
echo "======================================"
# Wait for PostgreSQL to be ready
until PGPASSWORD="${POSTGRES_PASSWORD}" pg_isready -h "${POSTGRES_HOST}" -U "${POSTGRES_USER}" -d "${POSTGRES_DB}"; do
    echo "⏳ Waiting for PostgreSQL..."
    sleep 5
done
echo "✅ PostgreSQL is ready"
# Create backup
echo "📦 Creating backup: ${BACKUP_FILE}"
PGPASSWORD="${POSTGRES_PASSWORD}" pg_dump \
    -h "${POSTGRES_HOST}" \
    -U "${POSTGRES_USER}" \
    -d "${POSTGRES_DB}" \
    --clean \
    --if-exists \
    --create \
    --no-owner \
    --no-privileges \
    | gzip > "${BACKUP_FILE}"
# Verify backup was created and is non-empty (-f alone would pass even if
# pg_dump produced an empty archive)
if [ -s "${BACKUP_FILE}" ]; then
    BACKUP_SIZE=$(du -h "${BACKUP_FILE}" | cut -f1)
    echo "✅ Backup created successfully: ${BACKUP_SIZE}"
else
    echo "❌ Backup failed!"
    exit 1
fi
# Cleanup old backups
echo "🗑️ Cleaning up backups older than ${RETENTION_DAYS} days..."
find "${BACKUP_DIR}" -name "postgres_*.sql.gz" -type f -mtime +${RETENTION_DAYS} -delete
# List current backups
echo ""
echo "📊 Current backups:"
ls -lh "${BACKUP_DIR}"/postgres_*.sql.gz 2>/dev/null || echo "No backups found"
echo ""
echo "✅ Backup completed successfully"
echo "======================================"


@@ -0,0 +1,55 @@
#!/bin/sh
set -e
# Configuration
BACKUP_DIR="/backups"
if [ -z "$1" ]; then
    echo "Usage: ./restore.sh <backup-file>"
    echo ""
    echo "Available backups:"
    ls -lh "${BACKUP_DIR}"/postgres_*.sql.gz 2>/dev/null || echo "No backups found"
    exit 1
fi
BACKUP_FILE="$1"
if [ ! -f "${BACKUP_FILE}" ]; then
    echo "❌ Backup file not found: ${BACKUP_FILE}"
    exit 1
fi
echo "======================================"
echo "PostgreSQL Restore - $(date)"
echo "======================================"
echo "📦 Backup file: ${BACKUP_FILE}"
echo ""
# Wait for PostgreSQL to be ready
until PGPASSWORD="${POSTGRES_PASSWORD}" pg_isready -h "${POSTGRES_HOST}" -U "${POSTGRES_USER}" -d postgres; do
    echo "⏳ Waiting for PostgreSQL..."
    sleep 5
done
echo "✅ PostgreSQL is ready"
echo ""
# Warning
echo "⚠️ WARNING: This will DROP and RECREATE the database!"
echo "⚠️ Database: ${POSTGRES_DB}"
echo ""
echo "Press Ctrl+C to cancel, or wait 10 seconds to continue..."
sleep 10
echo ""
echo "🔄 Restoring database..."
# Restore backup
gunzip -c "${BACKUP_FILE}" | PGPASSWORD="${POSTGRES_PASSWORD}" psql \
    -h "${POSTGRES_HOST}" \
    -U "${POSTGRES_USER}" \
    -d postgres
echo ""
echo "✅ Database restored successfully"
echo "======================================"