feat: CI/CD pipeline setup complete - Ansible playbooks updated, secrets configured, workflow ready

2025-10-31 01:39:24 +01:00
parent 55c04e4fd0
commit e26eb2aa12
601 changed files with 44184 additions and 32477 deletions


@@ -0,0 +1,22 @@
# PostgreSQL Stack Configuration
# Copy this file to .env and adjust values
# Timezone
TZ=Europe/Berlin
# PostgreSQL Configuration
POSTGRES_DB=michaelschiemer
POSTGRES_USER=postgres
POSTGRES_PASSWORD=<generate-with-openssl-rand-base64-32>
# Backup Configuration
BACKUP_RETENTION_DAYS=7
BACKUP_SCHEDULE=0 2 * * *
# Connection Settings (for application integration)
# Use these in Stack 4 (Application) .env:
# DB_HOST=postgres
# DB_PORT=5432
# DB_NAME=michaelschiemer
# DB_USER=postgres
# DB_PASS=<same-as-POSTGRES_PASSWORD>


@@ -0,0 +1,681 @@
# PostgreSQL Stack - Production Database with Automated Backups
## Overview
Production-ready PostgreSQL 16 database with automated backup system and performance optimization.
**Features**:
- PostgreSQL 16 Alpine (lightweight, secure)
- Automated daily backups with configurable retention
- Performance-optimized configuration (2GB memory allocation)
- Health checks and automatic recovery
- Persistent storage with named volumes
- Isolated app-internal network
- Resource limits for stability
## Services
- **postgres** - PostgreSQL 16 database server
- **postgres-backup** - Automated backup service with cron scheduling
## Prerequisites
1. **Traefik Stack Running**
```bash
cd ../traefik
docker compose up -d
```
2. **App-Internal Network Created**
```bash
docker network create app-internal
```
(Created automatically by Stack 4 - Application)
## Configuration
### 1. Create Environment File
```bash
cp .env.example .env
```
### 2. Generate Secure Password
```bash
openssl rand -base64 32
```
Update `.env`:
```env
POSTGRES_PASSWORD=<generated-password>
```
### 3. Review Configuration
**Database Settings** (`.env`):
- `POSTGRES_DB` - Database name (default: michaelschiemer)
- `POSTGRES_USER` - Database user (default: postgres)
- `POSTGRES_PASSWORD` - Database password (REQUIRED)
**Backup Settings** (`.env`):
- `BACKUP_RETENTION_DAYS` - Keep backups for N days (default: 7)
- `BACKUP_SCHEDULE` - Cron expression (default: `0 2 * * *` = 2 AM daily)
**Performance Tuning** (`conf.d/postgresql.conf`; see the verification commands after this list):
- Optimized for 2GB memory allocation
- Connection pooling (max 100 connections)
- Write-ahead logging for reliability
- Query logging for slow queries (>1s)
- Parallel query execution enabled
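Once the stack is up, the tuned values can be confirmed on the running server with `SHOW` (a quick sanity check using the defaults from `.env`):
```bash
# Verify a few of the tuned settings are in effect
docker exec postgres psql -U postgres -d michaelschiemer -c "SHOW config_file;"
docker exec postgres psql -U postgres -d michaelschiemer -c "SHOW shared_buffers;"
docker exec postgres psql -U postgres -d michaelschiemer -c "SHOW max_connections;"
```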
## Deployment
### Initial Setup
```bash
# Create environment file
cp .env.example .env
# Generate and set password
openssl rand -base64 32
# Update POSTGRES_PASSWORD in .env
# Ensure app-internal network exists
docker network inspect app-internal || docker network create app-internal
# Start services
docker compose up -d
# Check logs
docker compose logs -f
# Verify health
docker compose ps
```
### Verify Deployment
```bash
# Check PostgreSQL is running
docker exec postgres pg_isready -U postgres -d michaelschiemer
# Expected output ends with: accepting connections
# Check backup service
docker compose logs postgres-backup
# Expected: ✅ Backup completed successfully
```
## Usage
### Database Access
#### From Host Machine
```bash
# Connect to database
docker exec -it postgres psql -U postgres -d michaelschiemer
# Run SQL query
docker exec postgres psql -U postgres -d michaelschiemer -c "SELECT version();"
```
#### From Application Container
```bash
# Connection string format
postgresql://postgres:password@postgres:5432/michaelschiemer
# Example with environment variables (Stack 4 - Application)
DB_HOST=postgres
DB_PORT=5432
DB_NAME=michaelschiemer
DB_USER=postgres
DB_PASS=<same-as-POSTGRES_PASSWORD>
```
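The same credentials can also be tested without involving the application container, by running a throwaway `psql` client attached to the `app-internal` network (a sketch; it assumes the password is exported in the current shell as `POSTGRES_PASSWORD`):
```bash
# One-off psql client on the app-internal network
docker run --rm --network app-internal \
  -e PGPASSWORD="${POSTGRES_PASSWORD}" \
  postgres:16-alpine \
  psql -h postgres -U postgres -d michaelschiemer -c "SELECT 1;"
```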
### Backup Management
#### Manual Backup
```bash
# Trigger manual backup
docker exec postgres-backup /scripts/backup.sh
# List backups
ls -lh backups/
# Example output:
# postgres_michaelschiemer_20250130_020000.sql.gz
# postgres_michaelschiemer_20250131_020000.sql.gz
```
#### Restore from Backup
```bash
# List available backups
docker exec postgres-backup ls -lh /backups
# Restore specific backup
docker exec -it postgres-backup /scripts/restore.sh /backups/postgres_michaelschiemer_20250130_020000.sql.gz
# ⚠️ WARNING: This will DROP and RECREATE the database!
# Confirm after 10 second countdown
```
#### Download Backup
```bash
# Copy backup to host
docker cp postgres-backup:/backups/postgres_michaelschiemer_20250130_020000.sql.gz ./local-backup.sql.gz
# Extract and inspect
gunzip -c local-backup.sql.gz | less
```
### Database Maintenance
#### Vacuum and Analyze
```bash
# Full vacuum (reclaims disk space, but takes an exclusive lock - run in a maintenance window)
docker exec postgres psql -U postgres -d michaelschiemer -c "VACUUM FULL ANALYZE;"
# Quick vacuum (automatic, but can run manually)
docker exec postgres psql -U postgres -d michaelschiemer -c "VACUUM ANALYZE;"
```
#### Check Database Size
```bash
docker exec postgres psql -U postgres -d michaelschiemer -c "
SELECT
pg_size_pretty(pg_database_size('michaelschiemer')) as db_size,
pg_size_pretty(pg_total_relation_size('users')) as users_table_size;
"
```
#### Connection Statistics
```bash
docker exec postgres psql -U postgres -d michaelschiemer -c "
SELECT
datname,
numbackends as connections,
xact_commit as commits,
xact_rollback as rollbacks
FROM pg_stat_database
WHERE datname = 'michaelschiemer';
"
```
### Performance Monitoring
#### Active Queries
```bash
docker exec postgres psql -U postgres -d michaelschiemer -c "
SELECT
pid,
usename,
application_name,
state,
query_start,
query
FROM pg_stat_activity
WHERE state != 'idle'
ORDER BY query_start;
"
```
#### Slow Queries
```bash
# Check PostgreSQL logs for slow queries (>1s)
docker exec postgres tail -f /var/lib/postgresql/data/pgdata/log/postgresql-*.log
```
#### Index Usage
```bash
docker exec postgres psql -U postgres -d michaelschiemer -c "
SELECT
schemaname,
tablename,
indexname,
idx_scan as index_scans,
idx_tup_read as tuples_read
FROM pg_stat_user_indexes
ORDER BY idx_scan DESC;
"
```
## Integration with Other Stacks
### Stack 4: Application
Update `deployment/stacks/application/.env`:
```env
# Database Configuration
DB_HOST=postgres
DB_PORT=5432
DB_NAME=michaelschiemer
DB_USER=postgres
DB_PASS=<same-as-postgres-stack-password>
```
**Connection Test** from Application:
```bash
# From app container
docker exec app php -r "
\$dsn = 'pgsql:host=postgres;port=5432;dbname=michaelschiemer';
\$pdo = new PDO(\$dsn, 'postgres', getenv('DB_PASS'));
echo 'Connection successful: ' . \$pdo->query('SELECT version()')->fetchColumn();
"
```
### Stack 2: Gitea (Optional PostgreSQL Backend)
If migrating Gitea from MySQL to PostgreSQL:
```env
# In deployment/stacks/gitea/.env
DB_TYPE=postgres
DB_HOST=postgres
DB_NAME=gitea
DB_USER=postgres
DB_PASS=<same-password>
```
**Note**: Requires creating separate `gitea` database:
```bash
docker exec postgres psql -U postgres -c "CREATE DATABASE gitea;"
```
## Backup & Recovery
### Automated Backup Strategy
**Schedule**: Daily at 2:00 AM (configurable via `BACKUP_SCHEDULE`)
**Retention**: 7 days (configurable via `BACKUP_RETENTION_DAYS`)
**Location**: `./backups/` directory on host
**Format**: Compressed SQL dumps (`postgres_<dbname>_<timestamp>.sql.gz`)
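To confirm which schedule the backup service actually installed, inspect the crontab written by `backup-entrypoint.sh`:
```bash
# Show the cron entry installed by the backup entrypoint
docker exec postgres-backup cat /etc/crontabs/root
# Expected: <BACKUP_SCHEDULE> /scripts/backup.sh >> /var/log/backup.log 2>&1
```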
### Manual Backup Workflow
```bash
# 1. Create manual backup
docker exec postgres-backup /scripts/backup.sh
# 2. Verify backup
ls -lh backups/
# 3. Test backup integrity (optional)
gunzip -t backups/postgres_michaelschiemer_20250130_020000.sql.gz
```
### Disaster Recovery
#### Scenario: Complete Database Loss
```bash
# 1. Stop application to prevent writes
cd ../application
docker compose stop
# 2. Remove corrupted database
cd ../postgresql
docker compose down
docker volume rm postgres-data
# 3. Recreate database
docker compose up -d
# 4. Wait for PostgreSQL to initialize
docker compose logs -f postgres
# 5. Restore from latest backup
docker exec -it postgres-backup /scripts/restore.sh /backups/postgres_michaelschiemer_<latest>.sql.gz
# 6. Verify restoration
docker exec postgres psql -U postgres -d michaelschiemer -c "\dt"
# 7. Restart application
cd ../application
docker compose start
```
#### Scenario: Point-in-Time Recovery
```bash
# 1. List available backups
docker exec postgres-backup ls -lh /backups
# 2. Choose backup timestamp
# postgres_michaelschiemer_20250130_143000.sql.gz
# 3. Restore to that point
docker exec -it postgres-backup /scripts/restore.sh /backups/postgres_michaelschiemer_20250130_143000.sql.gz
```
### Off-site Backup
**Recommended**: Copy backups to external storage
```bash
#!/bin/bash
# backup-offsite.sh - Run daily after local backup
BACKUP_DIR="./backups"
REMOTE_HOST="backup-server.example.com"
REMOTE_PATH="/backups/michaelschiemer/postgresql"
# Sync backups to remote server
rsync -avz --delete \
"${BACKUP_DIR}/" \
"${REMOTE_HOST}:${REMOTE_PATH}/"
echo "✅ Off-site backup completed"
```
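One way to schedule the off-site sync is a host crontab entry that runs after the local 2 AM backup; the script path below is illustrative and should point wherever `backup-offsite.sh` is stored:
```bash
# Host crontab entry (edit with: crontab -e) - run at 3 AM, after the local backup
0 3 * * * /opt/scripts/backup-offsite.sh >> /var/log/backup-offsite.log 2>&1
```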
**Alternative: S3 Upload**
```bash
# Using AWS CLI
aws s3 sync ./backups/ s3://my-backup-bucket/postgresql/ --delete
```
## Security
### Connection Security
**Network Isolation** (see the verification commands after these lists):
- PostgreSQL only accessible via `app-internal` network
- No external ports exposed
- Service-to-service communication only
**Authentication**:
- Strong password required (generated with `openssl rand -base64 32`)
- No default passwords
- Password stored in environment variables only
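These claims can be verified on the running host (no ports published, only the `app-internal` network attached):
```bash
# No ports should be published to the host
docker port postgres
# Expected: empty output

# PostgreSQL should only be attached to the app-internal network
docker inspect postgres --format '{{range $k, $v := .NetworkSettings.Networks}}{{$k}} {{end}}'
# Expected: app-internal
```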
### Backup Security
**Encryption** (recommended for production):
```bash
# Encrypt backup before off-site storage
gpg --symmetric --cipher-algo AES256 backups/postgres_michaelschiemer_*.sql.gz
# Decrypt when needed
gpg --decrypt backups/postgres_michaelschiemer_*.sql.gz.gpg | gunzip | psql
```
**Access Control**:
- Backup directory mounted as read-only in other containers
- Only the backup service has write access
- Host filesystem permissions: `chmod 700 backups/`
### Update Security
```bash
# Update PostgreSQL image
docker compose pull
# Recreate containers with new image
docker compose up -d
# Verify version
docker exec postgres psql -U postgres -c "SELECT version();"
```
## Monitoring
### Health Checks
```bash
# Check service health
docker compose ps
# Expected: postgres "healthy", postgres-backup "Up"
# Manual health check
docker exec postgres pg_isready -U postgres -d michaelschiemer
# Check backup service
docker compose logs postgres-backup | grep "✅ Backup completed"
```
### Resource Usage
```bash
# Database container stats
docker stats postgres --no-stream
# Expected:
# - Memory: ~200-800MB (under 2GB limit)
# - CPU: <50% sustained
# Disk usage
docker exec postgres du -sh /var/lib/postgresql/data
```
### Logs
```bash
# PostgreSQL logs
docker compose logs postgres
# Backup logs
docker compose logs postgres-backup
# Real-time monitoring
docker compose logs -f
# PostgreSQL server logs (inside container)
docker exec postgres tail -f /var/lib/postgresql/data/pgdata/log/postgresql-*.log
```
### Alerts
**Recommended Monitoring** (a minimal check-script sketch follows this list):
- Backup success/failure notifications
- Disk space warnings (>80% full)
- Connection count monitoring
- Slow query alerts
- Replication lag (if using replication)
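None of these alerts are wired up by this stack. As a starting point, a minimal host-side sketch (hypothetical `check-postgres-backups.sh`; thresholds are illustrative and GNU coreutils are assumed) could cover backup freshness and disk space:
```bash
#!/bin/bash
# check-postgres-backups.sh - hypothetical helper, not part of this stack
# Alerts if the newest backup is older than 25 hours or the backup disk is >80% full.
set -euo pipefail

BACKUP_DIR="./backups"

# Age of the newest backup
latest=$(ls -1t "${BACKUP_DIR}"/postgres_*.sql.gz 2>/dev/null | head -n1 || true)
if [ -z "${latest}" ]; then
  echo "ALERT: no backups found in ${BACKUP_DIR}"
elif [ $(( ($(date +%s) - $(stat -c %Y "${latest}")) / 3600 )) -gt 25 ]; then
  echo "ALERT: latest backup ${latest} is older than 25 hours"
fi

# Disk usage of the backup directory
usage=$(df --output=pcent "${BACKUP_DIR}" | tail -n1 | tr -dc '0-9')
if [ "${usage}" -gt 80 ]; then
  echo "ALERT: backup disk is ${usage}% full"
fi
```
Its output can be forwarded to whatever notification channel is already in place (mail, webhook, etc.).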
## Troubleshooting
### Database Won't Start
```bash
# Check logs
docker compose logs postgres
# Common issues:
# 1. Invalid configuration (once the server accepts connections, list rejected settings)
docker exec postgres psql -U postgres -c "SELECT name, error FROM pg_file_settings WHERE error IS NOT NULL;"
# 2. Corrupted data directory
docker compose down
docker volume rm postgres-data
docker compose up -d
# 3. Permission issues
docker exec postgres ls -la /var/lib/postgresql/data
```
### Backup Failures
```bash
# Check backup service logs
docker compose logs postgres-backup
# Common issues:
# 1. Disk full
df -h
# 2. Connection to PostgreSQL failed
docker exec postgres-backup pg_isready -h postgres -U postgres
# 3. Manual backup test
docker exec postgres-backup /scripts/backup.sh
```
### Connection Refused from Application
```bash
# 1. Check PostgreSQL is running
docker compose ps postgres
# 2. Verify network
docker network inspect app-internal | grep postgres
# 3. Test connection
docker exec app nc -zv postgres 5432
# 4. Check credentials
docker exec app printenv | grep DB_
```
### Slow Queries
```bash
# Enable extended logging
docker exec postgres psql -U postgres -c "ALTER SYSTEM SET log_min_duration_statement = 500;"
docker compose restart postgres
# Check for missing indexes
docker exec postgres psql -U postgres -d michaelschiemer -c "
SELECT
schemaname,
tablename,
attname,
n_distinct,
correlation
FROM pg_stats
WHERE schemaname = 'public'
ORDER BY correlation;
"
```
### Out of Disk Space
```bash
# Check disk usage
df -h
# Check database size
docker exec postgres psql -U postgres -d michaelschiemer -c "
SELECT pg_size_pretty(pg_database_size('michaelschiemer'));
"
# Vacuum to reclaim space
docker exec postgres psql -U postgres -d michaelschiemer -c "VACUUM FULL;"
# Clean old backups manually
find ./backups -name "*.sql.gz" -mtime +30 -delete
```
## Performance Tuning
### Current Configuration (2GB Memory)
**`conf.d/postgresql.conf`** optimized for:
- **Memory**: 2GB allocated (512MB shared buffers, 1.5GB effective cache)
- **Connections**: 100 max connections
- **Workers**: 4 parallel workers
- **Checkpoint**: 2GB max WAL size
### Scaling Up (4GB+ Memory)
```ini
# conf.d/postgresql.conf
shared_buffers = 1GB # 25% of RAM
effective_cache_size = 3GB # 75% of RAM
maintenance_work_mem = 256MB
work_mem = 10MB
max_connections = 200
max_parallel_workers = 8
```
### Query Optimization
```bash
# Analyze query performance
docker exec postgres psql -U postgres -d michaelschiemer -c "
EXPLAIN ANALYZE
SELECT * FROM users WHERE email = 'test@example.com';
"
# Create index for frequently queried columns
docker exec postgres psql -U postgres -d michaelschiemer -c "
CREATE INDEX idx_users_email ON users(email);
"
```
### Connection Pooling
**Recommended**: Use PgBouncer for connection pooling in high-traffic scenarios
```yaml
# Add to docker-compose.yml
pgbouncer:
image: pgbouncer/pgbouncer:latest
environment:
- DATABASES_HOST=postgres
- DATABASES_PORT=5432
- DATABASES_DBNAME=michaelschiemer
- PGBOUNCER_POOL_MODE=transaction
- PGBOUNCER_MAX_CLIENT_CONN=1000
- PGBOUNCER_DEFAULT_POOL_SIZE=25
```
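With PgBouncer in place, the application would point `DB_HOST` at the `pgbouncer` service instead of `postgres`. A quick reachability check from the app container, assuming the container listens on PgBouncer's default port 6432 (verify against the image's documentation):
```bash
# Check PgBouncer is reachable from the application container
docker exec app nc -zv pgbouncer 6432
```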
## Upgrading PostgreSQL
### Minor Version Upgrade (e.g., 16.0 → 16.1)
```bash
# Pull latest 16.x image
docker compose pull
# Recreate container
docker compose up -d
# Verify version
docker exec postgres psql -U postgres -c "SELECT version();"
```
### Major Version Upgrade (e.g., 16 → 17)
```bash
# 1. Create full backup
docker exec postgres-backup /scripts/backup.sh
# 2. Stop services
docker compose down
# 3. Update docker-compose.yml
# Change: postgres:16-alpine → postgres:17-alpine
# 4. Remove old data volume
docker volume rm postgres-data
# 5. Start new version
docker compose up -d
# 6. Restore data
docker exec -it postgres-backup /scripts/restore.sh /backups/postgres_michaelschiemer_<latest>.sql.gz
```
## Additional Resources
- **PostgreSQL Documentation**: https://www.postgresql.org/docs/16/
- **Performance Tuning**: https://wiki.postgresql.org/wiki/Performance_Optimization
- **Backup Best Practices**: https://www.postgresql.org/docs/16/backup.html
- **Security Hardening**: https://www.postgresql.org/docs/16/security.html


@@ -0,0 +1,70 @@
# PostgreSQL Configuration for Production
# Optimized for 2GB memory allocation
# Connection Settings
listen_addresses = '*'
max_connections = 100
superuser_reserved_connections = 3
# Memory Settings (for 2GB allocation)
shared_buffers = 512MB
effective_cache_size = 1536MB
maintenance_work_mem = 128MB
work_mem = 5MB
# Checkpoint Settings
checkpoint_completion_target = 0.9
wal_buffers = 16MB
default_statistics_target = 100
# Query Planner
random_page_cost = 1.1
effective_io_concurrency = 200
# Write-Ahead Logging
wal_level = replica
max_wal_size = 2GB
min_wal_size = 1GB
# Logging
log_destination = 'stderr'
logging_collector = on
log_directory = 'log'
log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log'
log_rotation_age = 1d
log_rotation_size = 100MB
log_line_prefix = '%m [%p] %u@%d '
log_timezone = 'Europe/Berlin'
# What to Log
log_min_duration_statement = 1000
log_checkpoints = on
log_connections = on
log_disconnections = on
log_lock_waits = on
log_statement = 'ddl'
# Autovacuum
autovacuum = on
autovacuum_max_workers = 3
autovacuum_naptime = 1min
# Client Connection Defaults
datestyle = 'iso, mdy'
timezone = 'Europe/Berlin'
lc_messages = 'en_US.utf8'
lc_monetary = 'en_US.utf8'
lc_numeric = 'en_US.utf8'
lc_time = 'en_US.utf8'
default_text_search_config = 'pg_catalog.english'
# Lock Management
deadlock_timeout = 1s
# Statement Timeout (prevent long-running queries)
statement_timeout = 30000 # 30 seconds
# Parallel Query Execution
max_parallel_workers_per_gather = 2
max_parallel_workers = 4
max_worker_processes = 4


@@ -0,0 +1,71 @@
version: '3.8'

services:
  # PostgreSQL Database
  postgres:
    image: postgres:16-alpine
    container_name: postgres
    restart: unless-stopped
    networks:
      - app-internal
    environment:
      - TZ=Europe/Berlin
      - POSTGRES_DB=${POSTGRES_DB:-michaelschiemer}
      - POSTGRES_USER=${POSTGRES_USER:-postgres}
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
      - PGDATA=/var/lib/postgresql/data/pgdata
    volumes:
      - postgres-data:/var/lib/postgresql/data
      - ./conf.d:/etc/postgresql/conf.d:ro
      - ./backups:/backups
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
    command: >
      postgres
      -c config_file=/etc/postgresql/conf.d/postgresql.conf
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U ${POSTGRES_USER:-postgres} -d ${POSTGRES_DB:-michaelschiemer}"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 40s
    shm_size: 256mb
    deploy:
      resources:
        limits:
          memory: 2G
        reservations:
          memory: 512M

  # Automated Backup Service
  postgres-backup:
    image: postgres:16-alpine
    container_name: postgres-backup
    restart: unless-stopped
    networks:
      - app-internal
    environment:
      - TZ=Europe/Berlin
      - POSTGRES_HOST=postgres
      - POSTGRES_DB=${POSTGRES_DB:-michaelschiemer}
      - POSTGRES_USER=${POSTGRES_USER:-postgres}
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
      - BACKUP_RETENTION_DAYS=${BACKUP_RETENTION_DAYS:-7}
      - BACKUP_SCHEDULE=${BACKUP_SCHEDULE:-0 2 * * *}
    volumes:
      - ./backups:/backups
      - ./scripts:/scripts:ro
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
    entrypoint: ["/scripts/backup-entrypoint.sh"]
    depends_on:
      postgres:
        condition: service_healthy

volumes:
  postgres-data:
    name: postgres-data

networks:
  app-internal:
    external: true


@@ -0,0 +1,23 @@
#!/bin/sh
set -e
echo "🔄 PostgreSQL Backup Service Starting..."
echo "📅 Backup Schedule: ${BACKUP_SCHEDULE}"
echo "🗑️ Retention: ${BACKUP_RETENTION_DAYS} days"
# Install dcron for scheduled backups
apk add --no-cache dcron
# Create cron job
echo "${BACKUP_SCHEDULE} /scripts/backup.sh >> /var/log/backup.log 2>&1" > /etc/crontabs/root
# Ensure backup directory exists
mkdir -p /backups
# Run initial backup
echo "🚀 Running initial backup..."
/scripts/backup.sh
# Start cron in foreground
echo "✅ Backup service ready - cron daemon starting"
crond -f -l 2


@@ -0,0 +1,55 @@
#!/bin/sh
set -e
# Configuration
BACKUP_DIR="/backups"
TIMESTAMP=$(date +%Y%m%d_%H%M%S)
BACKUP_FILE="${BACKUP_DIR}/postgres_${POSTGRES_DB}_${TIMESTAMP}.sql.gz"
RETENTION_DAYS=${BACKUP_RETENTION_DAYS:-7}
echo "======================================"
echo "PostgreSQL Backup - $(date)"
echo "======================================"
# Wait for PostgreSQL to be ready
until PGPASSWORD="${POSTGRES_PASSWORD}" pg_isready -h "${POSTGRES_HOST}" -U "${POSTGRES_USER}" -d "${POSTGRES_DB}"; do
  echo "⏳ Waiting for PostgreSQL..."
  sleep 5
done
echo "✅ PostgreSQL is ready"
# Create backup
echo "📦 Creating backup: ${BACKUP_FILE}"
PGPASSWORD="${POSTGRES_PASSWORD}" pg_dump \
  -h "${POSTGRES_HOST}" \
  -U "${POSTGRES_USER}" \
  -d "${POSTGRES_DB}" \
  --clean \
  --if-exists \
  --create \
  --no-owner \
  --no-privileges \
  | gzip > "${BACKUP_FILE}"
# Verify backup was created
if [ -f "${BACKUP_FILE}" ]; then
  BACKUP_SIZE=$(du -h "${BACKUP_FILE}" | cut -f1)
  echo "✅ Backup created successfully: ${BACKUP_SIZE}"
else
  echo "❌ Backup failed!"
  exit 1
fi
# Cleanup old backups
echo "🗑️ Cleaning up backups older than ${RETENTION_DAYS} days..."
find "${BACKUP_DIR}" -name "postgres_*.sql.gz" -type f -mtime +${RETENTION_DAYS} -delete
# List current backups
echo ""
echo "📊 Current backups:"
ls -lh "${BACKUP_DIR}"/postgres_*.sql.gz 2>/dev/null || echo "No backups found"
echo ""
echo "✅ Backup completed successfully"
echo "======================================"


@@ -0,0 +1,55 @@
#!/bin/sh
set -e
# Configuration
BACKUP_DIR="/backups"
if [ -z "$1" ]; then
  echo "Usage: ./restore.sh <backup-file>"
  echo ""
  echo "Available backups:"
  ls -lh "${BACKUP_DIR}"/postgres_*.sql.gz 2>/dev/null || echo "No backups found"
  exit 1
fi
BACKUP_FILE="$1"
if [ ! -f "${BACKUP_FILE}" ]; then
  echo "❌ Backup file not found: ${BACKUP_FILE}"
  exit 1
fi
echo "======================================"
echo "PostgreSQL Restore - $(date)"
echo "======================================"
echo "📦 Backup file: ${BACKUP_FILE}"
echo ""
# Wait for PostgreSQL to be ready
until PGPASSWORD="${POSTGRES_PASSWORD}" pg_isready -h "${POSTGRES_HOST}" -U "${POSTGRES_USER}" -d postgres; do
  echo "⏳ Waiting for PostgreSQL..."
  sleep 5
done
echo "✅ PostgreSQL is ready"
echo ""
# Warning
echo "⚠️ WARNING: This will DROP and RECREATE the database!"
echo "⚠️ Database: ${POSTGRES_DB}"
echo ""
echo "Press Ctrl+C to cancel, or wait 10 seconds to continue..."
sleep 10
echo ""
echo "🔄 Restoring database..."
# Restore backup
gunzip -c "${BACKUP_FILE}" | PGPASSWORD="${POSTGRES_PASSWORD}" psql \
  -h "${POSTGRES_HOST}" \
  -U "${POSTGRES_USER}" \
  -d postgres
echo ""
echo "✅ Database restored successfully"
echo "======================================"